Limits of AI in the Law

by Tyler Kaestner

Artificial intelligence (“AI”) systems have captured the attention and fascination of many; with the launch of the revolutionary “ChatGPT,” AI has become more accessible than ever. Taking the next step in that evolution, the tech company OpenAI has already outdone its introductory GPT model with “GPT-4.” “GPT-4 is an updated version of the company’s large language model, which is trained on vast amounts of online data to generate complex responses to user prompts.”1 This ability to “generate complex responses” is what is meant by “generative AI.” The technology has wide-reaching applications, including integration into search engines, language-learning programs, and the legal field.2

            This base GPT-4 technology has quickly made its way into program offerings by legal research companies, including Casetext and LexisNexis.3 Casetext’s “CoCounsel” program promises to aid in legal research and tasks, positioning itself as a premier “legal AI” tool. The program adapts general AI technology to law practice by pairing it with a vast database of case law and legal information, and by allowing lawyers to upload documents for the program to review. These tech companies are also developing ways to bring legal tools to non-lawyers; one company, for example, is looking to build a “one-click lawsuit” for people to use in situations such as receiving an unwanted “robocall.”4

            While AI might inspire visions of Transformers-style robots and their fight scenes, there is certainly as much peril in these systems as there is spectacle. Yes, AI technology is rapidly improving, and it can be quite impressive, but it is also imperfect. Current AI technologies are still riddled with errors that prevent them from operating at the level of reliability required in the practice of law. For example, in a highly publicized case, a law firm in New York was sanctioned by a court for including several fictitious cases in a brief submitted to the court.5 In their defense, the lawyers said that they used ChatGPT to help with their legal research, and that the fake cases were supplied by the AI system. Not only that, but in an effort to double-check the results, one of the lawyers asked the chatbot whether the cases were real before including them in his brief. The chatbot again asserted that the cases were real and could be found on legal databases, even representing them as being authored by real judges!6

            Yes, that means that ChatGPT completely made up court cases, yet it was sophisticated (if it were a person, we might say devious) enough to incorporate real elements into them. The newer version, GPT-4, is said to be “60% less likely to make stuff up,” but that still leaves far too large a margin of error; the program can still fabricate information outright. This raises major legal ethics concerns. Given the potential for AI systems to generate false information, even the co-founder of Casetext, the company that created the “legal AI” assistant, urges that the program “still requires attorney oversight.”7 Lastly, there is the fact that powering these systems requires lawyers to disclose sensitive personal information, which should otherwise be kept strictly confidential, to the AI software.3 With tech companies like OpenAI keeping their data-processing mechanisms secret, there is no telling where a person’s information might end up, or how a lawyer could control where it goes.

            In sum, there is reason to be excited about the prospects of AI for personal use, and there may be ways a person finds it makes life easier. But in the context of the practice of law, there is still too much room for error, and too much risk of violating legal ethics, for it to be relied on. The trained hand of an attorney is still required to meet the needs of their clients.

1 https://www.cnn.com/2023/03/16/tech/gpt-4-use-cases/index.html

2 https://www.theguardian.com/technology/2023/mar/15/what-is-gpt-4-and-how-does-it-differ-from-chatgpt

3 https://www.abajournal.com/columns/article/the-future-is-now-the-rise-of-ai-powered-legal-assistants

4 https://www.cnn.com/2023/03/16/tech/gpt-4-use-cases/index.html

5 https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/

6 https://www.bbc.com/news/world-us-canada-65735769

7 https://law.stanford.edu/2023/04/19/gpt-4-passes-the-bar-exam-what-that-means-for-artificial-intelligence-tools-in-the-legal-industry/