A Case Gone Wrong
The recent story of the attorney facing disciplinary proceedings for submitting an AI-written brief that cited fake cases is already infamous among lawyers. In brief, Steven Schwartz, a New York lawyer handling a personal injury case against an airline, asked ChatGPT to draft a brief explaining why his client's case should not be dismissed on statute of limitations grounds. ChatGPT generated a brief complete with citations to cases with names like Varghese v. China Southern Airlines and Martinez v. Delta Air Lines. The problem was, those cases did not exist. They were entirely made up by ChatGPT.
Mr. Schwartz apologized to the court on June 9, stating he did not understand that ChatGPT would simply make things up. He said he had believed, based on conversations with his adult children, that ChatGPT was a very powerful search engine that would find more cases than the typical databases. He apologized repeatedly for not checking that the cases were real and showed the court printouts of his “conversation” with ChatGPT, in which he asked it to confirm the cases it cited were real (which ChatGPT did).
Can ChatGPT Be Your Lawyer?
Since early spring 2023, lawyers have been discussing the role of new artificial intelligence programs in the practice of law. With the newer versions of large language model (LLM) AI now able to pass the bar exam, there is even talk of AI replacing lawyers. The unfortunate case of Mr. Schwartz should give anyone making that argument some pause. A lawyer with a firm grasp of legal ethics would no doubt tell you that even if Mr. Schwartz wanted to use AI to generate a first draft of a brief, he was obligated to review it himself and confirm that the research it cited was accurate before submitting it to the court. But at a more fundamental level, lawyers need to understand what ChatGPT is and what it does before they can ever hope to use it ethically.
The first problem is with the term “artificial intelligence,” which suggests to the average listener that LLM AI like ChatGPT is capable of thought. The way a user interacts with these programs certainly suggests they engage in “thought” like a human being, given their ability to hold an intelligible conversation with a person, but that is not how they operate. LLMs are built by processing massive amounts of plain language (by “scraping” the internet, that is, reviewing essentially everything available to them online). They then use the prompt given to them by the user (“write me a sonnet about a bullfrog in the style of Shakespeare,” or “write me a legal brief opposing a motion to dismiss a personal injury case against an airline as untimely under the statute of limitations, relying on New York federal case law”) to generate a plain language response, predicting one word at a time what text is most likely to satisfy the prompt.
This leads to the much bigger problem: LLM AI was not designed to determine what is “true” or “accurate.” It decides what to tell you based on its predictions of how words are used. If the answer it generates “sounds” like something a legal brief would say (i.e., it uses legal terms correctly, the citations are formatted and placed correctly, and it addresses the subject matter requested), then the AI is functioning as intended. To the LLM, that answer is a good one, even if it is full of fake cases and made-up statements of law. With that understanding, the idea of using AI to generate a brief and submit it to the court without careful review should alarm any attorney.
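To make that mechanic concrete, the short Python sketch below builds a crude next-word predictor from a handful of invented legal-sounding sentences. It is a deliberate toy, not a real LLM, and the “Smith v. Acme Airlines” citation in its tiny training text is fictional, but it illustrates the same basic principle on a small scale: the program strings together words that tend to follow one another in its training text, with no check on whether the resulting sentences, or the cases they cite, are real.

```python
# Toy illustration only: a tiny next-word predictor built from bigram counts.
# The training sentences and the "Smith v. Acme Airlines" citation are invented.
import random
from collections import defaultdict, Counter

# A miniature "training corpus" of legal-sounding text (entirely made up).
corpus = (
    "the court held that the claim was timely filed . "
    "the court held that the motion to dismiss is denied . "
    "see Smith v. Acme Airlines , 123 F.3d 456 ( 2d Cir. 1999 ) . "
    "the statute of limitations does not bar the claim ."
)

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word][next_word] += 1

def generate(start: str, length: int = 15) -> str:
    """Generate text by repeatedly picking a statistically likely next word.

    The model has no notion of whether the holdings or citations it strings
    together are real; it only knows which words tend to follow which.
    """
    out = [start]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        next_word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        out.append(next_word)
    return " ".join(out)

print(generate("the"))
# The output reads like legal prose and may even reproduce the fictional
# "Smith v. Acme Airlines" citation, but the model has no way to know
# whether anything it says is true.
```

A real LLM replaces these simple word counts with a neural network trained on vastly more text, which is why its output is so much more convincing, but the goal is the same: produce text that looks right, not text that has been verified as true.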
Ethically Using AI in the Practice of Law
None of this means AI will never have useful applications in law. It may even take over certain legal tasks. It might prove highly adept at performing first-level document review, generating standard contract terms, or drafting basic discovery demands, all of which would greatly streamline legal work for attorneys and their clients. But attorneys cannot ethically hand entire tasks over to AI and simply sign off on the result. Anything generated by AI still needs to be carefully reviewed and checked for completeness and accuracy before it is served on the other side or filed with the court.
When and whether it is appropriate to use AI in handling a specific matter is a question lawyers must consider carefully and discuss fully with their clients. New technologies have a role in the practice of law, but they are no substitute for the experience, wise judgment, and expertise good counsel can provide their clients.