1. Mata v. Avianca Was Not Mainly About ChatGPT
Mata v. Avianca: The First ChatGPT Misuse Case
Mata v. Avianca was a personal injury lawsuit against an airline in the U.S. District Court for the Southern District of New York (SDNY). It became a landmark case not because of the underlying lawsuit, but because of the sanctions issued against the plaintiff's lawyers for citing fake legal cases made up by ChatGPT. At least, that is the popular version of the story emphasized in some reporting. According to the judge's sanctions opinion, the penalty was actually for the attorneys doubling down on their misuse of AI in an attempt to conceal it; they had several opportunities to admit their fault and come clean (page 2, Mata v. Avianca, Inc., No. 1:22-cv-01461, Document 54 (S.D.N.Y. 2023)).
Take the New York Times headline "A Man Sued Avianca Airline. His Lawyer Used ChatGPT" (May 27, 2023). That article, written before the June 2023 sanctions hearing, focused on the ChatGPT-gone-wrong angle. By contrast, Sarah Isgur of the Advisory Opinions podcast gave a very good breakdown that emphasized the attorneys' responsibility and the back-and-forth that preceded the sanctions (episode "Excessive Fines and Strange Bedfellows," May 31, 2023). In that episode, however, the hosts questioned the utility of ChatGPT for legal research, saying "that is what Lexis and Westlaw are for." Yet as of 2025, both tools have added AI features, including use of OpenAI's GPT large language models (LLMs).[^1]
I am not an attorney, and the opinions expressed in this article should not be construed as legal advice.
[Image: Hallucinating cases about airlines.]
Why Care? Our Firm Doesn’t Use AI
Before I get into the details of the case, I want to point out that only one attorney directly used AI, and it was his first time using ChatGPT. But another attorney and the law firm itself also got in trouble. It only takes one person using AI without proper training, and without an AI policy in place, to harm the firm. One apparent driver of the AI use was that access to federal legal research tools was too expensive or unavailable, a problem that may be more common for solo and small firms.
Partner of Levidow, Levidow & Oberman: "We regret what's occurred. We practice primarily in state court, and Fast Case has been enough. There was a billing error and we did not have Federal access." (quoted in Matthew Russell Lee's Substack newsletter)
You might say, “Fine! We just won’t use AI then.” Do you have a written policy stating that? Do you really not use AI? I have two simple questions:
- Do you have Microsoft Office? (then you probably have Office 365 Copilot)
- Do you search for things on Google? (then you probably see the AI Overview)

If the answer to either is yes (extremely likely), are you taking measures to avoid using these AI features? If not, how can you say you don't use AI? Simply put, avoiding AI is not the default option. It requires conscious effort to avoid the features being added to existing software, from word processors to specialty legal research tools.
Overview of Fake Citations
The lawyers submitted hallucinated cases, complete with the courts and judges that supposedly issued them, fabricated docket numbers, and made-up dates.