
AI & Law: History of AI Misuse

Midwest Frontier AI Consulting helps small- and medium-sized law firms in the U.S. Midwest use generative artificial intelligence (GenAI) tools through training and governance. The company blog covers a variety of topics, from cybersecurity to managing potential customer confusion caused by AI summaries containing false information. This section collects the blog content, also available on Substack, that focuses on the history of generative AI misuse in law.

Interactive Map of Blog Posts

Explore the geographic distribution of AI hallucination sanctions cases that I have written about so far. Click on any marker to learn more and navigate to the full blog post.


Generative AI Case History: Mata to Present

I typically post at least weekly, working through cases in which lawyers, pro se litigants, and others involved in the legal process were sanctioned for misusing generative AI. It is important to note that several of the attorneys in these cases stated that they do not use generative AI or that it was their first time using it. In other words, the errors were introduced either:

  • by junior employees who used AI during the drafting process (possibly without disclosing it to the attorney), or
  • by the attorneys themselves, who misused the AI due to a lack of training and familiarity with the tools.

In either case, the factual errors introduced by AI were not caught and corrected by the attorney who was ultimately responsible for the content.

We will go through state and federal cases (mainly U.S., but with occasional international cases) starting with Mata v. Avianca (SDNY 2023).

Based on examples from past cases, we will discuss why every law firm needs a generative AI policy, even if that policy is to state explicitly that "we do not use generative AI" and to define what that means. Firms around the U.S. have gotten into trouble even while claiming they do not use AI. Opting out of AI requires a conscious decision, because many common software providers are pushing GenAI features into their existing products.

Recap of 2023 Cases

ChatGPT was released at the end of November 2022. I reviewed nine (9) cases for 2023 that involved likely or admitted misuse of ChatGPT or other generative AI tools.

This was originally posted as “Read Your Sources and Be Honest: Recap of 2023 AI Misuse.”

List of Articles for 2023

I started with the most prominent case, Mata v. Avianca, and worked my way chronologically through 2023.1 After this article, I will continue working through the list into 2024. There were far more cases in 2024 than in 2023, but the early AI misuse cases set the stage for recurring themes: fake cases, fake quotations, and inaccurate summaries; ethical lapses, including lies to the court to cover up the original AI misuse; references to Mata v. Avianca; and citations to Federal Rule of Civil Procedure 8 or Federal Rule of Civil Procedure 11.

Takeaways for Attorneys

  • Would You Be Able to Identify AI Misuse? In all of the cases I reviewed for 2023, either opposing counsel (i.e., the side not misusing AI) or the court identified the AI misuse. Even if you don’t use AI, it’s possible that the other side in a case may misuse it. This could waste your time if you fail to catch it, but it could help your case if you identify it and explain it to the court (see, e.g., Von Scott v. Fannie Mae, Whaley v. Experian, or Ex Parte Lee).4
  • Are You Rubber-Stamping AI? You could get in trouble for signing off on a filing without properly reviewing it when someone under your supervision has improperly used AI to prepare it. This can happen even when you’ve worked with the person for years and are not aware that they used AI (see, e.g., Mata v. Avianca).
  • Do You Have an AI Policy? A new hire may use AI without your knowledge. Or, a long-time colleague who has never used AI before may decide to reach for AI when they face a tight deadline, not understanding what AI hallucinations are. This is why having an AI policy is important, even if that policy is a ban.
  • More Than One Way to Hallucinate: Generative AI tools like ChatGPT don’t only “hallucinate” case citations. They can also inaccurately summarize the facts of a case. Therefore, merely checking whether a cited case exists is not enough.
  • Watch Out for Doppelgänger Hallucinations: Generative AI features are now available in many familiar research tools like Google Search, LexisNexis, and Westlaw. Generative AI can “hallucinate,” or make up convincing-sounding yet false statements. Therefore, merely asking one AI to verify what another AI said is not sufficient “double checking” (see, e.g., People v. Crabill).

Framework for Understanding Generative AI

As we review various cases involving fake citations, I will give readers a theoretical framework for understanding what went wrong. There are forms of so-called “hallucinations” that are subtler than simply “AI made up cases.”

We will also discuss topics like AI sycophancy, prompt engineering, and ethical issues like an attorney’s duty of candor. If the AI terms are unfamiliar right now, don’t worry. By learning both how the technology works and how attorneys have made mistakes in the past, you will hopefully become a more responsible user of generative AI.

Note

I will use the term “LLM” frequently to refer to the “large language models,” or AI models, behind chatbots like ChatGPT, Claude, and Google Gemini. I will never mean a “Master of Laws” (LL.M.) unless stated explicitly.

Footnotes

  1. (Note: I am going by publication date; for example, I will cover a January 2024 case with underlying activity in 2023 as a 2024 case.)

  2. This case also incorrectly stated “This appears to be only the second time…” when in fact it appears to be the fourth federal case involving hallucinated case citations stemming from AI misuse. (Morgan v. Community Against Violence, et al.)

  3. “The Court is aware of recent incidents in the legal community involving filings generated in whole or in part by artificial intelligence, such as ChatGPT, that incorporate case citations and quotations which do not, in fact, exist.” (Von Scott v. Fannie Mae)

  4. “…it appears that at least the “Argument” portion of the brief may have been prepared by artificial intelligence (AI).” Ex Parte Lee, Footnote 2. This footnote also mentions a Texas Bar CLE on AI.