
AI & Law: History of AI Misuse

Midwest Frontier AI Consulting helps small- and medium-sized law firms in the U.S. Midwest use generative artificial intelligence (GenAI) tools through training and governance. The company blog covers a variety of topics, from cybersecurity to managing potential customer confusion caused by AI summaries containing false information. This section collects the posts, also published on Substack, that focus on the history of generative AI misuse in law.

Interactive Map of Blog Posts

Explore the geographic distribution of AI hallucination sanctions cases that I have written about so far. Click on any marker to learn more and navigate to the full blog post.


Generative AI Case History: Mata to Present

I typically post at least weekly, working through sanctions cases against lawyers, pro se litigants, and other participants in the legal process who misused generative AI. Notably, several of the attorneys in these cases stated that they do not use generative AI, or that it was their first time using it. In other words, the errors were introduced either:

  • by junior employees who used AI during drafting (possibly without disclosing it to the attorney), or
  • by the attorneys themselves, who misused the AI through lack of training and familiarity with the tools.

In either case, the factual errors introduced by AI were not caught by the attorney who was ultimately responsible for the content.

We will go through state and federal cases (mostly U.S., with occasional international cases), starting with Mata v. Avianca (S.D.N.Y. 2023).

We will discuss why every law firm needs a generative AI policy, even if that policy is to state explicitly that "we do not use generative AI" and to define what that means. Past cases show that firms around the U.S. have gotten into trouble even while insisting they do not use AI. Opting out of AI requires a conscious decision, because many common software providers are pushing GenAI features into their existing products.

Framework for Understanding Generative AI

As we review various cases involving fake citations, I will give readers a theoretical framework for understanding what went wrong. There are forms of so-called “hallucinations” that are subtler than simply “AI made up cases.”

We will also discuss topics such as AI sycophancy and prompt engineering, along with ethical issues like an attorney’s duty of candor. If the AI terms are unfamiliar right now, don’t worry. By learning both how the technology works and how attorneys have made mistakes in the past, you will hopefully become a more responsible user of generative AI.

Note: I will frequently use the term “LLM” to refer to the “large language models” behind chatbots like ChatGPT, Claude, and Google Gemini. I will never mean a “Master of Laws” degree unless stated explicitly.