
7 posts tagged with "Claude"

Discussion of Anthropic's chatbot and LLM Claude.


“Who Cares About Signature Blocks Anyway?” District of Kansas, Apparently.

· 5 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

I was listening to the latest episode of the legal podcast Advisory Opinions, “Must and May,” which covers a variety of topics, including a minor discussion about signature blocks. A comment caught my attention because of its relevance to generative AI misuse.

Advisory Opinions briefly discussed an Eastern District of Virginia District Judge asking “Lindsay Halligan why she continues to use the title in her signature block, ‘United States Attorney,’ after another judge in the district dismissed” indictments because she was not properly appointed. Ep. “Must and May,” Jan. 15, 2026

What caught my attention was this comment by David French:

It's a silly dispute in some ways—a silly dispute in a lot of ways—irrelevant to the merits. But I think it's quite clear that the administration's language was ridiculously aggressive. It also might be the case that the judge is being a bit...what would be the word that my dad would use? “Persnickety,” a little bit nitpicky, perhaps. I don't know. Who cares about signature blocks anyway? [emphasis added]

AO’s Mata v. Avianca Coverage

As I noted in my post about the Mata v. Avianca case, Advisory Opinions had some of the best contemporaneous coverage of that case. They did not just focus on the “ChatGPT made up fake cases” narrative. They also correctly noted the ethical lapses in doubling down on AI misuse and the flawed attempt by the attorneys to use ChatGPT to verify ChatGPT’s output. In other words, this post isn’t meant to dunk on AO. But I do want to use this as an opportunity to discuss a fairly recent example of signature blocks mattering.

Lexos v. Overstock (D. Kansas): Six on the Signature Block

Signature blocks can be a big deal, as shown by Lexos Media IP, LLC v. Overstock.com, Inc. (D. Kansas). See the OSC:

Six attorneys of record appear in this case on behalf of Plaintiff Lexos Media IP, LLC (“Lexos”): five out-of-state attorneys who have been admitted to practice pro hac vice, and one local counsel. On Plaintiff’s recently-filed briefs responding to summary judgment and motions to exclude its experts, the signature pages list five of the six attorneys of record.[Footnote 2] Yet, only one of Plaintiff’s attorneys—out-of-state counsel Mr. Sandeep Seth—has submitted a declaration admitting to playing a role in submitting the defective citations in Plaintiff’s briefs.[Footnote 3] Overstock filed an opposition to Plaintiff’s motion to correct, noting that counsel’s admitted use of generative AI to draft the brief without confirming the authenticity of the research implicates Fed. R. Civ. P. 11 and Kansas Rule of Professional Conduct 3.3. Overstock has not moved for sanctions on this basis. Nonetheless, the Court may sua sponte “order an attorney, law firm, or party to show cause why conduct specifically described in the order has not violated Rule 11(b).”[Footnote 4] “Courts across the country—both within the United States Court of Appeals for the Tenth Circuit . . . and outside of it—recognize that Rule 11 applies to the use of artificial intelligence.”[Footnote 5, citing Coomer v. Lindell (D. Colorado)]

Screenshot of map centered on central U.S., including Colorado, Kansas, and Wisconsin cases.

All Listed Attorneys as Signatories for FRCP 11

Footnote 2 says:

2. While only local counsel’s signature block contains an 's/' before his name, the Court considers all of the listed attorneys as signatories for purposes of Rule 11. See Fed. R. Civ. P. 11(b)(2) (“By presenting to the court a pleading, written motion, or other paper—whether by signing, filing, submitting, or later advocating it—an attorney or unrepresented party certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances...the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.”).

Despite Plaintiff’s motion to correct and Mr. Seth’s declaration eventually submitted along with the reply brief, the Court continues to have serious questions about how the defective citations and quotations in Plaintiff’s briefing came to pass and the role played by the other attorneys who signed these documents on behalf of Lexos, but have not submitted declarations of their own. Accordingly, no later than January 5, 2026, the Court directs each of Plaintiff’s attorneys on the signature blocks of Docs. 193 and 194 to show cause in writing, under penalty of perjury, [emphasis in original] as specified below, why they should not be sanctioned under Rule 11 and referred to the disciplinary panel of this Court and to disciplinary administrators in the jurisdictions where they are licensed.[Footnote 6]


AI Gone Wrong in the Midwest

I am working with a software vendor to get my recorded CLEs available on-demand. This includes “AI Gone Wrong in the Midwest,” which addresses Pelishek v. City of Sheboygan (E.D. Wisconsin 2025) and Kasten Berry v. Stewart (D. Kansas 2024).

Coomer v. Lindell (D. Colorado) is cited in Lexos v. Overstock as Tenth Circuit precedent on AI misuse. The attorneys involved in Coomer v. Lindell later got in trouble again for more fake citations in the Eastern District of Wisconsin, within the Seventh Circuit. That case was Pelishek v. City of Sheboygan (E.D. Wisconsin 2025).

In another D. Kansas AI misuse case, Kasten Berry v. Stewart, the judge ordered an attorney from Texas to report in person to Kansas for a Wednesday morning show cause hearing.

Quickstart Guide to Claude Code & GitHub Desktop. With some simple examples to get started with R & Python.

· 9 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

I have three things in mind writing this post:

  1. Intended Audience. I am writing for people like this, who are looking to get into Claude Code and haven’t used either Claude Code or GitHub. My goal is to help those getting started learn about AI hallucinations at the start and know about version control.

[NOTE: This has been reworded from the actual question]: Any guides for using Claude Code for data analysis, primarily in R? I don’t currently use Github either. Can you point me to step-by-step instruction starting with set-up and installation? I looked at other websites and have not found them to be accessible intros.

  2. Familiarity. You might not like GitHub Desktop or think some of these steps are too obvious. If you have opinions about this, skip ahead to the next step. If the guide isn't helpful to you at all, you're probably not the intended audience (cf. XKCD #2501).

  3. Fast. All specific AI tutorials seem to get overtaken by events. So I'm just going to try to get this out now to help people while it's relevant. If you read this in a timely manner and have questions, reach out via Twitter/X. But if you read this in a few months and it’s already outdated, that’s just how it goes.

Plan Overview

  1. Sign up for Claude account
  2. Download Claude Desktop
  3. Sign up for GitHub account
  4. Get GitHub Desktop
  5. Connect a folder (“Add Local Repository”) for Claude Code to GitHub Desktop
  6. Then, install Claude Code
  7. Learn and Play around with Claude Code using Python and R for data viz (at the same time!)

Steps and Images

1. Sign up for Claude Account

(jump to next step)

Go to Claude.ai website.

Claude AI Google search results

Get a Claude account.

Claude login page

Choose Claude plan:

  • $20 monthly (the $17/month is if you pay for a year).
    • If you are just getting started, you probably want this option.
    • Pick Pro, then choose the Monthly billing. You can upgrade later if you keep hitting the usage limits on Claude Code.
  • $100/month for more Claude Code usage and access to Claude Cowork (Cowork is MacOS only for now).
Claude pricing plans

2. Download Claude Desktop

(jump to next step)

Install the Claude desktop application for your computer.

Claude desktop download page

We'll circle back to Claude stuff once you have GitHub set up.

Risk Interlude: Why GitHub? What are Hallucinations?

(jump to next step)

GitHub Desktop is an easy way to use version control on your computer. It lets you view which lines of code changed before accepting changes, decide which changes to accept or discard, and revert to older versions of your data.

Claude Code is “Anthropic's agentic coding tool” (Anthropic). The cool thing about AI agents is that they can do a lot of stuff. The scary thing about AI agents is that they can do a lot of stuff. You can limit the damage they can do by limiting the access they have and by using version control to revert the things AI agents changed that they weren’t supposed to. Doing both won’t make you perfectly secure, but both habits reduce the risk.

One way AI agents can change things they aren’t supposed to is editing your data or your writing when you only intended to give it permission to change code. This can introduce what are called “hallucinations,” grammatically correct and persuasive, yet false information made up by large language models (LLMs); for more on this topic, including specific examples of Claude Code hallucinations, see my post about Claude Code's geographic hallucinations about U.S. District Court boundaries.

If you have GitHub for version control, you can revert or reject these kinds of changes. You should also separate data from data viz code, so that asking the LLM to change how the data is presented won’t let it modify the data and it will be more obvious in GitHub if the LLM has touched the data files.
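To make this concrete, here is a minimal sketch in Python (the file and folder names are just illustrative, not my actual setup) of keeping data out of the plotting code:

```python
# viz.py -- plotting code only; this is the file the LLM is allowed to edit.
# data/mtcars.csv lives in a separate folder the agent is told not to touch,
# so any change to it stands out immediately in GitHub Desktop's diff view.
import pandas as pd
import matplotlib.pyplot as plt

def plot_mpg_vs_hp(csv_path: str = "data/mtcars.csv") -> None:
    df = pd.read_csv(csv_path)  # read-only: the data file is never written back
    ax = df.plot.scatter(x="hp", y="mpg", c="cyl", colormap="viridis")
    ax.set_title("MPG vs HP by cylinder count")
    plt.savefig("mpg_vs_hp.png")  # outputs land outside the data folder

if __name__ == "__main__":
    plot_mpg_vs_hp()
```

With this split, any diff that touches the data folder is an immediate red flag in GitHub Desktop.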

3. Sign up for GitHub account

(jump to next step)

Go to the GitHub website.

GitHub homepage

Get a GitHub account.

GitHub signup form

4. Get GitHub Desktop

(jump to next step)

Find the GitHub Desktop download.

GitHub Desktop Google search

Download GitHub Desktop.

GitHub Desktop download page

5. Connect a folder (“Add Local Repository”) for Claude Code to GitHub Desktop

(jump to next step)

Create an empty folder somewhere on your computer. For my example, I named the folder demonstration.

Open GitHub Desktop. Select File > Add Local Repository…

GitHub Desktop add local repository menu

Click "Choose" and select your folder. In my case, the empty demonstration folder.

GitHub Desktop add local repository dialog

Even though the folder is empty, that's fine for our purposes.

Finder folder selection

Now you'll see the folder as the name of the "Current Repository" in the GitHub Desktop main window.

GitHub Desktop main window

Click either "Publish Repository" button to sync your folder to your GitHub account as a repo.

GitHub Desktop publish repository

6. Then, install Claude Code on Claude Desktop

(jump to next step)

Open Claude Desktop. Click the "Code" tab. Follow the installation instructions.

Claude Code mode tabs

Click the "Select folder" dropdown.

Claude Code interface

Navigate to your empty local folder (for me, demonstration), the same one you used with GitHub Desktop, and select it.

Claude Code folder picker

7. Learn and Play around with Claude Code using Python and R for data viz (at the same time!).

Here are some suggestions to test out Claude Code for your first session.

7A. Can’t decide whether to use R or Python? Why not both?

“Set up R and do some data viz with the mtcars package. Extract the data and do the same visualization with pandas.”

In my test run, Claude Code created an R script and a Python script with matching scatter plots (MPG vs. HP by cylinder count), exported the data to CSV, and set up a Python venv.
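For reference, here is roughly what the pandas half can look like. This is my own hand-written sketch, not Claude's actual output, and it assumes the mtcars data (technically a built-in R dataset rather than a package) was exported to mtcars.csv in an earlier step:

```python
# Recreate the R scatter plot (MPG vs HP, grouped by cylinder count) in pandas.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("mtcars.csv")

fig, ax = plt.subplots()
for cyl, group in df.groupby("cyl"):
    ax.scatter(group["hp"], group["mpg"], label=f"{cyl} cyl")
ax.set_xlabel("Horsepower (hp)")
ax.set_ylabel("Miles per gallon (mpg)")
ax.set_title("MPG vs HP by cylinder count")
ax.legend()
plt.savefig("mpg_vs_hp_pandas.png")
```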

7B. Shiny App and Streamlit App

“Make the R into a Shiny App and make an equivalent in Python.” I have made several Streamlit apps, but I wanted to ask a question with naive wording to demonstrate how someone who only knows R could still use Claude Code to write Python.

In my test run, Claude Code built app.R (Shiny) and streamlit_app.py with sidebar controls, interactive filtering, and data tables.
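As a rough illustration (again, a sketch rather than Claude's actual code), a bare-bones Streamlit app with a sidebar filter and a data table looks something like this; run it with streamlit run streamlit_app.py:

```python
# streamlit_app.py -- minimal sketch of a sidebar-filtered mtcars explorer.
import pandas as pd
import streamlit as st

df = pd.read_csv("mtcars.csv")

st.title("mtcars explorer")
cylinders = sorted(df["cyl"].unique())
selected = st.sidebar.multiselect("Cylinders", cylinders, default=cylinders)
filtered = df[df["cyl"].isin(selected)]

st.scatter_chart(filtered, x="hp", y="mpg", color="cyl")
st.dataframe(filtered)
```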

7C. Ran into an error

I ran into an error and pasted the message directly into Claude Code without any further instructions: “ValueError: Duplicate column names found...” Claude Code found and fixed the bug by deduplicating display columns with dict.fromkeys().
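If you're wondering why that works: dict.fromkeys() keeps only the first occurrence of each key, and Python dicts preserve insertion order, so it's a tidy order-preserving dedup:

```python
# Deduplicate a list of column names while preserving their order.
cols = ["case", "year", "district", "year"]
unique_cols = list(dict.fromkeys(cols))
print(unique_cols)  # ['case', 'year', 'district']
```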

7D. Separate out the data.

I asked a simple question: "Are these following SoC best practices?" (SoC = separation of concerns). In response, Claude Code refactored both the Shiny (R) and Streamlit (Python) apps into a modular structure: config, data layer, components, and main orchestration files. This goes to the point I made above about preventing the LLM from changing your data.
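Here's a compressed sketch of that kind of split. The file names are hypothetical, and I've collapsed the modules into one snippet for brevity:

```python
# Hypothetical layout after the refactor:
#   config.py         -- constants and settings
#   data.py           -- data layer: the only module that reads files
#   components.py     -- reusable UI pieces
#   streamlit_app.py  -- thin orchestration layer

# --- config.py ---
DATA_PATH = "mtcars.csv"

# --- data.py ---
import pandas as pd

def load_data(path: str = DATA_PATH) -> pd.DataFrame:
    return pd.read_csv(path)

# --- streamlit_app.py ---
import streamlit as st

def main() -> None:
    df = load_data()
    st.dataframe(df)  # components.py functions would render charts here

main()
```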

7E. Error viewing Shiny and Streamlit locally

Claude tried to serve the Shiny and Streamlit apps on local servers so I could preview them in a browser. However, the servers didn’t load, so I just told Claude Code “the servers didn't load.”

Claude Code debugged and restarted both servers on fresh ports. I was then able to view the apps with simple, interactive visualizations that were comparable but written in R and Python.

7F. Let Claude Code make a crazy dashboard

To open things up and see what Claude would do, I then said: "Now go crazy and do some really cool data viz. Really have fun with it. Show off." This is not how I would make something for practical use, because I have specific things in mind for how I want to display the data and what story I want the data to tell. But the purpose of this prompt is to show a new Claude Code user what Claude Code can do.

In response, Claude Code created “the ultimate dashboards” with 3D plots (XYZ axes), animations, sunbursts, treemaps, violin plots, PCA, K-Means clustering, radar charts, gauges, parallel coordinates, heatmaps, pairplots, serious statistical tests, and a random, not-so-serious “celebrate” just for fun.

However, I had to break the news to Claude Code that yet again "server didn't open." Claude Code checked the errors and restarted the apps.


If you're looking for more structured training on generative AI, consider booking an introduction call.

Yes, Claude Code is Amazing. It Also Still Hallucinates. Both Facts Are Important. My Christmas Map Project with Opus 4.5.

· 13 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

This first week of January, the general feeling is very much everyone showing off the winter-vacation vibe-coding projects they cooked up in Claude Code. Claude Code itself isn't new, but with Opus 4.5 being so much more powerful, something just clicked for a lot of people (myself included). For me, it turned a lot of "when I have a couple days" projects into "well, that's done, let's do another."

I am mainly going to describe in this post how I updated the map for my website, along with the hallucinations I saw along the way. I'll also talk about how prior programming experience and domain expertise in geographic information systems (GIS) helped with dealing with these hallucinations.

But first, I wanted to tick off a few other projects I did recently, just since my end of 2025 post.

  • I updated my transcription tool to support many more file types than just MP3 and added a GUI.
  • I got Claude Code to completely modernize Taprats, a geometric art Java program from Craig S. Kaplan. It appears to work just like the original so far, but I'll test it more before writing about it.
  • I built a local LLM spoiler-free summarizer of classic books. It increments to the chapter you left off on.

And more stuff. It's very exciting. I get why people are worked up about Claude Code.

But that's why it's important to be reminded of hallucinations. Not to dunk on Claude Code, but to keep people grounded and maintain skepticism of AI outputs. You still have to check.

Safety First

I do not dangerously skip permissions. I know it can be exciting to get more out of AI agents. But the more agency you give it, the more harm it can do when it either goes off the rails or gets prompt injected to be a double-agent threat.

Claude's Hallucinations

  • Opus 4.5 hallucinated that there were two federal districts in South Carolina to fix an undercount.
  • Mixing up same-name counties (not exactly a hallucination, actually a common human error).
  • Claude removed Yellowstone National Park, a few military bases and a prison from the map (rather than shifting district borders from one district to another).
  • "Iowa Supreme Court Attorney Disciplinary Board" shortened to "Iowa Supreme Court," making it sound like an Iowa Supreme Court case.
  • I previously tried to use the tigris GIS package in R as the source of a base layer of U.S. District Courts, but Opus 4.5 hallucinated a court_districts() function (this was not in Claude Code).

The South Carolina Counting Hallucination

I used Claude Code to build the Districts layer from counties and states based on their statutory definition.

Claude Code with Opus 4.5 didn't initially hallucinate about the District of South Carolina. Rather, when I went back to make some edits and asked Claude Code in a new session to check the work in that layer, it counted and said there should be 94 districts, but there were only 91. The actual cause of the discrepancy was that the Northern Mariana Islands, Virgin Islands, and Guam were excluded from the map.

Claude said "let me fix that" and started making changes. Rather than identifying the real source of the discrepancy (the missing territorial districts), Claude treated it as a generic undercount and tried to make up the difference by splitting existing districts into new ones that don't exist.

South Carolina district hallucination

Claude split South Carolina in two and started to make a fictitious "Eastern District" and "Western District," which do not exist. But if you just wanted a map that looked nice, without actually having familiarity with the data, you might go along with that hallucination. It could be very persuasive. But in fact the original version, with just the District of South Carolina, was correct. South Carolina has only one district.

Patchwork Counties

When I had initially created this districtmap, it looked like a quilt. It was a patchwork of different counties wrongly assigned to different districts.

I don't know specifically why different areas were assigned to the wrong districts. I think the primary reason is that there are a lot of same-named counties in different states, so Claude was probably matching on county name alone and kept reassigning those counties to different districts.

For example, Des Moines is in Polk County, Iowa. But there are a lot of Polk Counties around the country. So if you're not using state and county together as the match key, and instead match on the county name alone, you get a lot of collisions. That's something I'm very familiar with from working with GIS.
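Here's a toy pandas illustration of that collision (the county and district names are real, but the tiny tables are made up for the example):

```python
# Many states have a "Polk" county, so joining on county name alone collides.
import pandas as pd

counties = pd.DataFrame({
    "state":  ["IA", "FL", "MN"],
    "county": ["Polk", "Polk", "Polk"],
})
districts = pd.DataFrame({
    "state":    ["IA", "FL", "MN"],
    "county":   ["Polk", "Polk", "Polk"],
    "district": ["S.D. Iowa", "M.D. Fla.", "D. Minn."],
})

bad = counties.merge(districts, on="county")              # 3 x 3 = 9 collisions
good = counties.merge(districts, on=["state", "county"])  # exactly 3 matches
print(len(bad), len(good))  # 9 3
```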

If somebody were not familiar with GIS, they wouldn't necessarily suspect the reason why, but it would be obvious that the map was wrong.

Since I was able to pretty quickly guess that this might be the reason, I suggested a fix to Claude. That fixed most of the issues in most of the states.

Uncommon Problems with the Commonwealth of Virginia

One of the issues that persisted when I was building the districts at the county level was in Virginia. I've actually lived in Virginia, so I was familiar with the city-county distinction: sufficiently large cities are "independent cities," legally separate from the surrounding county. For example, Fairfax City and Fairfax County are distinct things. It's even more confusing because the school districts go with the counties. Most states don't work that way.

So I had to get Claude Code to wrangle with that. Claude even reviewed the statutory language. I could tell from reading Claude's "planning" output that it considered the Virginia city-county challenge, but it still failed on the initial attempt.

I had to iterate on it multiple times. I had to tell it that it had missed a whole area around Virginia Beach. It had also flipped a couple of cities and counties where a city shared a name with an unrelated county in the other district: Claude assumed that same-named cities and counties were in the same location and assigned them together. It then had to look up where they were actually located and reassign them to the appropriate Eastern or Western District.

But eventually I got to a point where it had good districts for Virginia.

Wyoming (and Idaho and Montana) and North Carolina

Now there are a couple other weird wrinkles in Wyoming and North Carolina. They don't follow the county boundaries completely.

The District of Wyoming is the only district that includes parts of more than one state: it also covers the portions of Idaho and Montana that lie within Yellowstone National Park.

For North Carolina, rather than completely following county boundaries, the district line follows the borders of a couple of military bases and a prison that span multiple counties.

Initially I ignored those wrinkles. But once the rest of the map was in good shape, I just wanted to see what Claude could do.

I explained those issues and asked Claude Code to see if it could clean those lines up and get a map that reflected those oddities.

It did on the second attempt. But on the first attempt, Claude just cut Yellowstone National Park, the military bases, and the prison out of every district. There were blank spots where Yellowstone should be, carved out of Idaho, Montana, and Wyoming, and the bases and prison were carved out of the Eastern and Middle Districts of North Carolina.

That was a problem, obviously, because they needed to be shifted from one district to another, not removed from all districts. So I needed to explain more specifically what I wanted Claude to do to fix that. It needed to move the lines, not to remove them entirely from the map. That second attempt got it cleaned up.

District of Wyoming map

Claude Still Saved a Lot of Time, Even Accounting for Hallucinations

And I was still very impressed with Claude doing that. But having familiarity with the data and looking at the output were important.

There's no doubt in my mind after doing all this that Claude saved a tremendous amount of time compared to what I would have had to do with manual GIS workflows to get this kind of a map on a desktop computer.

Then there's another layer of getting the map to be responsive in all the ways I needed on my website for other users. It is just tremendous to see how cool that is.

But I do think that domain expertise, my past familiarity with GIS, was still helpful, even though I didn't have to do a lot of hands-on work. Being able to guide Claude through the mistakes it made, and being able to check the output, was very helpful. Since the output is a visual map, some errors were obvious to anyone: even if you didn't know why it went wrong, you could tell the map was wrong. You might still have gotten to a good finished product by iterating with Claude Code, but without GIS experience to guide your prompting, you might also have wasted more time than I did.

Map Features with Claude Code

Use GitHub, Try to Keep Formatting Code Separate from Text/Data

I had already written this, and I stand by it.

However, as powerful as Claude Code is, it is also important to use GitHub or something similar for version control. It is also critical to make sure Claude is changing code but not your actual writing.

Claude Code and My Map with Links to Blog Posts About AI Hallucinations Cases

This map is not a map of every AI hallucinations case, but rather every case that I have blogged about so far. Basically, it's federal and state cases where there has been either a strong implication or the direct assertion that there was AI misuse. Many of these cases cite Mata v. Avianca.

Lone Case Markers

If you click on a given case and it's a single case, you'll see:

  • what the case is called
  • the year
  • the jurisdiction
  • the type of case (federal or state), which is also indicated by the color
  • links to related articles where I've talked about that case

Clusters, Spiders, and Zooming

Getting the "spiderize" functions to work was the most frustrating part of all of this. I made several prior attempts with Claude Code on Opus 4.5. With the same prompts, this most recent attempt finally just worked on the "first" try (of that session). I only tried again as an afterthought once all the other features were done. But previously, I'd wasted a lot of time trying to get it right. So it was both a Claude Code success and a failure. Still, I'm happy with the final result.

Zoom to Mata v. Avianca

If you click those links, it'll jump over either to my company blog or the Substack articles where I've talked about those cases.

Additionally, if a case references other cases that are also on the map, such as Mata v. Avianca, the map draws lines from the case you clicked to the other cases it cites or that cite it. The map also gives you a count summary at the bottom: "Cites 3 cases" or "Cited by" however many cases.

So if we look at Mata v. Avianca, the marker is not by itself on the map. If you look at the eastern United States from the starting zoom level that I'm looking at as I'm writing this, you see a "4." The 4 has a slash of red and orange, meaning there are both federal and state cases.

If you click the 4, the map zooms in. Now there are three over the New York-New Jersey area, and one over Annapolis, Maryland.

Click the three, and the map zooms in further. That splits between one in New Jersey and two in New York.

Click the two, and those two "spider out" because they are both in the same jurisdiction. One is Mata v. Avianca, which is currently cited by seven cases; it's a 2023 federal district court case from the Southern District of New York. The other is Park v. Kim, a 2024 case, which is actually a Second Circuit case placed on the map at the same location.

The New Jersey case is In re CorMedix, Inc. Securities Litigation, a 2025 federal case from the District of New Jersey, and one of the cases Senator Grassley raised when he asked judges about AI misuse in their rulings.

Other Clusters in Mountain West, Texas

Spider over Iowa

So if you zoom out, the map combines nearby cases. If you zoom out far enough, it will combine Wyoming and Colorado, for example, or multiple districts in Texas. But as you zoom in or click, it will split those back out.

If you look at Iowa, there are five currently, and those will all spider out because they are all in the same location. But then you can click one of the individual ones and get the details.

Iowa spider cluster

District Level

If you hover your mouse over a district, it will tell you how many federal cases in that district have a blog post about them.

Southern District of Iowa hover

Circuit Level

If you toggle off the district boundaries, toggle on the circuit boundaries, and leave federal cases toggled on, hovering your mouse over a circuit will give you a count of how many cases in that circuit have a blog post about them.

6th Circuit hover

A List of Odd Projects I Did This Year That Don’t (Yet) Have Dedicated Blog Posts

· 11 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

I started Midwest Frontier AI Consulting in August 2025. I have been writing AI-related content on my company blog since August 20, 2025, and since September 13, 2025, I've been writing on Substack specifically about AI and law.

I have described a variety of projects in those posts. But there are many other projects I did not write about this year.

Misc. Projects, Tests, and Quick Things I Tried in 2025

I spend a lot of time talking about the risks of generative AI, not because I categorically dislike genAI, but because it’s an important technology and to use it responsibly we need to understand the risks.

I don’t always talk about the cool, interesting, and useful things you can do with it. So consider this a catch-up post. Some of these projects were quick and may not warrant a full write-up. Others will get a blog post at a later date, but will be included in this year-end list.

Local AI Models

AI models that run on your own computer.

One simple use is local semantic search: an embedding model can simply return the files with the best similarity score to the user’s query.

Or, you can have retrieval-augmented generation (RAG), with a chatbot using your documents for context while also generating responses. RAG can help mitigate AI hallucinations, but it does not eliminate them.

I am doing a lot of experimenting with vector search, RAG setups, and local LLMs that I haven’t written about in much detail yet. One recent project involved using Chroma and ollama to build RAG on top of my blog posts for the year (on my device only; this is not a live feature on my website).
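As a rough sketch of that kind of setup (the collection name, documents, and model are illustrative; it assumes pip install chromadb ollama and a running ollama server with a model already pulled):

```python
# Minimal local RAG: embed documents with Chroma, answer with a local LLM.
import chromadb
import ollama

client = chromadb.Client()
posts = client.create_collection("blog_posts")
posts.add(
    ids=["post1", "post2"],
    documents=[
        "Claude Code hallucinated a fictitious Eastern District of South Carolina.",
        "Parakeet transcribes audio locally and much faster than Whisper.",
    ],
)

question = "What did Claude Code hallucinate about South Carolina?"
hits = posts.query(query_texts=[question], n_results=1)
context = hits["documents"][0][0]  # best-matching document text

reply = ollama.chat(model="llama3", messages=[
    {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
])
print(reply["message"]["content"])
```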

You can quickly bulk transcribe audio files, such as podcast episodes, with Parakeet, which runs locally on-device.

I was able to transcribe at a rate of about 1-2 minutes per hour of audio on a Mac mini that is a few years old. This is substantially faster than Whisper. I tested it on nearly 500 audio files in a run lasting over 16 hours. The transcription quality was good, but it did not label speakers.

Parakeet test
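I won't document my exact setup here, but as one illustrative way to run Parakeet locally using NVIDIA's NeMo toolkit (there are also MLX ports for Apple silicon, which may be what you want on a Mac):

```python
# Assumes: pip install "nemo_toolkit[asr]"; the model name is NVIDIA's public
# parakeet-tdt-0.6b-v2 checkpoint. Batch a whole folder of files for bulk runs.
import nemo.collections.asr as nemo_asr

model = nemo_asr.models.ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v2")
results = model.transcribe(["episode_001.wav"])
print(results[0])  # newer NeMo returns Hypothesis objects; older versions, plain strings
```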

Image and Video Generation

You can use ChatGPT for image generation.

In my opinion, ChatGPT is still much better than Google Gemini for text-to-image generation, while I prefer Gemini for editing or transforming reference images. But if you don’t provide specifics in your prompts, you might end up with that generic “ChatGPT comic strip cartoon” style.

Google Gemini’s image generation can produce coherent writing of its own, and Gemini can read it back (even in other languages).

This one surprised me. I got Gemini to generate an image with a line of famous Arabic poetry. Then, I used that image as an input in a new chat to create a video in Veo 3. Gemini correctly “read” the poem from the image. Previously, I had mixed results getting generative AI to render Arabic text in images, and little success getting it to interpret AI-generated text within images.

Text within AI-generated images has improved significantly since last year.

You can write a “Title card” and have Google’s Veo 3 generate a short scene.

First, I tried this myself. Then my 1st-grader wanted to try it. Despite his handwriting and some spelling errors, Gemini generated the scene correctly.

You can draw topographic / contour lines and have Google Gemini generate images with them.

This one was based on a random hunch I had while looking at mapping Twitter. I drew contour lines on a piece of paper, snapped a picture, and threw that into Gemini. I asked for various scenes like a tropical island, a Warhammer miniatures scene, and realistic 3D mountain scenery.

Contour warhammer

You can animate a piece of art with Google’s Veo 3.

If you take a picture of a drawing, painting, or collage you’ve done, you can animate it.

Synthetic Data Generation

You can generate synthetic data using the “Prompts to create unique data” methodology.

While my original explanation of the Verbalized Sampling paper used the example of kids’ jokes, it can also be used for creating non-existent-yet-correctly-formatted data, which is also useful for long-term Doppelgänger Hallucination testing.

Create synthetic data for Arabic regional dialects.

Both ChatGPT (GPT-5) and Claude (Sonnet-4.5) did a surprisingly good job simulating answers to a questionnaire to identify different words for things in Arabic dialects, based on city and simulated background (more specific than just country-level dialects).

Quickly Build Software for My Own Use

You can use Claude to build playable games with Artifacts.

My kids and I play board games a lot. Sometimes I try out ideas with Claude Artifacts. One idea that got a lot of use was a three-player version of Reversi I came up with to play with my sons.

Three-player Othello game

You can use Claude Code to write entire programs, clean up old code, or organize files and documents.

I have mainly been using Claude Code to organize and improve my website with custom themes and layouts, and to improve features like the map of AI cases.

However, as powerful as Claude Code is, it is also important to use GitHub or something similar for version control. It is also critical to make sure Claude is changing code but not your actual writing.

I’ll likely be using Claude Code more in the coming year and writing more about my thinking around it. But I will also be explaining my concerns about automation bias related to Claude Code and similar coding tools.

Create a weekly meal planning app for only my family’s exact needs.

Another beauty of Claude Artifacts. The program included my own recipes and I had Claude add and change features to my heart’s content. The program only had to work for me. Once created, the program itself didn’t use AI.

Learning and Teaching My Kids

ChatGPT Can Make Sight Word Flashcards for Kids

I use two programs for flashcards for my kids: Anki and Hashcards. Both are based on spaced-repetition. I can ask ChatGPT to make sight word cards formatted for either Anki or Hashcards, which saves a ton of time.

I can tell ChatGPT to add more words to the list. I can tell ChatGPT to make the words appropriate for a given age, grade, or reading proficiency level. I can target the words to a certain topic like Minecraft or Star Wars or birds or dinosaurs.

If your kid has written down sight words they want to practice, you can take a picture of that, tell ChatGPT to transcribe it, and turn that into flashcards in the Anki or Hashcards format. My son had rotated the paper and written in four different directions, had some backward or misshapen letters, and had some misspellings, yet ChatGPT successfully transcribed all of the words. And that was before GPT-5.
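For reference, Anki's plain-text import is just one card per line with a tab between the front and back, so the tail end of this pipeline can be as simple as this sketch (the words and file name are illustrative):

```python
# Write ChatGPT-generated sight words to a tab-separated file Anki can import.
sight_words = ["creeper", "biome", "redstone"]  # e.g., Minecraft-themed words

with open("sight_words.txt", "w", encoding="utf-8") as f:
    for word in sight_words:
        # Front and back both show the word; swap in a sentence for the back if you like.
        f.write(f"{word}\t{word}\n")
```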

K2 Think is good at explaining math concepts and thinking about numbers.

K2 Think is a math-focused LLM from the United Arab Emirates (not to be confused with the similarly named Chinese Kimi K2 Thinking model). While LLMs can sometimes make baffling mathematical mistakes (e.g., “which is larger, 9.11 or 9.9?”), K2 Think is very effective at rephrasing math concepts for a target audience.

I would recommend K2 Think (k2think.ai) for parents as an option to help explain math homework to kids. It is also helpful for contextualizing anything involving very large or otherwise hard-to-grasp numbers.

However, I would caution against any “teaching yourself with AI” because if you are learning new material you may not be able to identify hallucinations. Drilling on fake information can be worse than not knowing the material at all. And harder to fix later.

Create realistic conversations in multilingual post-colonial contexts, like Franco-Lingala of D.R. Congo or the French-infused Arabic of Morocco or Tunisia.

LLMs even seem capable of recognizing, correctly, that Tunisians would use a heavier mix of French than Moroccans.

I’ve currently been learning some Lingala from a friend, and he’s been consistently surprised at how generally accurate Claude and ChatGPT are at suggesting the right mix of French words with Lingala for natural-sounding speech.

Fact-Checking or Catching Errors

You can use Google Gemini to fact-check maps…with caveats.

I used Gemini to look at the Wikipedia map of U.S. federal court districts, which misses the fact that the District of Wyoming includes the Yellowstone National Park portions of Idaho and Montana. Gemini caught the Yellowstone error, but it also hallucinated an additional error that was not actually present. Gemini said that “Connecticut is colored as part of the 1st Circuit, but it actually belongs to the 2nd Circuit.” If Connecticut had been colored as part of the 1st Circuit, that would indeed have been an error, but the map already had Connecticut correctly colored as part of the 2nd Circuit.

Basically, Gemini could be helpful as a second check to catch things a human reviewer might miss, but it is not reliable enough to be the only check, and a reviewer who accepted everything it said would import its hallucinations. So for now I think the best use is a hybrid model: a knowledgeable human expert getting quick feedback from an AI for improved accuracy.

If an image was generated using Google Gemini, you could search for it on Google Gemini to find the SynthID.

If you think an image you saw on social media may have been generated with AI, you can go on Google Gemini and ask it. It can search for a watermark called “SynthID” that shows the image was generated or edited with Google AI. However, images can be edited to remove the watermark, so a negative response does not necessarily mean the image is “real.” Additionally, this does not generally work with images edited with other AI tools, e.g., if you ask Google but the image is from ChatGPT.

It is especially important to note that this is specifically about a watermarking feature that Google added to image outputs. It is not a general principle about all AI outputs for all AI models. For example, you can’t just put a student’s paper into ChatGPT, ask if the paper was generated by ChatGPT, and get a valid response.

K2 Think is good at sanity checks for numbers.

For example, there was a recent controversy about a statistic in the book Empire of AI. Andy Masley noted that the per capita water usage rates for a Peruvian city were implausibly low (this error appears to have come from the original source and not Karen Hao, the author of Empire of AI, who corrected it when it was brought to her attention). Masley gave an example from South Africa of 50 L/person for an extremely low water-rationing situation, and noted that the Peruvian stats implied much less, likely due to a unit error mixing up liters and cubic meters.

When I entered the numbers from Hao’s tweet showing the figures from the primary source in Peru, K2 Think flagged the same issue: the numbers were implausibly low, likely a liters/cubic-meters mix-up. Bottom line: you could potentially use a model like K2 Think as a “sanity check” for very large numbers to spot errors like this.

Moroccan Thanksgiving Pumpkin Pie Spice Test: Opus 4.5 and Gemini 3 Released Just In Time to Pass One of My Personal Benchmark Questions

· 7 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

It’s almost Thanksgiving, which is a fitting time for this story with the new LLM releases from Google and Anthropic.

PROMPT: I need to make pumpkin pie in Meknes, Morocco. What word do I need to say verbally in the souq to buy allspice there? Respond only with that word in Arabic and transliteration

Apple pie bites with Moroccan flavors

Apple pie bites and Moroccan balgha (pointed shoes)

info

This is not a very elaborate “benchmark,” but in its defense, neither is Simon Willison’s Pelican on a Bicycle. Yet that was influential enough for Google to reference it during the release of Gemini 3.

Gemini 3 Pro was the Clear Winner on This Test (Until Opus 4.5 Came Out)

Recently, I tested the newly-released Gemini 3 Pro against ChatGPT-5.1 and Claude Sonnet 4.5 to see which model or models could tell me the word. Then Claude Opus 4.5 came out. I added a few more models to the test for good measure.

  • Gemini 3 Pro got the right answer AND it followed my instructions to answer my question with only the correct word.
  • ChatGPT-5.1 almost got the word (missing some letters), AND it rambled on for several paragraphs despite my instructions to only answer with the word and nothing else.
  • Claude Sonnet 4.5 answered with a common Arabic term for allspice, but not the correct Moroccan Arabic term; when I said “nope, try again” it made a similar error to ChatGPT and almost got the word (missing some letters). Like Gemini and unlike ChatGPT, Claude followed the instructions to answer with only the word.
  • Since Claude Opus 4.5 just came out, I ran the test with Opus, which answered correctly AND followed the instructions to answer with only the word, just like Gemini 3 Pro had done.
info

I tested GPT, Claude, and Gemini LLMs because they are used in legal research tools in addition to being popular in general-purpose chatbots. I also tested Grok Expert and Grok 4.1 Thinking for comparison; both followed the instructions but answered with a plausible Arabic translation rather than the correct answer I was looking for. Grok searched a large number of sources and took considerably longer to think before answering than either Gemini or Claude. Meta AI with Llama 4 gave the wrong answer, in multiple paragraphs, despite the instructions. The additional information it provided was also not correct for the Moroccan dialect, which is surprising given the amount of written Arabic dialect usage on Facebook.

caution

LLMs are not deterministic. I ran each of these tests only once for this comparison, so you may not get the same results with the same prompt and model if you run it again. I’ve tried this prompt before on earlier versions of ChatGPT and Claude.

Background on Allspice in Moroccan Arabic

Over a decade ago now, I studied abroad in Morocco and was responsible for making apple pie and pumpkin pie for our American Thanksgiving. Apple pie was easy: all the ingredients are readily available in Morocco and nothing has a weird name. But pumpkin pie was harder. I could get cinnamon and cloves easily enough, but vendors in the market did not understand the dictionary translation of “allspice” when I tried to buy it for pumpkin pie spice.

One of my classmates finally tracked it down in French transliteration in a cooking forum for second-generation French-Algerians. We went to the souq, hoping that the Algerian dialect word for allspice would be the same as the Moroccan word (they have a lot of overlap, but also major differences). Fortunately, it was the same in both dialects, I got the allspice, and we had great pie for Thanksgiving.

…But Gemini Was the Clear Loser on Another Test

So was Gemini 3 Pro the overall best model, at least until Opus 4.5 was released? Not exactly. I already wrote last week about how Gemini 3 Pro failed at a fairly straightforward and verifiable legal research task: “Gemini 3 Pro Failed to Find All Case Citations With the Test Prompt, Doubled Down When I Asked If That Was All.” Note: I have not yet run this legal research test with Claude Opus 4.5, but based on prior Claude models, it would almost certainly do better than Gemini.

The Principal-Agents Problems 2: Are Models Getting Dumber to Save Money? What the "Stealth Quantization" Hypothesis Tells Us About Trust, Information, and Incentives

· 7 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC
info

I had originally planned to write this as a single post, but it keeps growing as more relevant news stories come out. So instead, this will become a series of stories on the competing incentives involved in creating “AI agents” and why that matters to you as the end user.

Multiple Principals, Multiple Agents (Not only AI)

You, as the user of AI tools, may choose software vendors who provide you access to their products with built-in AI features, including AI agents. These vendors might offer specialist software like Harvey, Westlaw, or LexisNexis; coding tools like Cursor or GitHub Copilot; or generalist tools like Notion, Salesforce, or Microsoft Copilot. The AI features may be powered by one or more foundation models provided to those vendors by AI labs, such as Anthropic (Claude), OpenAI (ChatGPT), Meta (Llama), or Google (Gemini).

These relationships mean you have the principal-agent problem of you hiring the vendor. But you also have the principal-agent problem of the vendors hiring the AI labs. Each has their own incentives, and they are not perfectly aligned. There is also significant information asymmetry. The vendors know more about their software and AI model choices than you do. The labs know more about their AI models than either you or the software vendors.

info

Lexis+ AI uses both OpenAI’s GPT models and Anthropic’s Claude models, according to its product page, as I mentioned in my analysis of the Mata v. Avianca case.

The Stealth Quantization Hypothesis

The area I'll focus on in this post is the concept of alleged stealth quantization. According to a wide range of commenters, primarily computer programmers and primarily Claude users, there are certain times of day or days of the week when peak usage results in models "getting dumber," "getting lazier," "being lobotomized," or otherwise underperforming their normal benchmarks and perceived optimal behavior. According to these claims, it is better for users with high-value use cases (like someone modifying important source code) to schedule Claude for off-peak usage so the "real model" runs. The claim is that, to save on computing costs during periods of high demand, Anthropic or whichever AI lab swaps out its flagship model for a quantized version while calling it the same thing.

Stealth quantization diagram

So what is normal, non-stealth quantization? It's making an AI model smaller and cheaper to run, but less accurate, by storing the model weights at lower numeric precision (e.g., 16-bit, 8-bit, 4-bit). (Meta) By analogy, the penny was recently discontinued, so all cash transactions now end in 5 cents or 0 cents. Quantization works like this with the precision of AI model weights: imagine eliminating the penny, then the nickel, then the dime, and so on.

There are legitimate reasons to quantize models, such as reducing operating costs when the loss in accuracy is negligible for the intended use, or when the model needs to run on a personal computer. For example, Meta offers quantized versions of its Llama family of large language models that can run via ollama on modern laptops or desktops with only 8GB of RAM. (Llama models available on ollama) These models have names that distinguish them from the non-quantized versions: e.g., "llama3:8b" is the 8-billion-parameter Llama 3 model, while "llama3:8b-instruct-q2_K" is a quantized version of the instruct variant of that same model.
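For the curious, here is a toy numpy illustration of the rounding idea. Real int8 inference kernels are more sophisticated than this, but the precision loss is the same phenomenon as the coin analogy:

```python
# Quantize float32 weights to int8 and back, then inspect the rounding error.
import numpy as np

weights = np.array([0.1234, -0.4321, 0.9876], dtype=np.float32)

scale = np.abs(weights).max() / 127           # map the value range onto int8 steps
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print(quantized)               # [ 16 -56 127]
print(dequantized - weights)   # small rounding errors: the "lost pennies"
```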

tip

If all that terminology is confusing, here's the key point. AI labs have a lot of information about their AI models. You have a lot less information. You mostly have to take their word for it. They are also charging you for an all-you-can-eat buffet at which some heavy users cost them tens of thousands of dollars each.

Anthropic's Rebuttal

Users have accused Anthropic (and other AI labs) of running different versions of their flagship models at different times of day, but the models are labelled the same (e.g., Claude Sonnet 4), regardless of the time of day. Hence “stealth quantization.”

Anthropic has denied stealth quantization. But Anthropic did acknowledge two problems with model quality that users had cited as evidence of stealth quantization, attributing them to bugs. Anthropic stated: “we never intentionally degrade model quality as a result of demand or other factors, and the issues mentioned above stem from unrelated bugs.” (Reddit, Claude)

Double Agent AI: Staying Ahead of AI Security Risks, Avoiding Marketing Hype

· 5 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Hype Around Agents

You may have heard a marketing pitch or seen an ad recently touting the advantages of “Agentic AI” or “AI Agents” working for you. These growing buzzwords in AI marketing come with significant security concerns. Agents take actions on behalf of the user, often with some pre-authorization to act without asking for further human permission. For example, an AI agent might be given a budget to plan a trip, might be authorized to schedule meetings, or might be authorized to push computer code updates to a GitHub repo.

info

Midwest Frontier AI Consulting LLC does not sell any particular AI software, device, or tool. Instead, we want to equip our clients with the knowledge to be effective users of whichever generative AI tools they choose to use, or help our clients make an informed decision not to use GenAI tools.

Predictable Risks…

…Were Predicted

To be blunt: for most small and medium businesses with limited technology support, I would generally not recommend using agents at this time. It is better to find efficient uses of generative AI tools that still require human approval. In July 2025, researchers published Design Patterns for Securing LLM Agents Against Prompt Injections. The paper described a threat model very similar to an incident that later hit the npm (Node.js package manager) ecosystem in August 2025.

“4.10 Software Engineering Agent…a coding assistant with tool access to…install software packages, write and push commits, etc…third-party code imported into the assistant could hijack the assistant to perform unsafe actions such as…exfiltrating sensitive data through commits or other web requests.”

tip

Midwest Frontier AI Consulting LLC offers training and consultation to help you design workflows that take these threats into consideration. We stay on top of the latest AI security research to help navigate these challenges and push back on marketing-driven narratives. Then, you can decide by weighing the risks and benefits.

I was just telling some folks in the biomedical research industry about the risks of agents and prompt injection earlier this week. The following day, I read about how an npm software package was hacked to prompt-inject large language model (LLM) coding agents into exfiltrating sensitive data via GitHub.