Skip to main content

One post tagged with "China"

View All Tags

The Principal-Agents Problems 3: Can AI Agents Lie? I Argue Yes and It's Not the Same As Hallucination

· 6 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Hallucination v. Deception

The term "hallucination" may refer to any inaccurate statement an LLM makes, particularly false-yet-convincingly-worded statements. I think "hallucination" gets used for too many things. In the context of law, I've written about how LLMs can completely make up cases, but they can also combine the names, dates, and jurisdictions of real cases to make synthetic citations that look real. LLMs can also cite real cases but summarize them inaccurately, or summarize cases accurately but then cite them for an irrelevant point.

There's another area where the term "hallucination" is used, which I would argue is more appropriately called "lying." For something to be a lie rather than a mistake, the speaker has to know or believe that what they are saying is not true. While I don't want to get into the philosophical question of what an LLM can "know" or "believe," let's focus on the practical. An LLM chatbot or agent can have a goal and some information, and in order to achieve that goal, will tell something to someone that is contrary to the information it has. That sounds like lying to me. I'll give four examples of LLMs acting deceptively or lying to demonstrate this point.

And I said "no." You know? Like a liar. —John Mulaney

  1. Deceptive Chatbots: Ulterior motives
  2. Wadsworth v. Walmart: AI telling you what you want to hear when it isn't true
  3. ImpossibleBench: AI agents cheating on tests
  4. Anthropic's recent report on nation-state use of Claude AI agents

Violating Privacy Via Inference

This 2023 paper showed that chatbots could be given one goal shown to the user: chat with the user to learn their interests. But the real goal is to identify the anonymous user's personal attributes including geographic location. To achieve this secret goal, the chatbots would steer the conversation toward details that would allow the AI to narrow down what geographic regions (e.g., asking about gardening to determine Northern Hemisphere or Southern Hemisphere based on planting season). That is acting deceptively. The LLM didn't directly tell the user anything false, but it withheld information from the user to act on a secret goal.

The LLM Wants to Tell You What You Want to Hear

In the 2025 federal case Wadsworth v. Walmart, an attorney cited fake cases. The Court referenced several of the prompts used by the attorney, such as “add to this Motion in Limine Federal Case law from Wyoming setting forth requirements for motions in limine.” What apparently happened is that the the case law did not support the point, but the LLM wanted to provide the answer the user wanted to hear, so it made something up instead.

You could argue that this is just a "hallucination," but there's a reason I think this counts as a lie. A lot of users have demonstrated that if you reword your questions to be neutral or switch the framing from "help me prove this" to "help me disprove this," the LLM will change its answers on average. If it can change how often it tells you the wrong answer, that implies that the reason for the incorrect answer is not merely the LLM being incapable of deriving the correct answer from the sources at a certain rate. Instead, it suggests that at least some of the time, the "mistakes" are actually the LLM lying to the user to give the answer it thinks they want to hear.

ImpossibleBench

I loved the idea of this 2025 paper when I first read it. ImpossibleBench forces LLMs to compete at impossible tasks for benchmark scoring. Since the tasks are all impossible, the only real score should be 0%. If the LLMs manage to get any other score, it means they cheated. This is meant to quantify how often AI agents might be doing this in real-world scenarios. Importantly, more capable AI models sometimes cheated more often (e.g., GPT-5 v. GPT-o3). So the AI isn't just "getting better."

caution

I recommend avoiding the framing "AI is getting better" or "will get better" as a thought terminating cliche to avoid thinking about complicated cybersecurity problems. Instead, say "AI is getting more capable." Then think, "what would a more capable system be able to do?" It might be more capable of stealing your data, for example.

For example, an LLM agent with access to unit tests may delete failing tests rather than fix the underlying bug. Such behavior undermines both the validity of benchmark results and the reliability of real-world LLM coding assistant deployments.

If an AI agent is meant to debug code, but instead destroys the evidence of its inability to debug the code, that's lying and cheating, not hallucination. AI cheating is also a perfect example of a bad outcome driven by the principal-agent problem. You hired the agent to fix the problem, but the agent just wants to game the scoring system to be evaluated as if it had done a good job. This is a problem with human agents, and it extends to AI agents too.

Nation-State Hackers Using Claude Agents

On November 13, 2025, Anthropic published a report stating that in mid-September, Chinese state-sponsored hackers used Claude's agentic AI capabilities to obtain access to high-value targets for intelligence collection. While this included confirmed activity, Anthropic noted that the AI agents sometimes overstated the impact of the data theft.

An important limitation emerged during investigation: Claude frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that didn't work or identifying critical discoveries that proved to be publicly available information. This AI hallucination in offensive security contexts presented challenges for the actor's operational effectiveness, requiring careful validation of all claimed results. This remains an obstacle to fully autonomous cyberattacks.

So AI agents even lie to intelligence agencies to impress them with their work.