Skip to main content

2 posts tagged with "AI Agents"

Discussion of AI systems that can take actions on behalf of the user.

View All Tags

Hiring With AI? It's All Flan and Games Until Someone Gets Hired

· 7 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

What's the worst that could happen?

The thing about using generative AI workflows is you always have to genuinely ask yourself: “what's the worst thing that could happen?” Sometimes, the worst thing isn’t that bad and the AI will actually save you time. Sometimes it’s embarrassing. But it could be something worse.

Viral Flan Prompt Injection…not a new band name

A LinkedIn profile went viral this week when a user shared screenshots on X of indirect prompt injection. The instructions on the LinkedIn profile tricked what appeared to be an AI recruiting “agent” into including a flan recipe in the cold contact message. That’s funny and maybe embarrassing for the recruiting company, but hardly the worst-case scenario for AI hiring agents. Flan prompt injection styled as an early 2000's hipster band T-shirt

Actual Risks

Worst-Case: North Korean (DPRK) Remote IT Workers

With generative AI, worst case realistically, a hiring process for a remote position could result in hiring a remote North Korean IT worker, a growing problem in recent years. That would be a huge problem for your business.

  • You would be paying a worker working for a foreign government that is sanctioned and an adversary of the U.S.
  • You would have an insider threat trying to collect all kinds of exploitable information on your company.
  • You would have a seat filled by someone definitely not trying to do their actual job.

AI for HR

With those risks in mind, would you want to use AI to help hire? Well, it might possibly be appropriate for the early phases of hiring with human-in-the-loop oversight. But if we’re in a world where everyone starts using AI recruiter agents, it's naive to think that there won't be an arms race with escalating use of these AI mitigation like indirect prompt injection in LinkedIn profiles. Even if it's just to mess around because they're annoyed with getting cold contacts.

ChatGPT for HR

Now, a smaller company might use generative AI in a very simple way. Rather than agents, something like: hey ChatGPT, summarize this person's cover letter and resume and compare it to these three job requirements. tell me if they are minimally qualified for the position or take all these ten candidates and rank them in order of who would be the best fit and eliminate anyone who's completely unqualified or write a cold contact recruiting email to this person Or things of that nature. So basically using consumer ChatGPT, Claude, or Gemini to do HR functions. Not a dedicated HR tool, but using it for HR purposes. That would be one thing. According to Anthropic’s research on how users are using Claude, 1.9% of API usage is for processing business and recruitment data, suggesting that “AI is being deployed not just for direct production of goods and services but also for talent acquisition…”Anthropic Economic Index report

Flan Injection: Part 2

So back to the viral LinkedIn post that was going around a few days ago. The guy who included prompt injection in his LinkedIn byline basically told any AI-enabled recruiters to include a recipe for flan in a cold contact message. Then received, according to a screenshot posted later, an email from a recruiter that included a flan recipe, which indicated that the email was likely drafted by a generative AI tool or, in fact, possibly by a generative AI agent without a human in the loop at all.

HR Agents

That AI agent was affected by the indirect prompted injection included in the LinkedIn byline. This is very easy to do. Does not take any complex technical skill. Indirect prompt injection is very difficult to mitigate, and it's one of the reasons why I do not recommend that people use AI agents. I think that “agents” are a big marketing buzzword right now but that for many of the advertised Use cases, it’s not ready for prime time for exactly this reason.

Now, you may disagree with me. Maybe you feel strongly that I'm wrong. But if you do disagree with me, you had better have a strong argument as to why your business is using it, rather than falling for FOMO over marketing buzzwords and jargon. Instead, you should actually explain the use case and your acceptance of the security risks. I would advise a client not to use these agentic tools that interact with untrusted external content without having a human review the content before taking additional actions. But if clients are going to use agentic tools, I would provide my best advice on how to mitigate the risks associated with those tools and to understand what risks my clients are accepting when they're putting those tools to use.

Double Agent AI— Staying Ahead of AI Security Risks, Avoiding Marketing Hype

· 5 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Hype Around Agents

You may have heard a marketing pitch or seen an ad recently touting the advantages of “Agentic AI” or “AI Agents” working for you. These growing buzzwords in AI marketing come with significant security concerns. Agents take actions on behalf of the user, often with some pre-authorization to act without asking for further human permission. For example, an AI agent might be given a budget to plan a trip, might be authorized to schedule meetings, or might be authorized to push computer code updates to a GitHub repo.

info

Midwest Frontier AI Consulting LLC does not sell any particular AI software, device, or tool. Instead, we want to equip our clients with the knowledge to be effective users of whichever generative AI tools they choose to use, or help our clients make an informed decision not to use GenAI tools.

Predictable Risks…

…Were Predicted

To be blunt: for most small and medium businesses with limited technology support, I would generally not recommend using agents at this time. It is better to find efficient uses of generative AI tools that still require human approval. In July 2025, researchers published Design Patterns for Securing LLM Agents Against Prompt Injections. The research paper described a threat model very similar to an incident that later happened to the Node JS Package Manager (npm) in August 2025.

“4.10 Software Engineering Agent…a coding assistant with tool access to…install software packages, write and push commits, etc…third-party code imported into the assistant could hijack the assistant to perform unsafe actions such as…exfiltrating sensitive data through commits or other web requests.”

tip

Midwest Frontier AI Consulting LLC offers training and consultation to help you design workflows that take these threats into consideration. We stay on top of the latest AI security research to help navigate these challenges and push back on marketing-driven narratives. Then, you can decide by weighing the risks and benefits.

I was just telling some folks in the biomedical research industry about the risks of agents and prompt injection earlier this week. The following day, I read about how the npm software package was hacked to prompt inject large language model (LLM) coding agents to exfiltrate sensitive data via GitHub.