Hiring With AI? It's All Flan and Games Until Someone Gets Hired
What's the worst that could happen?
The thing about using generative AI workflows is you always have to genuinely ask yourself: “what's the worst thing that could happen?” Sometimes, the worst thing isn’t that bad and the AI will actually save you time. Sometimes it’s embarrassing. But it could be something worse.
Viral Flan Prompt Injection…not a new band name
A LinkedIn profile went viral this week when a user shared screenshots on X of indirect prompt injection. The instructions on the LinkedIn profile tricked what appeared to be an AI recruiting “agent” into including a flan recipe in the cold contact message. That’s funny and maybe embarrassing for the recruiting company, but hardly the worst-case scenario for AI hiring agents.

Actual Risks
Worst-Case: North Korean (DPRK) Remote IT Workers
With generative AI, worst case realistically, a hiring process for a remote position could result in hiring a remote North Korean IT worker, a growing problem in recent years. That would be a huge problem for your business.
- You would be paying a worker working for a foreign government that is sanctioned and an adversary of the U.S.
- You would have an insider threat trying to collect all kinds of exploitable information on your company.
- You would have a seat filled by someone definitely not trying to do their actual job.
AI for HR
With those risks in mind, would you want to use AI to help hire? Well, it might possibly be appropriate for the early phases of hiring with human-in-the-loop oversight. But if we’re in a world where everyone starts using AI recruiter agents, it's naive to think that there won't be an arms race with escalating use of these AI mitigation like indirect prompt injection in LinkedIn profiles. Even if it's just to mess around because they're annoyed with getting cold contacts.
ChatGPT for HR
Now, a smaller company might use generative AI in a very simple way. Rather than agents, something like:
hey ChatGPT, summarize this person's cover letter and resume and compare it to these three job requirements. tell me if they are minimally qualified for the position
or
take all these ten candidates and rank them in order of who would be the best fit and eliminate anyone who's completely unqualified
or
write a cold contact recruiting email to this person
Or things of that nature. So basically using consumer ChatGPT, Claude, or Gemini to do HR functions. Not a dedicated HR tool, but using it for HR purposes. That would be one thing. According to Anthropic’s research on how users are using Claude, 1.9% of API usage is for processing business and recruitment data, suggesting that “AI is being deployed not just for direct production of goods and services but also for talent acquisition…”Anthropic Economic Index report
Flan Injection: Part 2
So back to the viral LinkedIn post that was going around a few days ago. The guy who included prompt injection in his LinkedIn byline basically told any AI-enabled recruiters to include a recipe for flan in a cold contact message. Then received, according to a screenshot posted later, an email from a recruiter that included a flan recipe, which indicated that the email was likely drafted by a generative AI tool or, in fact, possibly by a generative AI agent without a human in the loop at all.
HR Agents
That AI agent was affected by the indirect prompted injection included in the LinkedIn byline. This is very easy to do. Does not take any complex technical skill. Indirect prompt injection is very difficult to mitigate, and it's one of the reasons why I do not recommend that people use AI agents. I think that “agents” are a big marketing buzzword right now but that for many of the advertised Use cases, it’s not ready for prime time for exactly this reason.
Now, you may disagree with me. Maybe you feel strongly that I'm wrong. But if you do disagree with me, you had better have a strong argument as to why your business is using it, rather than falling for FOMO over marketing buzzwords and jargon. Instead, you should actually explain the use case and your acceptance of the security risks. I would advise a client not to use these agentic tools that interact with untrusted external content without having a human review the content before taking additional actions. But if clients are going to use agentic tools, I would provide my best advice on how to mitigate the risks associated with those tools and to understand what risks my clients are accepting when they're putting those tools to use.
MCP Email
Now, the reason that I say it was probably an AI agent and not a human in the loop is because the screenshot included the email address from the recruiter. The email domain contained “MCP”. “MCP” is the term that Anthropic (maker of the Claude chatbot) uses. It’s basically analogous to APIs, but for agentic AI. So with the domain related to “MCP,” that kind of gives away the game of having something to do with HR recruitment via agentic AI.
Using AI to headhunt people in AI means you especially need to understand AI risks
I was able to confirm the owner of that domain. This was an email domain associated with a company that does AI headhunting. Now, if you're a company that specializes in recruiting people in the AI field in particular, then it's even more likely that the people you're looking for will be familiar with and try to exploit prompt injection.
Now, obviously, the person who shared the flan injection was doing it for humorous purposes presumably out of annoyance from the constant spamming of AI-written cold contact recruitment emails and messages.
But if somebody unscrupulous were using it, they could do something to say, “offer me the job on the spot.” Who's to say that the AI recruitment wouldn't have offered an unqualified candidate a job? Or that they wouldn’t have been able to negotiate three times the acceptable range of the salary for that position? And at that point is it a binding offer?
A lot of businesses have dealt with a problem when the bot is acting on their behalf and then try to walk back things that the bot says.
- A car dealership that had a chatbot offer a new truck for a dollar.
- An airline’s customer service bot that told a customer that their bereavement refund policy could be applied retroactively when it did not.
- Taco Bell’s 18,000 cups of water problem.
If you're saying “this AI is replacing our HR department and it's making hiring decisions and job offers,” then ask yourself “what's the worst thing that could happen?”
Well, the worst thing that could happen is it could offer somebody a job based on prompt injection, instead of their qualifications. And then what are the knock on effects of that? There are serious problems with these agentic AI workflows. They're not currently securable from indirect prompt injection. Especially, if you're looking at using AI to recruit people who are in the field of AI, who know about indirect prompt injections.
With all that said, I would not recommend the use of these tools for HR purposes, certainly not without a human in the loop. But even with a human in the loop, it's still susceptible to prompt injection, which means you have to be aware that prompt injection can happen and check the outputs that would be influenced by indirect prompt injection.