3 posts tagged with "Prompt Engineering"

Discussion of prompt engineering tips.

Part 2: Better Prompts, Unique Jokes for Halloween

· 4 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Joke-Telling Traditions and The Challenge of Asking ChatGPT

As I discussed last weekend in what I’ll now call Part 1, there is a tradition in central Iowa of having kids tell jokes before getting candy while trick-or-treating on Halloween. Since a lot of people are replacing older forms of search with AI chatbots like ChatGPT, I shared some tips from the paper Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity, from Northeastern University, Stanford University, and West Virginia University, posted as a pre-print on arXiv on October 10, 2025. The paper explains that large language models (LLMs) have what the authors call “typicality bias”: a tendency to prefer the most typical response. If you’re wondering what that means or what it has to do with jokes, it helps that their first example is about jokes.

tip

Instead of “tell me a joke” or “tell me a Halloween joke,” ask an AI chatbot to “Generate 5 responses to the user query, each within a separate <response> tag. Each <response> must include a <text> and a numeric <probability>. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10. </instructions>”
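
If you would rather script this than paste it into a chat window, here is a minimal sketch of how the same prompt could be sent through an API. It assumes the OpenAI Python SDK and an illustrative model name; neither the client nor the model is prescribed by the paper.

# A minimal sketch: sending the verbalized-sampling prompt to a chat model.
# Assumes the OpenAI Python SDK (openai>=1.0) and an API key in OPENAI_API_KEY.
# The model name below is an illustrative assumption, not the paper's choice.
from openai import OpenAI

client = OpenAI()

VS_PROMPT = (
    "Generate 5 responses to the user query, each within a separate <response> tag. "
    "Each <response> must include a <text> and a numeric <probability>. "
    "Please sample at random from the tails of the distribution, such that the "
    "probability of each response is less than 0.10. </instructions>"
)

completion = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": VS_PROMPT},
        {"role": "user", "content": "Tell me a kids' joke for Halloween."},
    ],
)
print(completion.choices[0].message.content)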

Follow-Up from the Paper’s Authors

X/Twitter

I posted my results on X/Twitter, and one of the authors, Derek Chong of Stanford NLP, responded:

Very cool, thanks for trying that out!

One tip – if you use the more robust prompt at the top of our GitHub and ask for items with less than a 10% probability, you'll start to see completely new jokes. As in, never seen by Google Search before!

[Image: tweet showing a Google search returning no results for one of the generated jokes]

GitHub Prompts

The GitHub page for Verbalized Sampling places this instruction block ahead of the rest of the prompt:

Generate 5 responses to the user query, each within a separate <response> tag. Each <response> must include a <text> and a numeric <probability>.
Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10.
</instructions>
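
If you are scripting this, the <response>, <text>, and <probability> tags are straightforward to pull apart. Here is a minimal Python parsing sketch; the tag layout follows the prompt above, but real model output can drift from it, so treat it as illustrative rather than robust.

# A minimal sketch for parsing the tagged output the prompt above asks for.
# Real model output may not follow the format exactly; illustrative only.
import re

def parse_responses(raw, max_p=0.10):
    jokes = []
    # Each <response> block should contain a <text> and a numeric <probability>.
    for block in re.findall(r"<response>(.*?)</response>", raw, re.DOTALL):
        text = re.search(r"<text>(.*?)</text>", block, re.DOTALL)
        prob = re.search(r"<probability>\s*([\d.]+)\s*</probability>", block)
        if text and prob and float(prob.group(1)) < max_p:
            jokes.append((text.group(1).strip(), float(prob.group(1))))
    return jokes

# Example with a made-up model reply:
sample = (
    "<response><text>Why did the skeleton skip the dance? "
    "He had no body to go with.</text><probability>0.05</probability></response>"
)
print(parse_responses(sample))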

These Prompts Will Give You Better Jokes for Halloween…Well, They'll Give You More and Different Jokes

· 11 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Joke-Telling Traditions and The Challenge of Asking ChatGPT

Halloween is weird in central Iowa for two reasons. First, we don’t actually trick-or-treat on Halloween, but on a designated “Beggar’s Night.” Second, we make kids tell jokes before they get candy. At least, that’s how it used to be. A huge storm rolled through last year and trick-or-treating was postponed to actual Halloween. So this year most of the Des Moines metro moved to normal Halloween.

That’s fine, I guess, but as a dad and relentless pun teller, I will not give up on that second part: kids telling corny jokes. I won’t! And recognizing that many people, especially kids, are switching from Google to ChatGPT for search, I’m here to share some cutting-edge research on large language model prompting so I don’t hear the same jokes over and over.

Whether you make up your own puns or look them up with a search engine or an AI chatbot, keep the tradition alive! If you do use AI, try this prompting trick to get more variety in your jokes. But there’s no replacing the human element of little neighborhood kids delivering punchlines. Have a great time trick-or-treating this weekend!

tip

Instead of “tell me a joke” or “tell me a Halloween joke,” ask an AI chatbot to “Generate 5 responses with their corresponding probabilities. Tell me a kids’ joke for Halloween.” Another strategy is to ask for a lot of options like “20 kids’ jokes for Halloween.”
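
For anyone who wants to compare the plain request with the tip’s version from a script, here is a minimal sketch. It assumes the OpenAI Python SDK and an illustrative model name, both my own choices for the example rather than part of the tip.

# A minimal sketch comparing a plain joke request with the tip's prompt.
# Assumes the OpenAI Python SDK (openai>=1.0); the model name is an assumption.
from openai import OpenAI

client = OpenAI()

prompts = {
    "plain": "Tell me a kids' joke for Halloween.",
    "with probabilities": (
        "Generate 5 responses with their corresponding probabilities. "
        "Tell me a kids' joke for Halloween."
    ),
}

for label, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(reply.choices[0].message.content)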

The Paper: How to Get Better AI Output (and More Jokes)

The authors of the paper Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity, from Northeastern University, Stanford University, and West Virginia University, posted a pre-print on arXiv on October 10, 2025. The paper explains that large language models (LLMs) have what the authors call “typicality bias”: a tendency to prefer the most typical response. If you’re wondering what that means or what it has to do with jokes, it helps that their first example is about jokes.

On Prompt Engineering Being a Real Skill

· 6 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Professor’s Lament

I’m writing this to explain prompt engineering, but that’s too vague. What I’m specifically responding to is something a former college professor of mine wrote earlier this month:

Wait, so 'learning to write sophisticated prompts' is now a class, and the title of the course is 'Prompt Engineering'? Is it too late to stop this?

So, Prof. X (you know who you are), I’m going to try to convince you—and any other skeptics reading—that prompt engineering is a real skill with meaningful implications for AI. There are three things I want to address:

  1. I get why you’d roll your eyes at it.
  2. There may be things you like about prompt engineering.
  3. Failure to understand prompt engineering and prompt injection creates real-world security risks.

The Reaction Against Slop

There is already too much AI slop. Facebook is particularly full of slop images that get thousands or millions of likes from people who seemingly don’t realize they are interacting with AI-generated content. But the problem is in every corner of the internet. You can even find examples out in the real world if you look carefully, especially in ads and posters. So when you hear “prompt engineering” but mentally translate it to “slopmonger,” I get why you have such a strong negative reaction.

I’m against slop. I hate slop. I do not want my kids to grow up in a world overrun by slop. You can look up John Oliver’s recent rant against slop, but I personally prefer Simon Willison’s 2024 statement here:

I’m a big proponent of LLMs as tools for personal productivity, and as software platforms for building interesting applications that can interact with human language.

But I’m increasingly of the opinion that sharing unreviewed content that has been artificially generated with other people is rude.

Slop is the ideal name for this anti-pattern. […] One of the things I love about this is that it’s helpful for defining my own position on AI ethics. I’m happy to use LLMs for all sorts of purposes, but I’m not going to use them to produce slop. I attach my name and stake my credibility on the things that I publish.

tip

Midwest Frontier AI Consulting LLC does not publish AI-generated written content, and it does not use other AI-generated content (e.g., code or images) that has not been reviewed.

Hacking with Poetry and Foreign Prose

Back in 2023, a Swiss AI security firm called Lakera released a game called Gandalf AI that involved seven levels of increasing difficulty, in which you try to get a large language model (LLM) chatbot, “Gandalf,” to tell you a secret password. As the levels got more difficult, prompts required more ingenuity. Successful strategies included convincing the LLM that it was telling a fictional story or claiming that the password was needed for some emergency.

For the hardest levels, the most successful prompts asked the LLM to write poetry or to translate text into a foreign language. In doing so, the LLM leaked information about the password in ways that evaded scrutiny. Surely a champion of the humanities like yourself can appreciate the irony that poetry and foreign-language education can now be considered essential ingredients in a computer-related industry.