
4 posts tagged with "Gemini"

Discussion of Google's chatbot and LLM Gemini.


How to Set Up Google Gemini Privacy

· 7 min read
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Data training opt-outs and other settings as of October 1, 2025

General Setup for Lawyers

I will be providing guides on how to configure the privacy settings on three common consumer large language model (LLM) tools: Google Gemini, ChatGPT, and Claude. In this post, I explain how to configure a consumer Google Gemini account’s privacy settings from the perspective of an attorney conducting legal research. Please note that these instructions are neither a substitute for proper data controls (e.g., proper handling of attorney-client privileged data or personally identifiable information) nor a replacement for a generative AI policy for your law firm. This information is current as of October 1, 2025.

You can change the settings on a desktop computer or a mobile phone, but the menu options have slightly different names. I will walk through the desktop options and note the alternative mobile names where they differ.

Key Point

“Help improve” is a euphemism for “train future models on your data.” This is relevant to both audio and text opt-outs.

This guide assumes you have a Google account signed in to Google Gemini.

Overview

  1. Opt out of training on your audio data. (Euphemistically: “Improve Google services with your audio and Gemini Live recordings.”)
  2. Configure data retention and auto-deletion, which is necessary to avoid training on your conversations with Gemini. (Euphemistically: “your activity…helps improve Google services, including AI models”).
  3. Review a list of “your public links.”
tip

To subscribe to law-focused content, visit the AI & Law Substack by Midwest Frontier AI Consulting.

1. Opt Out of Training on Audio

Risk: Memorization, Conversation Privacy

I strongly advise anyone using generative AI tools, especially for potentially sensitive work purposes, to opt out of allowing these companies to train future models on your text and audio chats. There are numerous risks and no benefit to the individual user.

One risk is private chats (text or voice) being exposed in some way during the data training process. Google’s own guidance states: “Human reviewers (including trained reviewers from our service providers) review some of the data we collect for these purposes.”

caution

“Please don’t enter confidential information that you wouldn’t want a reviewer to see or Google to use to improve our services, including machine-learning technologies” (Gemini Apps Privacy Hub).

Another potential risk is “memorization,” in which a generative AI model can reproduce specific pieces of sensitive information from its training data. While unlikely for any particular person, the risk remains. For example, researchers in 2023 found that ChatGPT could recreate the email signature of a CEO with their real personal contact information. This is significant because ChatGPT is not a database (see my discussion of Mata v. Avianca): reproducing the signature is like writing it down from memory, not looking it up in a phone book.

Screenshot of desktop menu to access Gemini “Activity” menu

Guide: Opting Out of Audio Training

Click the Gear symbol for Settings, then Activity (on mobile, it’s “Gemini Apps Activity”).

UNCHECK the box next to “Improve Google services with your audio and Gemini Live recordings.”

Screenshot of desktop menu to access “Gemini Apps Activity” menu for opting out of audio data training

2. Chat Retention & Deletion

Risk: Security and Privacy v. Recordkeeping

You may want to keep records of the previous searches you have conducted for ongoing research or to revisit what went wrong if there were issues with a citation. However, by choosing to “Keep activity,” Google notes that “your activity…helps improve Google services, including AI models.”

Therefore, it appears that the only way to opt out of training on your text conversations with Google Gemini is to turn off activity. This is different from ChatGPT, which lets you opt out of training while keeping your conversations, and from Claude, which previously did not train on user conversations at all but has since moved to a policy similar to ChatGPT’s: training on user conversations unless you opt out. As an alternative, you could delete only specific conversations.

Guide: Opting Out of Text Training

Click the Gear symbol for Settings, then Activity (on mobile, it’s “Gemini Apps Activity”). Click the dropdown arrow “On/Off” and select “Turn off” or “Turn off and delete activity” if you also want to delete prior activity. It is also possible to delete individual chats in the main chat interface.

Screenshot of desktop menu to access “Gemini Apps Activity” menu for turning off “Keep activity” to opt out of text data training

Guide: Auto-Delete Older Activity

Click the Gear symbol for Settings, then Activity (on mobile, it’s “Gemini Apps Activity”). Click the words “Deleting activity older than [time period]” to adjust the retention period for older conversations. This does not mitigate concerns about Google training on your data, but may protect the data in the event of an account takeover.

Screenshot of desktop menu to access “Gemini Apps Activity” menu for adjusting auto-delete period if “Keep Activity” is left on

Or you can delete recent activity within a certain time period.

Screenshot of desktop menu to access “Gemini Apps Activity” menu for deleting a specific period of recent activity if “Keep Activity” is left on

3. Review Your Public Links

Risk: Private Conversations on Google

In late July 2025, Fast Company reported that Google was indexing shareable links to ChatGPT conversations created when users shared those conversations. At the time, if ChatGPT users continued the conversation after creating the link, the new content in the chat would also be visible to anyone with the link. ChatGPT and Anthropic’s Claude now explicitly state that only messages created up to the point the link is shared will be visible. Later in the year, it was revealed that Google had also indexed shareable links to conversations from xAI’s Grok and Anthropic’s Claude.

Guide: Reviewing Your Public Links

Click the Gear symbol for Settings, then Your public links (on mobile, click your face or initials, then “Settings,” then “Your public links”).

Screenshot of Google Gemini “Your public links.”

On my company website, I recently wrote a blog post, “Need to Create a Wordcloud for Your Blog Post? Use Google Gemini (and a Piece of Paper),” showing how small businesses can use Google Gemini for image generation. I am now sharing the link to that chat to demonstrate how public links work in Google Gemini. The chat link is https://g.co/gemini/share/4626a5e02af7.

You can see in the list above that it is my only public link. The list includes the title of the chat, the URL, and the date and time it was created. Above the list are privacy warnings about creating and sharing links to a Gemini conversation. Based on my test of the shared link, chats added to the conversation after the link is shared do not appear, but unlike ChatGPT and Anthropic, Google does not state this explicitly in its warning.

Additionally, you can delete all public links or delete just one specific public link.

Need to Create a Wordcloud for Your Blog Post? Use Google Gemini (and a Piece of Paper)

· 3 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Paper to photo to Google Gemini

This simple workflow will be faster and give you more control over your output than using ChatGPT image generation. That’s because Google Gemini’s new image model, called “nano banana” (hence the banana emoji next to the image generation option), is better at editing photos without changing too much. Gemini also generates images more quickly than GPT-5. My rule of thumb for images: if you want to change something specific, use Gemini; if you want to create something creative from scratch, use ChatGPT.

Step 1: Handwriting

Start by handwriting the words you want in your wordcloud. For my example, I wrote a handful of generic terms that popped into my head, like “example” and “whatever.” If you can’t think of the words you want, you can always generate a short list with Gemini. Vary the direction and size of the writing to make the final image more visually interesting.

Step 2: Photo of the Handwriting

Take a photo of the piece of paper. Crop out the background.

Handwritten words in different directions related to a generic topic

Step 3: Prompt Gemini

Upload the photo of the handwriting to Gemini with a prompt such as: “Turn these words into the style of a graffiti mural.” It should only take a few seconds to generate the output image. My resulting image:

Words painted as graffiti in different directions on a brick wall, related to a generic topic
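If you prefer to script this workflow rather than use the Gemini web app, the same idea can be expressed against the Gemini API. The sketch below is a minimal, illustrative example assuming the google-genai Python SDK; the model name, file names, and API key handling are placeholders you would adapt to your own setup. Note that API data-use terms differ from the consumer Gemini app and depend on your plan, so review them separately.

```python
# Minimal sketch (assumptions noted above): send a photo of handwritten
# words to an image-capable Gemini model and save the returned image.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

with open("handwriting.jpg", "rb") as f:  # placeholder file name
    photo_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed image-capable model name
    contents=[
        types.Part.from_bytes(data=photo_bytes, mime_type="image/jpeg"),
        "Turn these words into the style of a graffiti mural on a brick wall.",
    ],
)

# Save the first image part returned, if any.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("wordcloud_graffiti.png", "wb") as out:
            out.write(part.inline_data.data)
        break
```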

Three Ways Customers Learn About Your Business from Google AI (and what you can do about it)

· 5 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

If you are a small business owner who wants nothing to do with AI, I appreciate that decision. Midwest Frontier AI Consulting supports business owners who want to use AI responsibly and business owners who want to make an informed decision not to use AI. However, you still need to learn about generative AI, even if only to avoid it and mitigate the negative effects.

Your customers are using AI to learn about your business, often without even realizing they are using AI. “Google” has been a verb for over two decades now, according to Wikipedia, but “googling something” hasn’t stayed the same. AI tools have moved into familiar areas like Google Search and Google Maps. Here are three ways your customers may be using generative AI to learn about your business from Google’s AI tools, and what you can do about it.

Google’s Gemini AI attempts to summarize website information and provide an overview. However, the AI summary can introduce errors (“hallucinations”) that mislead customers. For example, a local Missouri pizzeria was inundated with customer complaints about “updated [sic; they appear to mean ‘outdated’] or false information about our daily specials” presented by Google’s AI Overview (Pizzeria’s Facebook Post).

What Not to Do

Don’t call the information “fake” if it is really information taken out of context. For example, the pizzeria’s Facebook page shows they offer a large pizza for the price of a small pizza, but only on Wednesdays, so the AI Overview was surfacing outdated or out-of-context information rather than inventing it. It is still legitimate to criticize the AI, and to tell customers who want the deal on another day of the week that the offer is only valid on Wednesdays. However, claiming the offer is “made up by the AI” will probably not calm down a customer who may then go to the business’s Facebook profile and see several posts about similar deals (but only on Wednesdays).

Don’t simply tell customers “Please don’t use Google AI.” The customers probably do not realize they are using AI at all. The AI Overview appears at the top of Google Search. Most people probably think they are “just googling it” like they always have and don’t realize the AI features have been added in. So warning them not to use something they didn’t opt into and aren’t actively aware of using is not going to help the situation.

What To Do

  • AI-focused solutions. If the AI is going to mix things up like this, you can try to:
      • Delete old posts about deals that are no longer active, or make temporary posts, so that the AI hopefully won’t include the information in summaries later.
      • Word posts carefully with AI in mind. Maybe “only on Wednesday” would be better than “EVERY Wednesday.” Spell out something that would be obvious to a human but not necessarily to an AI, like “not valid on any other day of the week.”
  • Customer-focused solutions. Ultimately, it is hard to predict how the AI will act, so you will need to prepare for potentially angry customers:
      • Train staff on how to handle AI-created customer confusion (or think about how you yourself will talk to customers about it).
      • Post signs regarding specials to preempt some AI-created confusion.

Double Agent AI: Staying Ahead of AI Security Risks, Avoiding Marketing Hype

· 5 min read
Chad Ratashak
Chad Ratashak
Owner, Midwest Frontier AI Consulting LLC

Hype Around Agents

You may have heard a marketing pitch or seen an ad recently touting the advantages of “Agentic AI” or “AI Agents” working for you. These growing buzzwords in AI marketing come with significant security concerns. Agents take actions on behalf of the user, often with some pre-authorization to act without asking for further human permission. For example, an AI agent might be given a budget to plan a trip, might be authorized to schedule meetings, or might be authorized to push computer code updates to a GitHub repo.

info

Midwest Frontier AI Consulting LLC does not sell any particular AI software, device, or tool. Instead, we want to equip our clients with the knowledge to be effective users of whichever generative AI tools they choose to use, or help our clients make an informed decision not to use GenAI tools.

Predictable Risks…

…Were Predicted

To be blunt: for most small and medium businesses with limited technology support, I would generally not recommend using agents at this time. It is better to find efficient uses of generative AI tools that still require human approval. In July 2025, researchers published Design Patterns for Securing LLM Agents Against Prompt Injections. The paper described a threat model very similar to an incident that later hit the Node.js package manager (npm) ecosystem in August 2025.

“4.10 Software Engineering Agent…a coding assistant with tool access to…install software packages, write and push commits, etc…third-party code imported into the assistant could hijack the assistant to perform unsafe actions such as…exfiltrating sensitive data through commits or other web requests.”
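To make the “human approval” recommendation above concrete, here is a minimal illustrative sketch in Python. The function and tool names are hypothetical and are not taken from the paper or from any vendor’s SDK; the point is simply that any side-effecting action an agent proposes (pushing commits, installing packages, making web requests) passes a human checkpoint before it runs.

```python
# Illustrative sketch only: a human-approval gate between an agent's proposed
# tool calls and their execution. All names here are hypothetical.
SIDE_EFFECTING_TOOLS = {"push_commit", "install_package", "http_post"}

def run_tool_call(tool_name: str, arguments: dict, execute):
    """Run a tool call proposed by the agent, pausing for human review
    whenever the tool can change state outside the local sandbox."""
    if tool_name in SIDE_EFFECTING_TOOLS:
        print(f"Agent wants to run {tool_name} with arguments {arguments!r}")
        answer = input("Approve this action? [y/N] ").strip().lower()
        if answer != "y":
            return {"status": "rejected", "tool": tool_name}
    return {"status": "ok", "result": execute(tool_name, arguments)}
```

The specific code matters less than the workflow: untrusted content (a README, a web page, a package description) can steer the model, so actions that leave your machine should pass through a person or a strict allow-list first.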

tip

Midwest Frontier AI Consulting LLC offers training and consultation to help you design workflows that take these threats into consideration. We stay on top of the latest AI security research to help navigate these challenges and push back on marketing-driven narratives. Then, you can decide by weighing the risks and benefits.

I was telling some folks in the biomedical research industry about the risks of agents and prompt injection earlier this week. The very next day, I read about how compromised npm packages were used to prompt-inject large language model (LLM) coding agents into exfiltrating sensitive data via GitHub.