Private AI Chat: A Consumer's Guide
You probably wouldn't say it out loud at a dinner party. You typed it into ChatGPT instead. So where does it actually go?
There's a category of question people ask AI chatbots that they wouldn't ask their best friend, let alone post on Facebook. Symptoms they're worried about. The lump. The tax thing. The fight with their spouse. The job they're scared to quit. The thing they did once that they can't take back.
Three years ago this would have been a Google search. Now it's a paragraph typed into ChatGPT or Claude.ai. The shift happened so fast that most people haven't stopped to ask the obvious follow-up question.
Where does that paragraph go?
Where your AI chats actually go
The honest version is structural, not dramatic. Your chat isn't being actively read by a stranger right now. But it's sitting somewhere, and that "somewhere" creates real risk. Three things to understand.
1. They get stored
On consumer products like ChatGPT, Claude.ai, and Gemini, your chats sit on the company's servers attached to your account, often indefinitely. On consumer plans, they're also typically used by default to train the next generation of the model unless you've opted out in settings. The chat is data, and the data has a long life.
2. They can be subpoenaed
Your chat history is a record. Records can be subpoenaed. In 2025, a federal court ordered OpenAI to preserve every ChatGPT conversation indefinitely as part of the New York Times copyright lawsuit. Including deleted ones. Including the ones in temporary chat. The order has not been fully lifted at the time of this writing.
3. They can be breached
If a hacker gets your password, your phone number, or your laptop, they get the chat history. If the AI company itself gets breached (it happened to OpenAI in a 2023 incident that exposed some users' chat titles and payment details), your chats are part of the breach.
The existence of the data on someone else's server is what creates risk. Subpoena, breach, future policy changes, a new owner of the company, a future lawsuit you don't know about yet. Privacy is about whether the data exists at all, not about who's looking at it today.
The full list of what's flagged
Every major AI provider (Anthropic, OpenAI, Google) runs automated safety classifiers on every prompt. This is universal across the industry. The good news: the categories they flag are narrow, specific, and published. Here's the list of categories from Anthropic's Usage Policy (other providers' lists are nearly identical):
- Child sexual abuse material (CSAM) and any sexual content involving minors
- Non-consensual sexual content depicting real people
- Weapons of mass destruction: chemical, biological, radiological, nuclear
- Manufacturing instructions for explosives or conventional weapons
- Cyberweapons, malware creation, instructions for exploiting software vulnerabilities
- Attacks on critical infrastructure (power grids, water systems, financial systems, healthcare systems)
- Incitement to violence, terrorism, or genocide
- Human trafficking, exploitation, and forced labor
- Fraud, scams, and large-scale deception schemes
- Doxxing, stalking, and unauthorized surveillance of individuals
- Election manipulation and coordinated political disinformation
- Discrimination in high-stakes decisions (employment, housing, credit, healthcare access)
- Unauthorized impersonation of real people
- Generating defamatory content about identifiable individuals
- Unauthorized practice of regulated professions (medicine, law, financial advice) without disclaimers
Nothing outside this list gets flagged. The classifiers don't flag your tone, your topics, your politics, your personal questions, or your business. They flag this specific list of harms. Tax questions, medical symptoms, relationship problems, work conflicts, identity stuff, financial decisions, sensitive personal life, all of it goes unflagged. The system exists to find people building bioweapons, not people asking about their lives.
The incognito illusion
Both OpenAI and Anthropic have shipped "private" or "incognito" or "temporary" chat modes. They are real features. They are also widely misunderstood.
What incognito modes do: hide the chat from your visible history, opt the chat out of training, and remove cross-chat memory.
What they don't do: stop the AI company from receiving the prompt, encrypt the message end-to-end, or prevent retention during the company's own retention window. See the full breakdown of incognito modes, including which ones actually clear the data and how fast.
If your threat model is "I don't want this chat showing up next time my partner uses my laptop," incognito is fine. If your threat model is "I don't want this conversation discoverable in a lawsuit two years from now," incognito does nothing.
Three real paths to private AI chat
If you actually want the chat to be private, not theatrically private, you have three real options. Each has tradeoffs. Pick the one that matches your situation.
Path 1: Anonymous web tools (cheapest, weakest)
Sites like DuckDuckGo AI Chat, GPTAnon, AnonChatGPT, and Uncensored Chat let you talk to AI without an account. They proxy your message to an underlying model (GPT-4o mini, Claude, Mistral, etc.) and strip identifying info before forwarding.
Best for: one-off questions you don't want associated with your identity. Quick stuff.
Tradeoffs: you have to trust whoever runs the proxy. DuckDuckGo is reputable. The others vary. You generally don't get logged-in features (memory, history, premium models). Model quality is often capped at a free tier.
Path 2: Connection-password chat (best for daily use)
Here's how this works. You make an account directly with Anthropic at console.anthropic.com (it's free to sign up). They give you a connection password (an API key, in developer terms), a long secret that lets apps talk to Claude on your behalf. You paste that password into a privacy-respecting chat interface like Private Claude. The interface gives you a normal chat experience, but the conversation goes through the developer version of Claude with its stronger privacy rules, not through your Claude.ai account.
For Anthropic specifically: developer-version messages are never used for training, operational logs auto-delete after 7 days, and there's no saved chat history. Claude.ai is the opposite: training on by default, indefinite chat history, full account record.
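Under the hood, a connection-password interface is just sending authenticated HTTP requests to Anthropic's developer API (the Messages endpoint) with your key in a header. Here's a minimal Python sketch of that request, nothing more; the model name is an illustrative default, and `ANTHROPIC_API_KEY` is assumed to be an environment variable holding your connection password:

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, prompt: str,
                  model: str = "claude-3-5-haiku-latest") -> urllib.request.Request:
    """Build the HTTP request a chat interface sends on your behalf.

    The connection password (API key) travels in the x-api-key header.
    Nothing in the request ties it to a Claude.ai consumer account.
    """
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__" and os.environ.get("ANTHROPIC_API_KEY"):
    # Only runs if you've exported a key; prints the model's reply text.
    req = build_request(os.environ["ANTHROPIC_API_KEY"], "Say hello in one word.")
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["content"][0]["text"])
```

The point of the sketch: the request carries your key and your message, not your Claude.ai identity, which is why the developer-side privacy rules apply instead of the consumer ones.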
Best for: people who use AI daily, want frontier models, and want their conversations to not exist as a permanent record anywhere.
Tradeoffs: it costs money, and there's no saved history. A free tier (50 Haiku + 25 Sonnet messages, no card) lets you try it out; Private Claude Basic is $17 a month for unlimited messaging, plus your own Anthropic usage on top (about $3 to $5 a month if you're not using it every day). Most people don't cancel their ChatGPT or Claude.ai subscription; they add Private Claude on top for the conversations they don't want sitting in their main account. And on any tier, there's no chat history, by design. Close the tab and the conversation is gone.
Path 3: Self-hosted open-source AI (strongest privacy, lower model quality)
Tools like Ollama, Jan, GPT4All, and LM Studio let you run an open-source model on your own computer. The data never leaves your machine. Period. There is no provider.
Best for: people with strong privacy threat models (journalists, activists, lawyers, security researchers), people who already have a decent GPU, or anyone who wants full data isolation as a principle.
Tradeoffs: the open-source models you can run on a laptop are meaningfully less capable than Claude or GPT-4. They're getting better. But for now, if you need the smartest answer, you're paying a real intelligence tax.
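"The data never leaves your machine" is literal: a tool like Ollama serves a small HTTP API on localhost, and your prompt travels only from one process on your computer to another. A minimal Python sketch of talking to that local endpoint, assuming Ollama is installed, `ollama serve` is running, and you've pulled a model (the `llama3.2` name here is just an example):

```python
import json
import urllib.error
import urllib.request

# Ollama's default local endpoint. Note the address: loopback only,
# so the prompt never crosses the network to any provider.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_local_request(prompt: str, model: str = "llama3.2") -> urllib.request.Request:
    """Build a request to a locally running Ollama server."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"content-type": "application/json"},
        method="POST",
    )

def ask_local(prompt: str, model: str = "llama3.2") -> str:
    """Send the prompt to the local model and return its reply text."""
    with urllib.request.urlopen(build_local_request(prompt, model)) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    try:
        print(ask_local("Why is the sky blue? One sentence."))
    except urllib.error.URLError:
        print("No Ollama server on localhost:11434. Run `ollama serve` first.")
```

There's no API key in this request because there's no account and no provider: the only thing that can read your prompt is the process you started yourself.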
Side-by-side
| Tool | What happens to your data | Cost | Model quality | Best for |
|---|---|---|---|---|
| ChatGPT Plus | Stored in your account indefinitely. Trains by default unless opted out. Subject to the 2025 court preservation order. | $20/mo | Frontier (GPT-4o, o1) | Convenience, brand recognition |
| Claude Pro | Stored in your account indefinitely. Trains by default unless opted out. | $20/mo | Frontier (latest Claude) | Convenience, best reasoning |
| Proton Lumo | Encrypted in transit, processed on Proton servers, deleted after response. | Free / paid | Mid (open models) | Trusting Proton's brand |
| DuckDuckGo AI Chat | Proxied to model providers without identity. No account, no chat history stored. | Free | Mid (GPT-4o mini, Claude Haiku) | Quick anonymous one-offs |
| GPTAnon / AnonChatGPT | Routed through third-party proxy. Trust depends on operator. | Free | Varies | One-off anonymous, low stakes |
| Private Claude | 7-day Anthropic API logs (tied to key, then auto-deleted), no training, no chat history anywhere. Conversation lives in your browser only. | Free tier (50H+25S) or $17/mo unlimited + API (~$3–5) | Frontier (Claude direct) | Daily use, sensitive personal work |
| Ollama / Jan (local) | Never leaves your machine. | Free | Lower (Llama, Mistral) | Strongest privacy, technical users |
Which is right for you
Most people don't need the strongest possible privacy. They need privacy that matches the kind of stuff they actually ask AI about. Match the tool to the question.
- Daily use, sensitive personal stuff: Private Claude or similar. Real Claude, stronger privacy rules, no chat history saved. Read the Private Claude post.
- One-off anonymous question: DuckDuckGo AI Chat. Reputable proxy, no account, throwaway chat.
- Maximum privacy, willing to pay in model quality: Ollama with Llama 3.3 or Mistral. Local, free, no provider involved.
- You don't actually care, you just want the best model: ChatGPT or Claude Pro. Just opt out of training in settings.
The question isn't "what's the most private tool." The question is "what's the right tool for the kind of stuff I actually ask." Most people end up with two: a daily driver that respects privacy, and an anonymous backup for the occasional question they don't want anywhere near their account.
Frequently asked questions
Is there a truly private AI chat?
Yes, but you have to choose what kind of private. A self-hosted open-source model gives you full data isolation but lower model quality. A connection-password tool like Private Claude gives you the real Claude with much stronger privacy rules. A free anonymous web tool can hide your identity but you don't always know who's running it.
Are my ChatGPT chats private?
No. Standard ChatGPT chats are stored in your account, often indefinitely, and on Free and Plus plans are used by default to train future models unless you opt out in settings. Even deleted chats may be retained under legal hold for ongoing court orders, including the 2025 order that required OpenAI to preserve every conversation.
What's a connection password?
A connection password (sometimes called an API key) is a secret you generate directly from your AI provider's account, like Anthropic's. You paste it into a privacy-respecting chat tool like Private Claude. The tool uses it to talk to Claude on your behalf. You pay the provider for usage, and your account relationship with them follows the developer-version privacy rules instead of the consumer ones, which means stronger privacy by default.
Is DuckDuckGo AI Chat private?
DuckDuckGo proxies your messages to underlying models (GPT, Claude, Mistral) and strips identifying information before forwarding. The model providers see the text but not who sent it. DuckDuckGo doesn't store your chats. This is genuinely more private than going to ChatGPT or Claude directly, but you don't get login, history, or premium models.
Is a self-hosted AI like Ollama really private?
Yes, in the strongest sense. Ollama, Jan, GPT4All, and similar tools run an open-source model locally on your computer. The data never leaves your machine. The tradeoff is that the open-source models you can run on a laptop are meaningfully less capable than frontier models like Claude or GPT-4.
Can I use ChatGPT or Claude anonymously?
Sort of. ChatGPT lets you use GPT-4o mini without an account, but your IP address is logged temporarily. Claude.ai requires a login. Tools like GPTAnon, AnonChatGPT, and Uncensored Chat let you use various models without an account, but you have to trust whoever runs the proxy. For real privacy with frontier models, using your own connection password through a privacy-respecting tool is usually a better path than anonymous proxies.
Will my AI chats be subpoenaed?
If they're stored on a provider's servers and you can be linked to the account, yes, they can be subpoenaed. In 2025, OpenAI was ordered to preserve all ChatGPT conversations indefinitely, including deleted ones, due to ongoing litigation. The only chats that aren't discoverable are ones that were never stored in the first place.
Is paying for a private AI worth it?
Private Claude has a free tier (50 Haiku + 25 Sonnet messages, no card) so you can try it before paying. Beyond that, Private Claude Basic is $17 a month plus your own Anthropic usage (about $3 to $5 a month if you're not using it every day). Most people don't cancel their ChatGPT or Claude.ai subscription. They add Private Claude on top for the conversations they don't want sitting in their main account. If you use AI for sensitive personal stuff (health, finances, relationships, work conflicts, identity), $17 a month for a separate private space for those conversations is an easy decision. If you only use AI for low-stakes brainstorming, the free tier is fine.
Use Claude. Keep it private.
Use your Anthropic connection password. Start free with 50 Haiku and 25 Sonnet messages. Upgrade to $17/mo for Opus, file uploads, and Markdown exports.
Get started
No card required · Cancel anytime