Private Claude Chat

Claude is one of the smartest models out there. It's also the one most people don't realize keeps a copy of every word.

Sarah is a freelance designer in Brooklyn. Last winter she asked Claude about a tax issue she'd been losing sleep over. Some old freelance income from 2022 that she'd never properly reported. She typed the whole story into Claude.ai. Names, amounts, the works. Got useful advice. Closed the tab.

Three months later, a friend mentioned in passing that Anthropic's consumer plans train on your conversations by default. Sarah spent the next hour digging through settings she'd never looked at. Training had never been switched off. The chat was still in her history. Anthropic had a copy of her tax confession.

Nobody at Anthropic was going to do anything bad with it. That's not the point. The point is Sarah didn't know. And almost nobody using Claude.ai knows.

What Claude.ai actually does with your chats

Claude is an excellent product. Anthropic is, as AI labs go, one of the more thoughtful ones about safety and policy. But "thoughtful" is not the same as "private." Here's where your data actually goes every time you type something into the consumer Claude.ai web app or desktop app:

  - Your chat is stored in your account history indefinitely, until you delete it.
  - On Free, Pro, and Team plans, it's used to train future models by default unless you opt out.
  - Even deleted chats are retained for at least 30 days.
  - Anything stored can be subpoenaed under court order, or exposed if your account is breached.

None of this is hidden. Anthropic spells it out in their privacy center. The problem isn't that they're doing something dishonest. The problem is that the gap between "this is technically disclosed in a settings page" and "the average user understands what they've signed up for" is enormous.

Privacy by plan

Anthropic's privacy posture is wildly different depending on which Claude product you're paying for. Worth knowing where you actually sit.

Plan                       Trains on your chats?      Default retention           Account-level history
Claude Free                Yes (opt-out available)    30+ days                    Yes, indefinite
Claude Pro ($20/mo)        Yes (opt-out available)    30+ days                    Yes, indefinite
Claude Team                Yes (opt-out available)    30+ days                    Yes, indefinite
Claude Enterprise          No                         Configurable                Configurable
Developer version (API)    No                         7 days, then auto-deleted   None

That row at the bottom matters. The developer version is a fundamentally different deal than the website. We'll come back to it.

The Incognito catch

Anthropic shipped an Incognito mode for Claude.ai in 2025. It's a real feature. It's also widely misunderstood.

Here's what Incognito mode does:

  - Keeps the chat out of your visible account history
  - Keeps the chat out of model training

Here's what Incognito mode does not do:

  - Stop Anthropic from retaining the chat for 30 days for safety screening and abuse detection
  - Put the chat beyond reach of a subpoena during that 30-day window

The honest framing

Incognito mode is private from your account history. It is not private from Anthropic. If you want both, you need a different setup.

There's a developer version of Claude with totally different rules

Anthropic sells Claude two ways. One is the website almost everyone uses (Claude.ai). The other is a "key" that developers and businesses get to plug Claude into their own apps. The privacy rules on the developer version are completely different.

When developers use the key, Anthropic's defaults flip:

  - Conversations are never used to train future models
  - Operational logs are held for 7 days for abuse detection, then auto-deleted
  - There's no account-level chat history at all

That's what Private Claude is built on. We use the developer version, but wrap a normal chat interface around it. So you get the same Claude you'd use on Claude.ai, but with the developer-version privacy rules baked in.
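For the curious, here's roughly what one request to the developer version looks like. This is a minimal illustrative sketch, not Private Claude's actual code: the model name, prompt, and helper function are ours, but the endpoint, headers, and body shape match Anthropic's Messages API.

```python
import json

# Anthropic's Messages API endpoint (the "developer version" of Claude).
API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, model: str, user_message: str, max_tokens: int = 1024):
    """Build the headers and JSON body for one stateless chat turn.

    Illustrative sketch only. The key passed in is the "connection
    password" from console.anthropic.com; it travels in a header with
    each request instead of living in a logged-in account.
    """
    headers = {
        "x-api-key": api_key,               # your key, sent per request
        "anthropic-version": "2023-06-01",  # required API version header
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, json.dumps(body)

# Hypothetical example values, not what any real tool sends:
headers, body = build_request(
    "sk-ant-...", "claude-sonnet-4-5",
    "Is unreported freelance income from 2022 still fixable?",
)
```

Notice what's missing: there's no session token, no account cookie, no conversation ID. The API is stateless, which is exactly why there's nothing for it to remember.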

The full list of what's flagged

Every major AI provider (Anthropic, OpenAI, Google) runs automated safety classifiers on every prompt. This is universal and unavoidable. The good news: the categories these classifiers look for are narrow, specific, and published. Here's the complete list of what Anthropic flags, drawn directly from Anthropic's published Usage Policy:

The point

Everything outside this list is completely private. The classifiers don't flag your tone, your topics, your politics, your personal questions, or your business. They flag this specific list of harms. If you're asking Claude about your tax return, your medical symptoms, your work conflict, your relationship, your finances, or anything else from a normal life, none of it gets flagged. The system exists to detect specific harms. It's not built to track ordinary users.

Private Claude in 90 seconds

If Sarah had used Private Claude for that tax question, here's what would have been different:

  1. Her connection password, not ours. She gets her Anthropic connection password (what Anthropic calls an API key) from console.anthropic.com, the same way you'd grab a password for any service. It's the secret that lets Private Claude talk to Claude on her behalf. We never see it on our servers and never store it; it lives in her browser session only.
  2. The conversation lives only in her browser tab. Nothing is saved to our servers. Nothing is saved to her device. Close the tab and the conversation is gone. We don't keep chat history because chat history is the thing that creates risk.
  3. The message goes directly to Anthropic. Claude itself sees it because Anthropic runs the model. Anthropic's operational logs hold the request for 7 days for abuse detection, then auto-delete. They don't train on it, and there's no saved chat history at Anthropic. So a week after Sarah's tax conversation, there's no permanent record of it anywhere.
  4. Nothing to subpoena. Nothing to breach. Nothing to expose. Three months later, when Sarah remembered the tax question, there was nothing anywhere for anyone to find. Not in our database, because we don't have one. Not in her account, because she doesn't have a saved history. Not at Anthropic, because the API logs auto-deleted a week after the chat.

That's the whole architecture. There's no clever cryptography trick, because we don't need one. The privacy guarantee is structural: the data doesn't exist after you close the tab.
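The "lives only in the tab" idea is simple enough to sketch in a few lines. This is a hedged illustration of the pattern, not Private Claude's real implementation: the conversation is a single in-memory list, each turn resends the full list (because the developer API is stateless), and closing the session destroys the only copy.

```python
class EphemeralChat:
    """Sketch of a tab-lifetime chat session: history lives only in memory.

    Illustrative only. The class name and methods are invented for this
    example; the point is that the in-memory list is the sole record of
    the conversation, so clearing it deletes the chat everywhere.
    """

    def __init__(self):
        self.messages = []  # the only copy of the conversation

    def add_user_turn(self, text: str) -> list:
        # Append the user's message and return the full payload that
        # would go to the API (the API has no memory of earlier turns,
        # so every request must carry the whole conversation).
        self.messages.append({"role": "user", "content": text})
        return list(self.messages)

    def add_assistant_turn(self, text: str) -> None:
        self.messages.append({"role": "assistant", "content": text})

    def close(self) -> None:
        # "Closing the tab": clear the in-memory list and the
        # conversation no longer exists anywhere on the client.
        self.messages.clear()

chat = EphemeralChat()
payload = chat.add_user_turn("Question about 2022 freelance income")
chat.add_assistant_turn("Here's how amended returns generally work...")
chat.close()
```

The design choice worth noticing: privacy here comes from not persisting anything, rather than from encrypting something persisted. There's no database to secure because there's no database.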

Honest tradeoffs

Private Claude is not the right tool for everyone. Here's where it costs you something.

The free tier is for trying, not for living on. You get 50 Haiku messages and 25 Sonnet messages, using your own Anthropic connection password. That's enough for two or three real conversations. Once you hit the cap, you upgrade or stop. There's no recurring free quota.

Paid tiers are unlimited. Once you upgrade to Basic ($17) or Pro ($37), there's no PrivateClaude-imposed message cap. You can chat as much as you want, all day. The only limit is what you choose to spend on your own Anthropic account. Most casual users spend about $3 to $5 a month with Anthropic on top of the subscription.

This isn't a replacement for ChatGPT or Claude Pro. It's an addition. Most Private Claude users keep their main ChatGPT or Claude.ai subscription for everyday work and add Private Claude for the conversations they wouldn't put in their main account. The work email and the private journal aren't the same thing. ChatGPT for one. Private Claude for the other. Total cost for that setup is roughly $20 for ChatGPT or Claude Pro, $17 for Private Claude, and another $3 to $5 in API spend. About $40 a month for everything you do, plus a designated private space when you need it.

You don't get chat history. By design. When you close the tab, the conversation ends. No saved threads. No scroll-back to last week's chat. No exporting old conversations on the free tier. If you need to keep an answer, copy it before you close the tab. This is the privacy tradeoff: history is risk, and we don't store it.

You won't get features Anthropic ships first to Claude.ai. Claude.ai sometimes gets new features (like a new model, or a new tool integration) ahead of API availability. You'll usually get them within a week or two. If you need bleeding-edge first-day-of-release access, Claude.ai Pro is going to be a half-step ahead.

Anthropic still runs the model. Private Claude does not give you privacy from Anthropic's model servers. If your threat model is "Anthropic itself is the adversary," you need a self-hosted open-source model, not Private Claude. We cover the self-hosted option here.

Who this is for

You're probably a Private Claude user if any of these are true:

  - You already pay for ChatGPT or Claude.ai and want a separate space for the conversations you wouldn't put in your main account
  - You ask AI about things like taxes, health, legal questions, money, or relationships, and don't want those chats sitting in a saved history
  - You'd rather a conversation stop existing when you close the tab than live indefinitely on someone else's servers

You're probably not a Private Claude user if:

  - You rely on chat history and want to scroll back to last week's threads
  - You need new Claude features on day one, not a week or two later
  - Your threat model includes Anthropic itself, in which case you need a self-hosted open-source model
  - You're looking for a free unlimited tier; the free tier here is for trying, not living on

Frequently asked questions

Is Claude.ai private?

Not by default. Free, Pro, and Team plans train on your conversations unless you manually opt out in settings. Chats are stored in your account and retained for at least 30 days. They can be subpoenaed under court order, breached if your account is compromised, and used to improve future models on the consumer plans.

Does Anthropic train Claude on my chats?

On consumer plans (Free, Pro, Team) the default is yes, unless you turn off training in your data controls. On the API and Enterprise plans, Anthropic does not train on your inputs or outputs.

Is Claude Incognito mode actually private?

Incognito chats aren't saved to your visible history and aren't used for training, but Anthropic still retains them for 30 days for safety screening and abuse detection. They can be subpoenaed during that window. Incognito is private from your account history, not from Anthropic itself.

What's the difference between Claude.ai and the developer version of Claude for privacy?

Claude.ai is the website most people use, where chats are stored in your account and used to train future models by default. The developer version of Claude (the one businesses plug into their apps) has different rules: operational logs auto-delete after 7 days, conversations are never used for training, and there's no saved chat history. Private Claude uses the developer version under the hood and gives you a normal chat interface on top.

Can I use Claude without giving Anthropic my conversations?

You can't avoid Anthropic seeing your prompts because Anthropic runs the model. But you can avoid having your conversations stored as a chat history, trained on, or attached to a long-lived consumer account. Using a tool like Private Claude (where you connect with your own Anthropic password) keeps the conversation in your browser only, with no chat history saved anywhere.

How much does Private Claude cost?

Private Claude has a free tier with 50 Haiku messages and 25 Sonnet messages. The Basic plan is $17 a month for full Claude (Opus, Sonnet, Haiku) with unlimited messages, file uploads, and Markdown export. You also pay your own Anthropic for usage (about $3 to $5 a month if you're not using it every day). Pro is $37 a month and adds a saved system prompt that applies to every chat, a saved prompt library, and faster send speed. Most people don't cancel their ChatGPT or Claude.ai subscription. They add Private Claude on top for the conversations they don't want sitting in their main account.

Does Private Claude store my conversations?

No. There is no chat history. Conversations live in your browser tab while it's open. Close the tab and the conversation is gone. We don't keep history because history is the thing that creates risk.

What happens to my messages when I use Private Claude?

Anthropic gets your message to generate Claude's response. Their operational logs hold the request for 7 days for abuse detection, then auto-delete. They never use it to train future models. There's no saved chat history at Anthropic, and Private Claude doesn't keep one either. Once you close the tab, the conversation is gone.

Use Claude. Keep it private.

Use your Anthropic connection password. Start free with 50 Haiku and 25 Sonnet messages. Upgrade to $17/mo for Opus, file uploads, and Markdown exports.

Get started

No card required · Cancel anytime