Does Anthropic Read My Chats?

Plan-by-plan answer with direct policy quotes. Plus a decision tree for "where am I, and what's actually happening to my data?"

The short answer

Humans at Anthropic generally don't read individual chats. There's no team of employees scrolling through conversations for fun, and for ordinary use the answer to "does someone at Anthropic see what I'm typing?" is no.

But the data does sit on their servers. On the consumer plans, conversations are stored indefinitely, attached to your account, and used to train future models by default unless you opt out. Human review can happen in narrow circumstances: when an automated classifier flags a chat for a specific harm category, when there's an active abuse investigation, or when Anthropic is legally compelled to produce records.

The full answer depends entirely on which Claude product you're paying for. Free, Pro, and Team have one set of rules. Enterprise has another. The API (the developer version that powers tools like Private Claude) has a third. Below is each one, with the actual retention and training policies.

One framing to hold onto as you read: nobody at Anthropic is reading your chat right now. But on Free, Pro, and Team, your chat is being stored, indexed, and (by default) fed into the next training run. The privacy posture isn't "humans reading you," it's "your data is on a server you don't control, and a future model is being shaped by it."

What "read" actually means

"Does Anthropic read my chats?" is a fuzzy question because "read" can mean three completely different things. Worth pulling them apart.

1. Automated classification. Every prompt you send to Claude, on every plan, gets scanned by automated safety systems. These look for narrow categories of harm (covered below). The scan happens in milliseconds, no human is in the loop, and the system isn't flagging your tone or your topic. It's looking for a specific list.

2. Human review of flagged content. If a classifier triggers, or if there's an abuse investigation, or if there's a subpoena, a specific Anthropic employee may look at a specific chat. This is rare for normal users and isn't browsing-style review.

3. Training on your data. A different question entirely. This is whether your conversation goes into the dataset that shapes the next version of Claude. No human reads it during training. The data goes through automated pipelines. But it does shape the model, and that's the thing most people are actually asking about when they say "is my chat private?"
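To make meanings 1 and 2 concrete, here's a toy sketch of the flow. Anthropic hasn't published its classifier internals, so everything below (the function names, the categories, the threshold) is invented for illustration. The only point is the shape: an automated score against a fixed list, and a human review queue that only ever sees high-confidence flags.

```typescript
// Toy sketch only: Anthropic hasn't published its classifier internals.
// The names, categories, and threshold below are invented for illustration.

type PolicyCategory = "csam" | "wmd" | "malware" | "trafficking";

interface Flag {
  category: PolicyCategory;
  score: number; // classifier confidence, 0..1
}

// Stand-in for a learned safety classifier. The real system is a model,
// not this stub; the stub just gives the flow something to return.
function classify(prompt: string): Flag[] {
  void prompt;
  return []; // the overwhelmingly common case: nothing flagged
}

const THRESHOLD = 0.9;

function handlePrompt(prompt: string) {
  const flags = classify(prompt).filter((f) => f.score >= THRESHOLD);
  if (flags.length === 0) {
    // Routine case: no human in the loop; the prompt proceeds to the model.
    return forwardToModel(prompt);
  }
  // Only a high-confidence flag creates work for a human review queue.
  return enqueueForHumanReview(prompt, flags);
}

function forwardToModel(prompt: string) {
  console.log("forwarding to model:", prompt.slice(0, 40));
}

function enqueueForHumanReview(prompt: string, flags: Flag[]) {
  console.log("queued for human review:", flags.map((f) => f.category));
}
```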

Each meaning has different rules per plan. Here's how they break out.

By plan: Free

Claude Free is the most exposed posture of any plan. Conversations are stored indefinitely as part of your account history, used to train future Claude models by default unless you opt out, and retained for at least 30 days even after you delete them. Everything stored is subject to subpoena.

If you've used Claude Free for a year and never touched the privacy settings, Anthropic has a year of your conversations on file, and a portion of them have been (or will be) used to train future models.

By plan: Claude Pro and Team

The paid consumer tiers behave the same way Free does, with one difference: you've paid for higher message limits and access to better models. The privacy posture is identical.

This is the part most people miss. Paying $20/month for Claude Pro doesn't change the data deal. The training default is still on. The history is still indefinite. The opt-out is still buried in settings most users have never opened.

By plan: Claude Enterprise

Enterprise is the first tier where the data deal flips: training on customer inputs and outputs is off by contract, and retention is configurable rather than indefinite.

Enterprise is also where Anthropic will sign a real DPA, sometimes a BAA, and usually a custom retention agreement. It's where the price climbs from $30/seat into "talk to sales" territory.

By plan: API and Workbench (developer version)

The API is the version of Claude that developers use to build apps. It's the strictest privacy posture Anthropic offers, and it's available to anyone, not just enterprise buyers. Per Anthropic's API and data retention docs: inputs and outputs are not used to train models, operational logs are deleted on a rolling 7-day schedule, and no chat history is saved on Anthropic's side.

This is the tier Private Claude uses. Because we're built on the API, we inherit the API's privacy posture: no training, 7-day rolling logs, no saved history at Anthropic. We don't store conversations on our side either, which closes the last gap.
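For developers curious what "built on the API" looks like in practice, here's a minimal sketch using Anthropic's official TypeScript SDK (@anthropic-ai/sdk). The model ID is a placeholder; check Anthropic's current model list. Note that the privacy properties (no training, 7-day logs) come from the API tier itself, not from anything in the code.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// The client reads ANTHROPIC_API_KEY from the environment.
const client = new Anthropic();

async function ask(question: string): Promise<string> {
  const message = await client.messages.create({
    model: "claude-3-5-sonnet-latest", // placeholder; substitute a current model ID
    max_tokens: 1024,
    messages: [{ role: "user", content: question }],
  });
  // Responses arrive as content blocks; keep the text ones.
  return message.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("");
}

ask("Hello, Claude").then(console.log);
```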

The Usage Policy classifier: what actually gets flagged

Every prompt you send, on every plan, is scanned by an automated classifier. People hear "scanned" and assume the worst. The reality is that the categories are narrow, specific, and published. Drawn from Anthropic's published Usage Policy, the flagged categories are: child sexual abuse material, non-consensual sexual content of real people, weapons of mass destruction information, weapons manufacturing instructions, malware and exploits, attacks on critical infrastructure, terrorism incitement, human trafficking, large-scale fraud, doxxing and stalking, election manipulation, discrimination in high-stakes decisions, impersonation, defamation, and unauthorized practice of regulated professions.

The point

Tax questions, health questions, money questions, work conflicts, relationship problems, religious questions, mental health concerns, legal questions about your own life: none of these trigger the classifier. The system flags a specific list of harms. It's not a tone analyzer, it's not a topic tracker, and it's not surveillance of ordinary use. If your conversation is about your life, your business, or your problems, no human at Anthropic is going to see it because of the classifier.

A decision tree: where am I and what's happening?

Find your plan below. Each entry tells you what's happening to your data right now.

Claude Free: Stored indefinitely on your account. Used to train future Claude models by default. Retained 30+ days even after you delete a chat. Subject to subpoena and court orders.
Claude Pro ($20/mo): Same as Free. The subscription buys message limits and model access, not stronger privacy.
Claude Team ($30/seat/mo): Same as Free, plus your workspace admin can see member chats.
Claude.ai Incognito mode: Not in your visible history and not used for training, but Anthropic still retains it for 30 days for safety screening. Private from your account, not from Anthropic.
Claude Enterprise: Not used for training. Retention configurable per contract. Human-review controls may be negotiated in the DPA. Subject to whatever your IT/legal team negotiated.
API / Workbench (developer version): Operational logs auto-delete after 7 days. Never used for training. No saved chat history at Anthropic. The strongest standard privacy posture Anthropic offers.
Private Claude: Built on the API, so it inherits all of the above, plus no chat history stored on Private Claude's side. The conversation lives in your browser tab and ends when you close it.

If you want a one-line summary: on the consumer tiers, your data is the product. On the API tier and on Private Claude, it isn't.
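To make the last entry concrete, here's the general pattern behind "the conversation lives in your browser tab." This is a sketch of the design choice, not Private Claude's actual source: chat state sits in a plain in-memory variable and is never written to localStorage, IndexedDB, or a backend, so closing the tab is the delete button.

```typescript
// Sketch of the "history lives in the tab" pattern. Not Private Claude's
// actual code; all names here are illustrative.

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Module-scoped array: exists only for the lifetime of this page.
const transcript: ChatMessage[] = [];

function record(role: ChatMessage["role"], content: string) {
  transcript.push({ role, content });
  // Deliberately absent: localStorage.setItem(...), fetch("/save", ...).
  // When the tab closes, the JS heap is discarded and the transcript
  // goes with it.
}

record("user", "Does Anthropic read my chats?");
record("assistant", "Humans generally don't; see the breakdown above.");
console.log(`messages this session: ${transcript.length}`);
```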

Frequently asked questions

Does Anthropic read my Claude chats?

Humans at Anthropic generally don't read individual chats. Automated classifiers scan every prompt for narrow safety categories, and employees may review specific chats during incident response or under legal compulsion. On Free, Pro, and Team plans, your conversations are stored on Anthropic's servers and used to train future models by default unless you opt out. On Enterprise and the API, training is off and retention is much shorter.

Does Anthropic train Claude on my conversations?

On Free, Pro, and Team the default is yes, with an opt-out in settings. On Enterprise and the API, training on customer inputs and outputs is off by contract. Private Claude runs on the API, so no training happens on your conversations.

How long does Anthropic keep my Claude chats?

On Free, Pro, and Team, chats are stored indefinitely as part of your account history, with a minimum 30-day retention even if you delete them. On Enterprise, retention is configurable per contract. On the API, operational logs auto-delete after 7 days and there's no chat history saved at all.

Can a human at Anthropic see my chat?

Not as a general practice. A human only sees a specific chat if it's flagged by automated classifiers for one of the narrow Usage Policy categories, if there's an active incident or abuse investigation, or if Anthropic is legally compelled to produce it under subpoena or court order.

What does Anthropic's safety classifier flag?

Narrow categories drawn from Anthropic's published Usage Policy: child sexual abuse material, non-consensual sexual content of real people, weapons of mass destruction information, weapons manufacturing instructions, malware and exploits, attacks on critical infrastructure, terrorism incitement, human trafficking, large-scale fraud, doxxing and stalking, election manipulation, discrimination in high-stakes decisions, impersonation, defamation, and unauthorized practice of regulated professions. Normal life topics like taxes, health, money, relationships, work, or religion do not trigger flags.

Is Claude.ai Incognito mode actually private from Anthropic?

Incognito hides the chat from your account history and exempts it from training, but Anthropic still receives the message and retains it for 30 days for safety screening. Incognito is private from your account history, not from Anthropic itself.

What's the difference between Claude.ai and the API for privacy?

Claude.ai is the consumer website where chats are stored in your account and used for training by default. The API (developer version) auto-deletes operational logs after 7 days, never trains on your conversations, and has no saved chat history. Private Claude is built on the API, so it inherits the API's privacy posture.

Can my Claude chats be subpoenaed?

Yes, if they exist. On consumer plans where Anthropic stores your full history, a court order can compel production. On the API tier, the 7-day rolling log is the only window where data exists, and there's no long-term chat history to subpoena.

Use Claude. Keep it private.

Start free with 50 Haiku and 25 Sonnet messages. Upgrade to $17/mo for Opus, file uploads, and Markdown exports.

Get started

No card required · Cancel anytime