Can My AI Chats Be Subpoenaed?

In 2025 a federal judge ordered OpenAI to preserve every conversation indefinitely, including the deleted ones. Here's what it means for everyone using AI.

A 2025 ruling changed the answer for ChatGPT

Until 2025, the answer to "can my AI chats be subpoenaed?" was an unsatisfying "probably, if they still exist by the time someone asks." There was a retention window, and outside the window the data was supposed to be gone.

Then the New York Times sued OpenAI. In the discovery phase of that copyright lawsuit, a federal magistrate judge ordered OpenAI to preserve every ChatGPT conversation indefinitely as potential evidence. Not just chats from people who'd been near the lawsuit. Every chat. Every account. Free, Plus, Pro, Team. Including chats users had deleted. Including Temporary Chats that OpenAI's own product page said wouldn't be retained.

The order excluded Enterprise customers and API users with Zero Data Retention agreements. Those are contracts where the provider agrees in writing not to keep the data. Everyone else: preserved.

The court's May 2025 preservation order requires OpenAI to retain "all output log data that would otherwise be deleted on a going-forward basis," including data users had affirmatively deleted, until further order from the court.

That order had not been lifted as of this writing. So if you've used ChatGPT in the last few years and you're not on an Enterprise contract, your chats are sitting in OpenAI's preservation system right now. Including the ones you thought you'd deleted.

Once data exists in a preservation system, it's reachable. Not just by the New York Times. By anyone with a valid subpoena that hits OpenAI.

Subpoena 101 for AI chats

A subpoena for your AI chats only works if three things are simultaneously true. Break any one and the chain breaks.

  1. The data has to exist. If the host already deleted it under their normal retention policy, there's nothing to produce. "We don't have it anymore" is a complete answer.
  2. The data has to be identifiable to you. Subpoenas don't say "give me every chat about taxes." They say "give me every chat associated with this account, this email, this IP, this payment method." If the host has no way to tie the data to you, they can't comply.
  3. The host has to be able to comply. A US-based AI company served with a US court order will comply. A self-hosted model on your own laptop? There's nobody to serve.
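
As a quick mental model, the three conditions combine like a boolean AND. This is an illustrative sketch, not legal advice; the function and its arguments are invented for the example.

```python
def subpoena_can_reach(data_exists: bool, identifiable_to_you: bool, host_can_comply: bool) -> bool:
    """All three conditions must hold at once for a subpoena to produce your chats."""
    return data_exists and identifiable_to_you and host_can_comply

# Consumer ChatGPT under the preservation order: every leg holds.
subpoena_can_reach(data_exists=True, identifiable_to_you=True, host_can_comply=True)   # True

# Self-hosted model on your own laptop: there is no host to serve.
subpoena_can_reach(data_exists=True, identifiable_to_you=True, host_can_comply=False)  # False
```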

Most privacy advice focuses on the first leg: don't let the data exist. That's why no-history tools matter. They take the data out of the equation before any court ever asks.

Where your chats actually sit

Different products store conversations in very different places, with very different retention rules. This is the table that matters.

| Product | Where stored | Default retention | Subpoena reachable? |
|---|---|---|---|
| ChatGPT (Free / Plus / Pro / Team) | OpenAI servers, attached to your account | Indefinite (per 2025 preservation order) | Yes, indefinitely |
| ChatGPT Temporary Chat | OpenAI servers (per court order) | Indefinite (per 2025 preservation order) | Yes, indefinitely |
| ChatGPT Enterprise / ZDR API | OpenAI infra, no retention | None (excluded from order) | No (data not retained) |
| Claude.ai (Free / Pro / Team) | Anthropic servers, attached to your account | 30+ days; account history indefinite | Yes, while retained |
| Claude Incognito | Anthropic servers | 30 days | Yes, during 30-day window |
| Anthropic API (no chat-history feature) | Anthropic operational logs | 7 days, then auto-deleted | Only during 7-day window |
| Self-hosted (Ollama, Jan, Llama) | Your own machine | Whatever you choose | Not by host (no host) |

For a deeper plan-by-plan breakdown of Anthropic's retention policies, see "Does Anthropic read my chats?"

Real cases of AI chats becoming evidence

The NYT preservation order is the headline case. It's not the only example of stored AI chats becoming a problem.

Samsung's source-code leak (2023)

Samsung engineers pasted internal source code into ChatGPT to debug it. The code went onto OpenAI's servers. Samsung discovered what was happening, banned ChatGPT internally, and started building their own internal AI tool. The IP wasn't necessarily exposed to the public, but it was no longer under Samsung's control. Critically: it was now on a server that could be subpoenaed in any future lawsuit involving Samsung.

Messaging app history in litigation

You don't have to look hard for examples of stored chat history showing up as evidence. iMessage backups in iCloud have been pulled into divorce cases. WhatsApp messages have surfaced in employment lawsuits. Slack DMs are routinely subpoenaed in IP disputes and harassment cases. AI chats are the same category of record. They're text conversations sitting in a third-party host's database, attached to your identity.

The legal system already knows how to subpoena chat logs. AI chats are just the newest variant of an old type of evidence.

What survives a subpoena

People assume that deleting a chat protects them. It doesn't, on most platforms.

When you delete a chat from your visible history on Claude.ai or ChatGPT, the chat moves to a different state in the host's database. It's flagged for removal but typically retained for 30 days for safety processing. During those 30 days, it remains subpoena-able. On ChatGPT today, under the 2025 preservation order, "deleted" doesn't even mean "queued for removal." It means "retained indefinitely as evidence."
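
In engineering terms this is a soft delete: the record is flagged and hidden rather than erased, and a purge job removes it later. The sketch below is a hypothetical model of that pattern; the schema, field names, and 30-day window are illustrative assumptions, not OpenAI's or Anthropic's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

SAFETY_RETENTION = timedelta(days=30)  # assumed retention window for this sketch

@dataclass
class StoredChat:
    chat_id: str
    user_id: str
    content: str
    deleted_at: Optional[datetime] = None  # soft delete: a flag is set, nothing is erased yet

    def visible_to_user(self) -> bool:
        # What the delete button actually changes: whether you can still see the chat.
        return self.deleted_at is None

    def still_on_servers(self, now: datetime) -> bool:
        # What a subpoena cares about: whether any copy still exists.
        if self.deleted_at is None:
            return True
        return now < self.deleted_at + SAFETY_RETENTION

now = datetime.now(timezone.utc)
chat = StoredChat("c_123", "u_456", "my tax question", deleted_at=now)
chat.visible_to_user()      # False: gone from your history
chat.still_on_servers(now)  # True: still producible for roughly 30 more days
```

In this model, the delete button only flips the first flag. Whether the second check ever returns False is decided by the purge job, or by a preservation order that pauses it.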

Backups are the other layer most users don't think about. Server backups exist for disaster recovery. Companies don't advertise their backup retention windows because they don't have to. Your "deleted" chat may exist in a backup tape for months or years.

The only consumer-facing setup that genuinely breaks this chain is a Zero Data Retention agreement. ZDR is a contractual commitment that the provider will not retain the data after generating the response. It's standard for Enterprise contracts and for some API customers. It's not available on consumer Claude.ai or consumer ChatGPT.

The asymmetry

Consumer plans give you a delete button that doesn't really delete. Enterprise plans give you a contract that does. Same product, different legal architecture. The button is for your peace of mind. The contract is for the lawyers.

The Anthropic API picture

Here's the part most consumer privacy writeups miss. Anthropic sells Claude two ways: the consumer website (Claude.ai) and the developer API. They have completely different retention rules.

On the API, operational logs auto-delete after 7 days. There is no chat-history feature. Each API call is a stateless request. Anthropic isn't building a "your past conversations" view because the product isn't designed for that.
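
To make the statelessness concrete, here is a minimal sketch using Anthropic's Python SDK. The model name is a placeholder and the API key comes from your own environment; the point is that the client carries the entire conversation with every request, because there is no server-side chat history to lean on.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from your environment

# The full conversation travels with every request; the API does not remember
# earlier turns for you, because there is no account-level chat history.
conversation = [
    {"role": "user", "content": "Summarize the tax implications of selling a rental property."},
]

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whichever model you have access to
    max_tokens=1024,
    messages=conversation,
)

# To continue the thread, the client appends turns locally and resends them all next time.
conversation.append({"role": "assistant", "content": response.content[0].text})
conversation.append({"role": "user", "content": "What changes if I lived in it for two years?"})
```

Drop that client-side list and the conversation is gone; the only remaining copy is the operational log, which deletes on the schedule described above.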

So a chat from a year ago, run through the API, has no copy anywhere by design. Not in your account (no account-level history). Not in operational logs (auto-deleted after 7 days). Not in backups (the data wasn't preserved long enough to back up).

If a subpoena lands at Anthropic in May for an API conversation that happened in March, there's nothing to produce. The answer is "we don't have it." That answer is structural, not promotional.

Practical playbook

Five concrete steps any AI user can take today, ranked from easiest to most thorough.

  1. Opt out of training on consumer plans. On Claude.ai and ChatGPT, turn off the "use my chats to train models" setting. Defaults are usually on. This doesn't change retention, but it stops your conversations from leaving the retention bucket and entering the training bucket.
  2. Delete chat history regularly. Imperfect on ChatGPT (the preservation order overrides), but on Claude.ai it does start the 30-day removal clock. The principle: don't accumulate. The longer your account history, the bigger the surface.
  3. For sensitive conversations, use the API or a no-history tool. Don't put your tax mess, medical questions, or work conflict into the same account that has two years of recipe ideas. Use a no-history product like Private Claude, or use the Anthropic API directly through a developer tool.
  4. For HIPAA or regulated work, get a BAA-backed deployment. Consumer plans can't sign Business Associate Agreements. If you're a clinician or handling protected health information, you need an Enterprise or BAA-eligible setup. The contract is the protection.
  5. Use ZDR or API-tier when available. Zero Data Retention contracts are the strongest commercial setup. If your employer has Enterprise access with ZDR, use that for sensitive work over your personal ChatGPT account.

What Private Claude does about this

Private Claude is built on the API, not on Claude.ai. That's the architectural choice that fixes the subpoena problem at the foundation: conversations live only in your browser tab, no account-level history is ever created, and the only server-side trace is Anthropic's operational log, which auto-deletes after 7 days.

So the math is simple. A subpoena for a Private Claude conversation arriving 8 days after the chat finds nothing on our servers (we never had it), nothing in your account (no account-level history exists), and nothing at Anthropic (the operational log auto-deleted). There's no record left to produce.

This isn't a promise we have to keep. It's a property of the system. Data we don't store can't be subpoenaed out of us.

Frequently asked questions

Can my ChatGPT or Claude chats be subpoenaed?

Yes. AI chats are stored records on a third party's servers, and stored records can be subpoenaed. In 2025 a federal judge in the New York Times v. OpenAI lawsuit ordered OpenAI to preserve every ChatGPT conversation indefinitely as potential evidence, including Free, Plus, Pro, and Team accounts, including deleted chats and Temporary Chats. The order had not been lifted as of this writing.

Does deleting a chat remove it from a subpoena's reach?

Not on consumer ChatGPT. The 2025 preservation order requires OpenAI to keep deleted chats and Temporary Chats indefinitely. On Claude.ai, deleting removes the chat from your visible history, but Anthropic typically retains a copy for at least 30 days for safety processing, during which it remains subpoena-able.

What about Temporary Chats and Incognito mode?

These hide the chat from your account history and prevent training. They do not delete the chat from the host's servers. ChatGPT Temporary Chats are explicitly covered by the 2025 preservation order. Claude Incognito chats sit on Anthropic's servers for 30 days. Both are reachable by subpoena during their retention window.

Are API and Enterprise accounts safer?

Generally yes. Enterprise customers with Zero Data Retention agreements were excluded from the 2025 preservation order. The Anthropic API auto-deletes operational logs after 7 days and has no chat-history feature, so there's almost nothing for a subpoena to reach a week after the conversation.

What does a subpoena actually need to work?

Three things. The data has to exist somewhere. The data has to be identifiable to you (account, IP, payment method, device). And the host has to be able to comply with a court order. Break any one of those and the chain breaks.

Can law enforcement read my chats without a subpoena?

Generally not. In most cases law enforcement needs a subpoena, warrant, or court order appropriate to the request, and AI providers publish transparency reports on the government data requests they receive. The bigger risk for most users isn't unauthorized access by law enforcement. It's civil litigation (divorce, employment, IP disputes) where opposing counsel issues a subpoena and the host complies.

How does Private Claude reduce subpoena risk?

Private Claude doesn't store chat history. Conversations live in your browser tab while it's open and are gone when you close it. The chat goes to Anthropic via the developer API, where operational logs auto-delete after 7 days. So 7 days after the conversation, there's no copy on Private Claude's servers, no copy in your account, and no copy at Anthropic. There's nothing left to subpoena.

Should I just stop using AI for sensitive topics?

You don't have to. You just need to match the tool to the topic. Use Claude.ai for low-stakes work where chat history is useful. Use a no-history tool like Private Claude or a self-hosted model for sensitive topics where you'd rather the conversation didn't outlive the moment. Same way you'd use email for one kind of message and Signal for another.

Use Claude. Keep it private.

Use your Anthropic connection password. Start free with 50 Haiku and 25 Sonnet messages. Upgrade to $17/mo for Opus, file uploads, and Markdown exports.

Get started

No card required · Cancel anytime