Zero-Retention AI for Regulated Teams

ZDR defined plainly. The exceptions vendors don't talk about. A buying checklist for compliance officers who can't afford to learn the hard way.

Every compliance officer evaluating AI in 2026 has a copy of the same vendor pitch deck on their desk. Slide three says "zero data retention" in a confident sans-serif. The footnote at the bottom of slide twelve, in 8-point grey, says "subject to operational logging and abuse-detection requirements."

That gap, between the confident slide and the grey footnote, is where every meaningful conversation about AI privacy now lives. ZDR has become a buying requirement for healthcare, finance, legal, and any other regulated industry. It's also one of the most slippery terms in the AI vendor stack. Here's what it actually means, where the exceptions live, and what to ask before you sign.

What ZDR (Zero Data Retention) actually means

ZDR is a contract term. It means the vendor agrees not to store your inputs (the prompts you send) or outputs (the responses generated) beyond what's needed to deliver the response back to your application. Once the response is delivered, the content is gone.

That's the clean version. The less clean version: ZDR addresses content storage. It doesn't, by itself, address every other class of data the vendor's infrastructure produces. Operational logs for abuse detection, billing records, request metadata, sub-processor logs at the CDN and load-balancer level. These can persist even when ZDR is in force, often for short windows, sometimes longer.

The mental model worth holding: ZDR is a contractual posture about content. It is not a magical absence of all data about your usage. Vendors who say "we don't keep anything" are either using shorthand or oversimplifying. Vendors who say "we contractually agree not to retain inputs or outputs beyond response generation, with these specific exceptions" are telling you the truth.

Why every regulated team is asking about it now

ZDR existed as a concept for years. It became a buying requirement in 2025.

In May 2025, a federal magistrate judge in The New York Times v. OpenAI and Microsoft ordered OpenAI to preserve all ChatGPT logs indefinitely as part of discovery in the copyright lawsuit. The order applied to consumer ChatGPT, Plus, Team, and standard API users. It did not apply to ChatGPT Enterprise customers or to API customers with a ZDR clause explicitly written into their contract. (We covered the broader implications in Can My AI Chats Be Subpoenaed.)

The compliance officer's reaction was immediate and uniform. If a court order can override a vendor's stated retention policy and force indefinite preservation, then the only thing standing between your client data and a discovery production is a contract clause. Either you have one, or you don't.

That order rewrote the buying criteria for every regulated team in the country. ZDR went from a feature to a gate. No clause, no procurement.

The three layers of retention to ask about

When a vendor says "we have ZDR," the right next question is "across which layers?" There are three, and they're worth separating.

Layer 1: Input and output content. The actual text of the prompt and the response. This is what most people picture when they hear "retention." A real ZDR contract addresses this layer cleanly: content isn't stored beyond response generation.

Layer 2: Operational and abuse-detection logs. Most vendors run automated classifiers on prompts to detect abuse (CSAM, weapons synthesis, malware generation, etc.). Those classifiers produce flags, scores, and short-term log entries. Some vendors keep a rolling window of the input itself for human review when a flag fires. This is the layer where "ZDR" gets fuzzy. Anthropic defaults to 7 days. OpenAI defaults to 30 days. Azure lets you opt out with approval.

Layer 3: Metadata logs. Who hit the API, with which key, against which model, at what time, with what token count, from what IP. This isn't your content. It's a record of your usage pattern. Even under aggressive ZDR, metadata logs almost always exist for billing, capacity planning, and security forensics. The retention window for these can be much longer (90 days, a year, or more).

True ZDR addresses all three layers explicitly. A contract that's silent on layers 2 and 3 isn't full ZDR. It's content-only ZDR.
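The three-layer test can be run mechanically when reviewing a contract. Here's a minimal sketch of that logic; the layer names and the "full ZDR" / "content-only ZDR" labels come from the model above, while the data structure and function are illustrative shorthand, not any vendor's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RetentionLayer:
    name: str
    addressed_in_contract: bool           # does the contract name this layer explicitly?
    retention_days: Optional[int] = None  # explicit window, if one is stated

def classify_zdr(layers: dict[str, RetentionLayer]) -> str:
    """Label a contract per the three-layer model described above."""
    content = layers["content"]
    # Layer 1 must be addressed AND content must not persist past response generation.
    if not (content.addressed_in_contract and content.retention_days == 0):
        return "not ZDR"
    # Full ZDR names all three layers; silence on operational logs
    # or metadata leaves you with content-only ZDR.
    if all(layer.addressed_in_contract for layer in layers.values()):
        return "full ZDR"
    return "content-only ZDR"

contract = {
    "content":     RetentionLayer("input/output content", True, 0),
    "operational": RetentionLayer("abuse-detection logs", True, 7),
    "metadata":    RetentionLayer("usage metadata", False),  # contract is silent
}
print(classify_zdr(contract))  # content-only ZDR
```

The point of the sketch: a contract that nails layer 1 but stays silent on layers 2 and 3 still fails the "full ZDR" test.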

Anthropic's specific posture

Anthropic publishes its API retention defaults clearly: inputs and outputs aren't stored beyond what's needed to generate the response, operational logs kept for abuse detection are auto-deleted after 7 days, and API data isn't used to train models by default.

Anthropic's posture is one of the cleaner ones in the market. The 7-day operational window is short by industry standards, the no-training default is firm, and the documentation doesn't bury the exceptions in legalese.

OpenAI's specific posture

OpenAI's posture varies dramatically by tier, which is the source of most compliance confusion. Consumer ChatGPT is currently under court order to preserve logs indefinitely. The standard API retains inputs and outputs for 30 days by default, with no ZDR option. Enterprise customers can negotiate an explicit ZDR clause, and that clause is the only route to ZDR.

If your team is using OpenAI through anything other than Enterprise with that clause in place, you don't have ZDR. That's the practical bottom line, and it's worth saying plainly, because the marketing language tends to blur it.

Microsoft Azure OpenAI

Many regulated enterprises route their OpenAI traffic through Azure rather than going to OpenAI directly. Azure has its own retention posture, distinct from OpenAI's, and it's worth understanding.

Azure OpenAI defaults to abuse-monitoring logs similar to OpenAI's. The difference: Azure offers a path to opt out of abuse monitoring entirely, which gets you to a true ZDR posture. The catch is that opt-out requires Microsoft's approval. You apply for the modified data handling, justify the use case (typically healthcare, financial services, legal, or government), and Microsoft reviews.

For approved customers, the result is the strictest ZDR posture in the OpenAI family: no abuse logs, no content retention, no metadata beyond billing. For everyone else on Azure, retention sits closer to OpenAI's standard API.

If you're routing through Azure for compliance reasons, the abuse-monitoring opt-out is the configuration that actually delivers ZDR. Without it, you have a slightly different version of OpenAI's standard retention.

The exceptions vendors don't lead with

Even the cleanest ZDR contract has carve-outs. They aren't hidden, but they aren't on the front of the pitch deck either. The honest list:

ZDR exceptions to expect
  • Abuse detection logs. Almost every vendor reserves the right to retain flagged prompts (those that hit a classifier) for human review. Window varies from 7 to 90 days.
  • Legal hold. Court orders, subpoenas, and law enforcement requests override ZDR. The contract typically requires the vendor to preserve relevant data when served, regardless of stated retention.
  • Billing and usage records. Token counts, API call counts, model selection, timestamps. This isn't content, but it's metadata that can identify usage patterns. Retained for as long as the billing relationship plus statute of limitations.
  • Sub-processor caching. CDN and load-balancer logs at the edge. Short-lived (typically minutes to hours) but real. Vendors disclose these in their sub-processor list.
  • Performance caches. Some vendors cache response prefixes for latency optimization. Anthropic's prompt caching is opt-in and customer-controlled. Other vendors do this less transparently.
  • Aggregated telemetry. Anonymized request volume, error rates, latency. Not individually identifying, but worth knowing it exists.

None of these are deal-breakers. They're operational realities. The question for compliance is whether the vendor names them explicitly in the contract or buries them in vague language about "necessary operations." Named exceptions are auditable. Vague exceptions are not.

Vendor checklist for ZDR

Ten questions to put in front of any AI vendor before you sign. If they can't answer cleanly, that's the answer.

The 10-question ZDR audit
  • 1. Is there an explicit ZDR clause in the BAA, MSA, or DPA we'll sign? Not just a marketing page. A clause with section number and signature page reference.
  • 2. List every category of data retained, by name, with retention windows. Inputs, outputs, abuse logs, metadata, billing, sub-processor logs, telemetry. Each one named, each one with a number of days.
  • 3. What's the operational log lifecycle? When is data written, when is it auto-deleted, who can access it during the window, and what's the audit trail of that access.
  • 4. What sub-processors retain data, and for how long? CDN, infrastructure providers, monitoring vendors. The vendor's ZDR doesn't bind their sub-processors automatically.
  • 5. Where is the data physically located? US, EU, multi-region. Data residency matters for GDPR, HIPAA, and state-level regulations.
  • 6. What's the deletion-on-demand SLA? If a customer requests deletion, how fast does it happen. Hours, days, weeks. In writing.
  • 7. What's the breach notification SLA? 24 hours, 72 hours, "without undue delay." HIPAA requires 60 days max. Many state laws require shorter.
  • 8. Who at the vendor can access our data, and is that access logged? Engineer access, support access, abuse-review access. Each one with an audit log you can request.
  • 9. What's the abuse-investigation carve-out scope? When can the vendor look at flagged content, what's the legal basis, who at the vendor reviews it, and what happens after.
  • 10. Show me the vendor's own ZDR proof. SOC 2 Type II, ISO 27001, HITRUST, or third-party audit. Self-attestation isn't proof.

This is the level of specificity a compliance officer should expect. Vendors that are serious about ZDR can answer all ten in a discovery call. Vendors that aren't will deflect to marketing language or "let me get back to you."
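For teams running this audit across several vendors, the ten questions can be tracked as a simple scorecard. A hypothetical sketch (the shortened question keys and the pass/gap scoring rule are my own shorthand for the list above, not a standard instrument):

```python
# Shortened keys for the 10-question ZDR audit described above.
AUDIT_QUESTIONS = [
    "explicit ZDR clause in BAA/MSA/DPA",
    "every retained data category named with windows",
    "operational log lifecycle documented",
    "sub-processor retention disclosed",
    "physical data residency stated",
    "deletion-on-demand SLA in writing",
    "breach notification SLA stated",
    "vendor employee access logged",
    "abuse-investigation carve-out scoped",
    "third-party ZDR proof (SOC 2 / ISO / HITRUST)",
]

def audit(answers: dict[str, bool]) -> tuple[int, list[str]]:
    """Score a vendor: count of clean answers, plus the open gaps."""
    gaps = [q for q in AUDIT_QUESTIONS if not answers.get(q, False)]
    return len(AUDIT_QUESTIONS) - len(gaps), gaps

# Example: a vendor that answers everything cleanly except audit proof.
answers = {q: True for q in AUDIT_QUESTIONS}
answers["third-party ZDR proof (SOC 2 / ISO / HITRUST)"] = False
score, gaps = audit(answers)
print(score, gaps)  # 9 clean answers, one named gap
```

Unanswered questions score the same as bad answers, which matches the rule above: if they can't answer cleanly, that's the answer.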

Vendor posture comparison at a glance

| Vendor / Tier | ZDR available | Default retention | Training on inputs |
| --- | --- | --- | --- |
| Anthropic API | Yes (default) | 7-day operational logs | No |
| Anthropic Enterprise | Yes (negotiable) | Configurable | No |
| OpenAI Standard API | No | 30 days | No (default) |
| OpenAI Enterprise | Yes (with clause) | Configurable | No |
| OpenAI Consumer ChatGPT | No | Indefinite (court order) | Yes (opt-out available) |
| Azure OpenAI (standard) | Partial | 30-day abuse logs | No |
| Azure OpenAI (opt-out approved) | Yes | None (content) | No |
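The same comparison can be kept as structured data so procurement can filter tiers programmatically. A sketch mirroring the table above (the tuple layout and function name are mine; the rows restate the article's comparison):

```python
# (vendor/tier, zdr_available, default_retention, trains_on_inputs)
VENDORS = [
    ("Anthropic API",                   "yes",     "7-day operational logs",   False),
    ("Anthropic Enterprise",            "yes",     "configurable",             False),
    ("OpenAI Standard API",             "no",      "30 days",                  False),
    ("OpenAI Enterprise",               "yes",     "configurable",             False),
    ("OpenAI Consumer ChatGPT",         "no",      "indefinite (court order)", True),
    ("Azure OpenAI (standard)",         "partial", "30-day abuse logs",        False),
    ("Azure OpenAI (opt-out approved)", "yes",     "none (content)",           False),
]

def zdr_tiers(vendors: list[tuple]) -> list[str]:
    """Tiers where ZDR is actually available, per the comparison."""
    return [name for name, zdr, _, _ in vendors if zdr == "yes"]

print(zdr_tiers(VENDORS))
# ['Anthropic API', 'Anthropic Enterprise', 'OpenAI Enterprise',
#  'Azure OpenAI (opt-out approved)']
```

Note that "partial" (Azure standard) deliberately fails the filter: until the abuse-monitoring opt-out is approved, it isn't a ZDR tier.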

Where Private Claude fits

Private Claude is built on the Anthropic API, which means we inherit Anthropic's retention defaults at the model layer. The 7-day operational rolling log applies to traffic going through us, the same as it would for any direct Anthropic API customer. No training, no input/output retention beyond response generation.

On top of that, the Private Claude application has zero chat history of its own. There's no database storing conversations. Once you close the tab, the conversation is gone from our side. We don't have a copy to subpoena because we never had one in the first place. (More on that posture in HIPAA-Compliant AI Chat.)

For Business customers with stricter requirements:
  • VPC deployment, so the application runs inside your own network boundary.
  • A signed BAA for HIPAA-covered workloads.
  • Configuration for the strictest ZDR posture available.

The pitch isn't that Private Claude has invented a new privacy primitive. It's that we've assembled the existing primitives (Anthropic's API posture, no-history application architecture, optional VPC deployment) into a configuration regulated teams can actually buy and run, without spending six months building it themselves.

Frequently asked questions

What does zero data retention (ZDR) actually mean?

ZDR is a contract term where the vendor agrees not to store your inputs or outputs beyond what's needed to deliver the response. It doesn't mean no data exists at any point. Operational logs for abuse detection, billing records, and metadata typically still exist for short windows. ZDR is a contractual posture, not a magical absence of all data.

Why did ZDR become urgent for compliance teams in 2025?

In May 2025, a federal magistrate ordered OpenAI to preserve all ChatGPT logs indefinitely as part of the New York Times copyright lawsuit. The only carve-out was for Enterprise customers and API users with an explicit ZDR clause in their contract. Overnight, ZDR went from a nice-to-have to the difference between deletable and indefinitely preserved.

What's Anthropic's API retention policy?

By default, Anthropic's API keeps operational logs for 7 days for abuse detection, then auto-deletes them. Inputs and outputs aren't retained beyond what's needed to generate the response, and they aren't used for training. Enterprise customers can negotiate further, including stricter ZDR terms in a BAA or MSA.

Does OpenAI offer ZDR?

Yes, but only on Enterprise tier with an explicit ZDR clause. Standard API has 30-day retention by default. Consumer ChatGPT is currently under court order to preserve logs indefinitely. If you're on a non-Enterprise OpenAI plan, you don't have ZDR.

Can I get ZDR through Microsoft Azure OpenAI?

Yes. Azure OpenAI offers ZDR by opting out of abuse-monitoring logs, but it requires Microsoft's approval. You apply for the modified data handling, justify the use case, and Microsoft reviews. Many regulated enterprises route through Azure specifically for this approval-gated ZDR posture.

What retention exceptions do AI vendors typically carve out?

Even under ZDR, vendors usually retain: abuse-detection logs (often 30 days), legal hold preservation, billing and usage records, sub-processor logs (CDN, infrastructure), and short-term performance caches. Content isn't stored, but metadata that identifies usage patterns often is.

What should be in a ZDR clause for a regulated team?

Look for explicit language on input/output non-retention, named operational log lifecycles, sub-processor retention disclosures, geographic data residency, deletion-on-demand SLA, breach notification SLA, audit log of vendor employee access, the abuse-investigation carve-out scope, and proof of the vendor's own ZDR posture (SOC 2, ISO, third-party audit).

Where does Private Claude fit on the ZDR spectrum?

Private Claude inherits Anthropic's 7-day operational log default because we're built on the Anthropic API. The Private Claude application itself has zero chat history, no database storing conversations. For Business customers with stricter requirements, we can deploy in your VPC, sign a BAA, and configure for the hardest ZDR posture available.

Private Claude for regulated teams.

BAA available. Zero data retention. Self-serve or deploy in your VPC. Talk to us about your compliance requirements.

Contact sales