Anthropic DPA Explained

What's actually in Anthropic's Data Processing Addendum, the 7-day default vs the 30-day opt-in retention, and how to obtain it for your business.

If you're rolling out Claude inside a regulated business, your privacy team is going to ask one question first: "Where's the DPA?" The DPA is the contract that turns Anthropic's marketing copy about privacy into something legally enforceable. It's also the document that determines whether your deployment can pass a SOC 2, GDPR, or HIPAA audit.

Here's what's in it, in plain English. No legalese, no marketing.

What a DPA is

DPA stands for either "Data Processing Addendum" or "Data Protection Addendum," depending on the vendor. Same document, slightly different naming convention. It's a contract that hangs off a vendor's main commercial terms and addresses one specific topic: how the vendor handles personal data on behalf of its customer.

Under GDPR and similar privacy regimes (CCPA, UK GDPR, Brazil's LGPD), there's a specific role split:

  • Controller: the business that decides why and how personal data is processed. That's your company.
  • Processor: the vendor that handles that data on the controller's behalf. Here, that's Anthropic.

The DPA codifies that relationship. It defines what the processor can and can't do with the data, how long it keeps it, who it shares it with, what happens when something goes wrong, and how a controller can audit or terminate the arrangement. Every serious vendor that touches personal data has one. It's table stakes for B2B, not a premium feature.

What's in Anthropic's DPA at a high level

Anthropic's DPA covers the standard ground you'd expect from any mature processor. It references the commercial terms as the parent contract and layers data-protection-specific obligations on top.

  • Categories of data: Anything customers submit through the API. Treated as customer data subject to confidentiality.
  • Purposes of processing: Generating model outputs, abuse detection, providing the service. Nothing else.
  • Retention: 7-day default for operational logs. 30-day extension available on request.
  • Training: API inputs and outputs are not used to train models. Contractual, not opt-in.
  • Security: SOC 2 Type II, encryption in transit and at rest, access controls, incident response.
  • Sub-processors: Public list maintained at anthropic.com/legal/sub-processors. AWS is primary.
  • Audit rights: SOC 2 reports made available under NDA. Customer audits available for Enterprise.
  • Cross-border transfers: EU Standard Contractual Clauses (SCCs) and UK addendum incorporated.
  • Deletion: Customer data deleted within a defined window upon termination or request.

If you've signed DPAs with AWS, Snowflake, or any major SaaS vendor, the structure will look familiar. Anthropic's isn't unusual. The interesting parts are the retention and training clauses, which we'll dig into below.

The 7-day default retention

Every API call generates an operational log on Anthropic's side. That log includes the request, the response, and metadata used for billing and abuse detection. By default, those logs auto-delete after 7 days.

This applies to every API customer, automatically. You don't have to opt in. You don't have to ask for it. You don't need a custom DPA. Sign up, hit the API, and you've got 7-day retention by default.

That number isn't arbitrary. Seven days is long enough for Anthropic's trust and safety team to investigate flagged requests and short enough that the data exposure window stays narrow. After day 7, the log is gone. There's no archive, no cold storage, no recovery process.
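As a mental model (our illustration, not Anthropic's actual implementation), the retention window is simple date arithmetic: a log written at time t falls out of existence at t plus 7 days.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)  # default window for API operational logs

def deletion_time(logged_at: datetime) -> datetime:
    """When a given operational log falls out of the retention window."""
    return logged_at + RETENTION

# A request logged on June 1 is gone by June 8, same time of day.
logged = datetime(2025, 6, 1, 9, 30, tzinfo=timezone.utc)
print(deletion_time(logged))  # 2025-06-08 09:30:00+00:00
```

Nothing on the customer side controls this clock; it starts at the moment of the API call and runs regardless of what you do afterward.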

Key terms quick reference
  • 7-day retention. API operational logs auto-delete. Default for every customer.
  • No training on API data. Contractual, not a setting. Same for every customer.
  • 30-day extension. Available on request via DPA amendment for compliance audit windows.
  • Zero retention. Available for qualifying Enterprise customers with vetted use cases.
  • BAA. HIPAA-covered teams, Enterprise tier only, combined with DPA.

The 30-day opt-in extension

Some teams want longer log retention, not shorter. Sounds counterintuitive until you've sat through a security incident review.

If a regulated customer detects suspicious activity (an internal user exfiltrating data, a compromised credential, a prompt injection attempt) they need a window to investigate. If the logs are already gone, the trail is cold. Seven days isn't always enough to detect, escalate, and get forensic eyes on the data.

For teams in that bucket, Anthropic offers a 30-day retention extension. It's documented in a DPA amendment, signed once, and applies going forward. The mechanics are the same: operational logs, abuse detection only, no training, no marketing analytics. Just a longer window to look back.
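To make the trade-off concrete, here's a small sketch (our own illustration, not anything in the DPA) of whether vendor-side logs still exist by the time an incident gets investigated, under the 7-day default versus the 30-day extension.

```python
from datetime import date, timedelta

def evidence_available(event_on: date, investigated_on: date, retention_days: int) -> bool:
    """Whether the vendor-side log for an event still exists when you go looking."""
    return investigated_on - event_on < timedelta(days=retention_days)

event = date(2025, 3, 1)   # suspicious API activity occurs
found = date(2025, 3, 12)  # security team spots it 11 days later

print(evidence_available(event, found, 7))   # False: the trail is already cold
print(evidence_available(event, found, 30))  # True: logs still there to review
```

An 11-day detection lag is unremarkable in practice, which is exactly why security teams at regulated shops ask for the longer window.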

The opposite request also exists. Zero retention is available for qualifying Enterprise customers whose threat model can't tolerate even 7 days of vendor-side logs. That's a different conversation, with stricter eligibility, and it lives outside the standard DPA.

The training clause

This is the clause every privacy-aware buyer wants to see, and Anthropic's version is unambiguous. The commercial terms state that Anthropic will not use API inputs or outputs to train its models.

Two things worth noting:

  • It's a contractual commitment, not a setting. There's no toggle to check and no opt-out form to file; the clause binds Anthropic for every API customer by default.
  • It's specific to the API. Consumer Claude.ai plans have different rules, so don't generalize the API terms to every Anthropic product.

If you're evaluating Claude for any workflow involving customer data, employee data, or trade secrets, this clause is the load-bearing one. It's the difference between "your prompts could end up training a future model" and "your prompts categorically can't."

Sub-processor list

Anthropic doesn't run on bare metal. It runs on cloud infrastructure, like every modern AI provider, and that means there's a chain of vendors behind the scenes who also touch your data. The DPA requires Anthropic to disclose them, and the public list lives at anthropic.com/legal/sub-processors.

The headline name is AWS, which provides the primary infrastructure (compute, storage, networking) that Claude runs on. AWS has its own DPA, its own SOC 2, its own GDPR posture, and Anthropic's contract with them flows down those obligations to your customer data. Other sub-processors handle specific functions: customer support tooling, billing, analytics, log aggregation. The list is the authoritative reference and gets updated when the vendor stack changes.

Under GDPR, you have the right to be notified before a new sub-processor is added so you can object. Anthropic's DPA includes that notification mechanism. For most customers, sub-processor changes are routine. For regulated buyers, the notification window is what lets you re-evaluate the chain before new vendors get access.
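The notification mechanism runs vendor-to-customer, but regulated buyers often also watch the public list themselves. A minimal sketch of that tooling (our own idea, not anything Anthropic provides): fingerprint the page text on each check and flag when it differs from the version your privacy team last reviewed.

```python
import hashlib

def sha256_of(page_text: str) -> str:
    """Fingerprint of the sub-processor page, stored between checks."""
    return hashlib.sha256(page_text.encode("utf-8")).hexdigest()

def list_changed(reviewed_hash: str, fresh_text: str) -> bool:
    """True when the page differs from the last reviewed snapshot."""
    return sha256_of(fresh_text) != reviewed_hash

# Hypothetical snapshot contents, for illustration only.
baseline = sha256_of("AWS\nSupportTool Inc.")
print(list_changed(baseline, "AWS\nSupportTool Inc."))            # False: unchanged
print(list_changed(baseline, "AWS\nSupportTool Inc.\nNewCo BV"))  # True: re-review the chain
```

A cron job that fetches the page, runs this check, and pings the privacy channel on a change is a cheap complement to the contractual notification.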

How to actually get the DPA signed

There are three paths, depending on your tier.

Standard API customers

If you're a developer or small team using the API on the standard commercial terms, the DPA is incorporated by reference into Anthropic's commercial terms. You don't sign a separate document. Creating an API account and accepting the terms binds Anthropic to the DPA. The 7-day retention, the no-training clause, the SCCs, all of it applies automatically.

For most B2B procurement processes, this is sufficient. Your privacy team will want to read the DPA text, confirm it covers their requirements, and file it alongside your other vendor DPAs. Done.

Enterprise customers

On the Enterprise tier, you get a dedicated DPA negotiated through account management. This is where you'd amend the retention window (to 30 days, or to zero), add specific data residency requirements, negotiate audit rights, or layer on industry-specific compliance terms. The base text is similar, but the customizations live here.

Enterprise is also where you get human contacts. A named account team, a security questionnaire response process, and the ability to pull SOC 2 reports under NDA. If you're going through a vendor security review with a procurement department, this tier is what lets you pass it.

HIPAA-covered teams

HIPAA needs a Business Associate Agreement (BAA), which is a separate document from the DPA. Anthropic offers a combined BAA plus DPA on the Enterprise tier. The BAA covers the HIPAA-specific obligations (PHI handling, breach notification timelines, the 60-day rule), and the DPA covers everything else.

You can't get a BAA on the standard API tier. That's a structural limitation. If you're processing PHI and you need it covered, Enterprise is the path.

What changes if you use Private Claude

Private Claude is a chat interface that uses the Anthropic API on your behalf. When you bring your own API key, your usage flows through Anthropic on the API tier, which means Anthropic's DPA already applies to the underlying calls. That part is unchanged.

What's added is a second layer: Private Claude itself processes your data when you use the chat. We have our own DPA at privateclaude.ai/dpa that covers what we do with messages, account metadata, and any logs we generate.

The chain looks like this:

  • Your business: data controller.
  • Private Claude: data processor, under our DPA.
  • Anthropic: sub-processor, under Anthropic's DPA.
  • AWS: infrastructure sub-processor beneath Anthropic.

Practically, this means a regulated customer evaluating Private Claude needs to review two DPAs: ours and Anthropic's. Both are publicly available. Both incorporate SCCs for cross-border transfers. The combined posture (7-day API retention plus our zero-server-side-history architecture) usually ends up tighter than running directly against the API with default settings, because we don't store chat history at all on our side.

If you need a BAA on top of all of this, that's available on our business tier and bridges to Anthropic's Enterprise BAA underneath. Talk to us.

Frequently asked questions

What is a DPA?

A Data Processing Addendum (sometimes called a Data Protection Addendum) is a contract attached to a vendor's main terms that defines how the vendor handles personal data on behalf of its customer. Under GDPR and similar regimes, the customer is the data controller and the vendor is the data processor. The DPA spells out categories of data processed, retention rules, security measures, sub-processors, audit rights, and cross-border transfer mechanisms.

What's Anthropic's default API retention?

Operational logs auto-delete after 7 days for standard API usage. This is the default for every API customer, no opt-in or DPA amendment required. Inputs and outputs are not used for model training under the standard commercial terms.

Does Anthropic train on API customer data?

No. Anthropic's commercial terms state that inputs and outputs from API usage are not used to train models. This is a contractual commitment, not a togglable setting. Consumer Claude.ai plans have different rules, but the API does not train on customer data.

Can I extend the 7-day retention if I need longer logs for compliance?

Yes. Some teams want longer log retention so they can investigate incidents after the fact. Anthropic offers a 30-day extension on request through a DPA amendment. This is common for security-sensitive enterprises that need a longer audit window.

Who are Anthropic's sub-processors?

Anthropic publishes a sub-processor list at anthropic.com/legal/sub-processors. AWS is the primary infrastructure sub-processor. The list is the authoritative reference and gets updated as the vendor stack changes.

How do I actually get the Anthropic DPA signed?

For standard API customers, the DPA terms are incorporated into Anthropic's commercial terms by reference. For Enterprise customers, a dedicated DPA is signed through account management. For HIPAA-covered teams, Anthropic offers a combined BAA plus DPA available on the Enterprise tier.

Is HIPAA covered by the standard DPA?

No. The standard DPA addresses GDPR and CCPA-style requirements but does not include a Business Associate Agreement. To process PHI under HIPAA, you need the BAA, which Anthropic offers on the Enterprise tier.

If I use Private Claude, whose DPA applies?

Both. Private Claude has its own DPA at privateclaude.ai/dpa that covers our processing. Anthropic's DPA still applies to the underlying API, with Anthropic acting as a sub-processor in the chain. Your business is the controller, Private Claude is the processor, and Anthropic is the sub-processor.

Private Claude for regulated teams.

BAA available. Zero data retention. Self-serve or deploy in your VPC. Talk to us about your compliance requirements.

Contact sales