Private Claude for Business
Why Claude is the right model for regulated work, what Claude.ai Team is missing, and how to deploy Private Claude without giving up control.
What Private Claude for Business actually is
Private Claude for Business is a deployment of Claude built for teams that have to answer for where the data went. The short version: it's full Claude (Opus 4.7, Sonnet 4.6, Haiku 4.5) sitting behind a chat interface that doesn't store conversation history at the application layer, with a Business Associate Agreement on the table and SSO, audit logs, and configurable retention switched on.
The longer version is worth pinning down because vendors in this space love to wave the word "compliant" around without saying what's underneath. Here's what's underneath.
- BAA-backed. A Business Associate Agreement covers the Private Claude application layer. For HIPAA-covered teams, that's the contract that lets PHI flow through the system at all.
- BYOK or VPC deployment. Bring your own Anthropic key on the hosted tier, or run the entire Private Claude stack inside your AWS, Azure, or GCP environment if you need to keep the application layer in your tenancy.
- Zero application-layer chat history. By default, the Private Claude app does not persist conversations. When the session ends, the conversation is gone from our systems. Audit logs are separate and configurable.
- Anthropic API operational logs auto-delete in 7 days. The model layer keeps a short abuse-detection window, then it's gone. No training on your inputs or outputs.
- SSO, audit logs, team controls. Google Workspace and Microsoft Entra (formerly Azure AD) are supported out of the box. Every prompt and response is logged per user with timestamps. Roles and seat management are admin-controlled.
That's the architecture. The rest of this post explains why it's shaped that way.
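The "zero application-layer chat history" claim is concrete enough to sketch. The code below is an illustration of the posture, not Private Claude's actual implementation: `call_model` is a hypothetical stand-in for the Anthropic Messages API call (the point where the 7-day operational log and no-training terms apply), and the session object is the only place the conversation lives.

```python
def call_model(messages):
    """Hypothetical stand-in for the Anthropic Messages API call.
    In a real application layer, this is where the request leaves
    for Anthropic's inference servers. Echoes so the sketch runs
    without a network call."""
    return "ack: " + messages[-1]["content"]

class StatelessSession:
    """Conversation state held in process memory only. Nothing is
    written to disk or a database, so when the session object goes
    away, the conversation is gone from the application layer."""

    def __init__(self):
        self.turns = []  # lives only in memory for this session

    def send(self, user_text):
        self.turns.append({"role": "user", "content": user_text})
        reply = call_model(self.turns)  # full history travels in the request
        self.turns.append({"role": "assistant", "content": reply})
        return reply  # returned to the user, never persisted by the app
```

The design choice this illustrates: the model still needs the full conversation in each request (that's how chat works over a stateless API), but the application never has to write it anywhere.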
Why Claude (the model) for regulated work
Let's be honest about this part. Picking Claude over GPT for regulated work is a model preference call. The GPT-4 family is viable for many of the same use cases, and any team telling you "you must use Claude for compliance" is overselling. What we'll say is why we picked Claude for the teams we work with, and you can decide whether the same reasons apply to you.
Refusal behavior. Claude is meaningfully more conservative on edge cases that matter in healthcare and law. Ask both models to draft language that's borderline diagnostic or borderline legal advice, and Claude is more likely to add the disclaimers you actually wanted in the output. That saves a review pass.
Long context. Claude's effective context window holds up well past the marketing number. You can drop a full intake packet, a 60-page contract, or a year of client correspondence into one prompt and get coherent output that references the early pages. For regulated work, this matters because the alternative is chunking, and chunking is where context gets lost and errors creep in.
Instruction-following on stylistic constraints. "Use HIPAA-safe phrasing." "Frame this as attorney work product." "Don't refer to the client by name in the output." Claude tracks these constraints across long generations more reliably than the GPT family in our testing. That's not a benchmark, it's a felt difference in how much rework comes back from a partner or compliance officer.
If you're already standardized on GPT and the workflows are working, don't change models because of a blog post. If you're starting fresh, or you're rebuilding around a compliance posture, the reasons above are why we'd point you at Claude.
What Claude.ai Team is missing for compliance
If you've been looking at Claude.ai Team as a candidate for regulated workflows, the gaps are worth naming directly.
No BAA on Team. Anthropic offers a BAA on the Enterprise tier. Not on Team. That alone disqualifies Claude.ai Team for any HIPAA-covered workflow, regardless of what your settings look like.
Chat history retained indefinitely. Conversations are stored in your account on Anthropic's servers with no automatic expiration. For regulated teams with record-retention policies, "indefinite" is a problem in either direction: too long for some categories of data, and not subject to your destruction schedule for others.
Training on by default with per-user opt-out. The Team tier defaults to using conversations to train future models, with each individual user expected to find the privacy settings page and toggle it off. That's not a posture most compliance officers will sign off on, because it depends on every employee doing the right thing every time.
No per-user audit log. There's an admin dashboard that shows seat-level usage, but there's no exportable record of what was asked and what was answered, indexed by user, with timestamps. That's the audit trail compliance work actually needs.
No SSO on lower tiers. SSO appears on Enterprise. On Team, you're managing seats by email invite. For organizations with central identity management, that's not workable.
None of this means Claude.ai Team is a bad product. It means it's built for general productivity teams, not regulated ones. For a deeper breakdown of the API tier vs Team tier on compliance, see our writeup on Claude API vs Claude.ai Team for compliance.
The three deployment options
"Private Claude for Business" isn't a single product. It's a small family of deployments shaped around how much control you need over the application layer. Here's how they line up.
| Option | Application layer | Model layer | BAA | Best for |
|---|---|---|---|---|
| Hosted Business tier | Run by Private Claude | Anthropic API (managed by us) | Yes, with us | Small to mid-sized regulated teams that want fast setup and don't need data residency |
| Customer VPC | Runs in your AWS, Azure, or GCP | Anthropic, Bedrock, or Vertex (your call) | Yes, scoped to your tenancy | Regulated teams with data residency requirements or existing cloud commitments |
| Direct Anthropic API + your client | Built and run by you | Direct contract with Anthropic | Direct between you and Anthropic | Larger orgs with engineering capacity who want to own the chat layer end to end |
The hosted Business tier is what most teams pick. The VPC option is what regulated teams pick when their security review insists the application layer can't leave their cloud. The direct-API path is what teams pick when they have engineers who want to build something custom on top of Claude and just need the BAA and the model. For a deeper read on the self-hosted question, see self-hosted vs BYOK cloud for regulated teams.
What's actually included in the Business tier
For the hosted Business tier specifically, here's what's in the box.
- Full Claude. Opus 4.7, Sonnet 4.6, Haiku 4.5. No throttling on which model your team can pick.
- Unlimited messages. No per-seat quota. Spend is governed at the Anthropic API layer, which means you have actual visibility into cost rather than guessing how heavily a team is using a flat-rate seat.
- File uploads. PDF, DOCX, TXT, CSV, images. Files don't persist on our servers past the session.
- Prompt library. Shared team prompts so your intake-summary template, your contract-review template, and your client-memo template all live in one place.
- SSO. Google Workspace and Microsoft Entra. SAML and OIDC supported.
- Audit logs. Every prompt-response pair logged per authenticated user, exportable in JSON and CSV.
- Configurable retention. 7 days by default. Optional 30, 90, or longer windows to match your record-retention policy.
- BAA. Signed before any PHI touches the system.
- Dedicated support. Direct line, not a ticket queue. Same-day response for compliance and security questions.
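To make the audit-log bullet above concrete, here's a sketch of what one exportable entry could look like. The field names are illustrative, not the actual export schema, and the record builder is a hypothetical helper, not Private Claude's code.

```python
import json
from datetime import datetime, timezone

def audit_record(user_id, model, prompt, response):
    """Build one audit-log entry. Field names are illustrative;
    the real export schema may differ."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,      # the authenticated SSO identity
        "model": model,
        "prompt": prompt,
        "response": response,
    }

# JSON export would be one record per line; a CSV export flattens
# the same fields into columns.
entry = audit_record("jdoe@clinic.example", "claude-sonnet",
                     "Summarize this intake form", "Summary: ...")
line = json.dumps(entry)
```

The key property, whatever the exact schema: every entry is indexed by the authenticated user and timestamped, which is what makes the export usable as an audit trail rather than a usage dashboard.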
Use cases by industry
The deployment is the same. What changes is the workflow on top of it.
Healthcare clinic
The two workflows we see most often are intake summarization and patient-comms drafting. A clinic dumps a new-patient intake form into Claude and gets a structured summary the clinician can scan in 30 seconds. Or a front-desk lead drafts a patient follow-up message in Claude, then reviews and sends. PHI is in the prompt, and that's exactly why the BAA matters. For a deeper writeup, see HIPAA-compliant AI chat.
Small law firm
Research, drafting, and review. Drop a deposition transcript and ask for a timeline of contradictions. Drop a client's contract and ask for a redline against their stated objectives. Draft a client update letter from a set of bullet points. The work product still gets reviewed by an attorney, but the first pass takes minutes instead of hours.
Registered investment advisor
Client memo drafting and market research. RIAs have to be careful about anything that looks like personalized advice in writing, and Claude's instruction-following on framing constraints helps here. "Frame this as general education, not personalized advice." "Cite the source for every claim about returns." The output stays inside those constraints consistently.
Consulting practice
Deliverable drafting and client work review. A consulting team uses Claude as a faster first draft on client decks, scopes, and analyses, then reviews. The conversations sometimes contain client-confidential material, which is why the application-layer no-history posture matters even outside HIPAA.
The compliance posture in detail
For the security-review reader, here's the posture in one block.
BAA in place before any PHI flows. Zero data retention via Anthropic API (7-day operational logs, no training on inputs or outputs). Zero application history at the chat layer by default. Encryption in transit via TLS 1.3, encryption at rest via AES-256. SSO via Google Workspace and Microsoft Entra (SAML/OIDC). Per-user audit log of every prompt-response pair, exportable JSON or CSV. Configurable retention, 7 days default, with options for 30, 90, or custom windows to match your record-retention policy.
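Configurable retention is, mechanically, a scheduled purge against the retention window. A minimal sketch of the idea, assuming records carry an ISO-8601 timestamp; the schema and function names are illustrative, not the actual implementation.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 7  # default window; configurable to 30, 90, or custom

def purge_expired(records, now=None, days=RETENTION_DAYS):
    """Keep only audit records newer than the retention cutoff.
    Records are dicts with an ISO-8601 'timestamp' field
    (illustrative schema)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [r for r in records
            if datetime.fromisoformat(r["timestamp"]) >= cutoff]
```

A purge like this is what makes a record-destruction schedule enforceable in software rather than policy: logs older than the configured window simply stop existing.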
A few notes on what's not in that summary, because the gaps matter.
We don't claim the model layer doesn't see your prompt. Anthropic runs the model. Your prompt has to reach a Claude inference server to get a response, and that means it traverses Anthropic's infrastructure. The BAA, the no-training contract, and the 7-day auto-delete are what cover that path. If your threat model is "Anthropic itself is the adversary," no managed-Claude product is going to satisfy it. You'd want a self-hosted open-source model, with all of the capability tradeoffs that come with it.
We also won't talk ourselves into a compliance posture we can't back. SOC 2 Type II is in progress. HITRUST is a longer roadmap item that we'll publish on when it ships. If your security review needs a specific certification today, ask us before signing, and we'll tell you straight whether we're there yet.
Pricing structure (transparent)
Pricing is published. The numbers don't change because you asked nicely.
- White Label Business tier: $1,449 a year. Flat. Includes BAA, SSO, audit logs, configurable retention, full Claude with unlimited messages and file uploads, prompt library, and dedicated support. Your Anthropic API spend is separate and billed directly to you, so you have direct visibility into model usage.
- Customer-VPC deployments: custom. Priced per environment based on team size, cloud, and data residency requirements. Get in touch.
- Single-seat Pro: $37 a month. The Business tier is built for teams. If you're a solo professional, Pro is usually the right fit, and it includes the same no-history chat layer and BYOK posture without the team-management overhead.
For comparison, Anthropic's own Claude.ai Enterprise tier (which is what you'd buy if you wanted the BAA directly from Anthropic) is custom-priced and typically lands in five-figure annual commitments. Our Business tier is meaningfully cheaper at the team sizes most regulated SMBs operate at.
How to evaluate vs alternatives
If you're shopping this category seriously, you're looking at three or four real options. Here's how we'd think about the comparison.
vs Claude.ai Enterprise (Anthropic's own offering). Anthropic's Enterprise tier is the official option, and at sufficient scale it's the right call. You get the BAA directly from the model maker, native Claude.ai features, and Anthropic's own roadmap. The tradeoffs are price (custom annual commitments, typically much higher than our $1,449), setup complexity (procurement, security review, contract negotiation), and the application layer (you get Anthropic's chat experience as-is, with whatever history and admin posture they ship). Our Business tier is cheaper at small team sizes and easier to set up, but Anthropic's offering is the official one and worth evaluating if you're a larger organization.
vs Hathr.AI. Hathr offers a similar BAA-backed Claude wrapper for regulated teams. The compliance shape is comparable. The differences are deployment options (we offer VPC), the chat experience itself, and pricing transparency. If you're evaluating both, ask each vendor for a sample audit log export and the actual text of the BAA. That tells you more than the marketing pages will.
vs Azure OpenAI (Microsoft-hosted OpenAI models). If you're already on Azure, running OpenAI models through Azure OpenAI with a BAA is a real option. The compliance posture is solid, and your data stays in your Azure tenancy. The downside is you're on the GPT family, not Claude, and the setup involves more compliance overhead than a hosted Business tier. If your org is Azure-native and your team is comfortable with GPT, this is a credible path. If you specifically want Claude, you're back to one of the three deployment options above.
The right way to evaluate any of these is to run a two-week pilot with two or three of your real workflows. Put real (de-identified, if needed) intake docs, contracts, or client memos through each candidate and see what comes out. Pricing pages don't show you which one fits.
Frequently asked questions
What is Private Claude for Business?
Private Claude for Business is a BAA-backed deployment of Claude built for regulated teams. It runs on the Anthropic API (7-day operational logs, no training), adds zero application-layer chat history, and ships with SSO, audit logs, and configurable retention. You can self-serve the hosted Business tier, deploy Private Claude in your own VPC, or contract directly with Anthropic and bring your own client.
Does Private Claude sign a BAA?
Yes. The Business tier includes a Business Associate Agreement covering the Private Claude application layer. The BAA with Anthropic for the underlying model is held either by Private Claude (on the hosted Business tier) or directly between your organization and Anthropic (on VPC and direct-API deployments). Either way, the chain of BAAs covers the full path of the data.
Why use Claude instead of GPT for regulated work?
It's a model preference call, and GPT-4 family is viable for many of the same use cases. We pick Claude for three reasons specific to compliance work: refusal behavior (Claude is more conservative on edge cases that matter in healthcare and law), longer effective context window (you can drop full intake packets or contracts into one prompt), and stronger instruction-following on stylistic constraints like HIPAA-safe phrasing or attorney-client framing.
What's wrong with Claude.ai Team for compliance teams?
Three things. First, no BAA is available on the Team tier (only on Enterprise). Second, training is on by default with a per-user opt-out, which is not a posture most compliance officers will sign off on. Third, chat history is retained indefinitely on Anthropic servers, with no per-user audit log and no SSO on lower tiers. The Team tier is built for general productivity, not regulated workflows.
Can I deploy Private Claude in my own VPC?
Yes. The Customer-VPC option runs the Private Claude application stack inside your AWS, Azure, or GCP environment. The model still runs at Anthropic (or via Anthropic on Bedrock or Vertex), but the chat layer, audit logs, SSO integration, and any configured retention all stay in your tenancy. This is the right shape for organizations that need data residency or want full control of the application layer.
What does Private Claude for Business cost?
The White Label Business tier is $1,449 per year. That includes the BAA, SSO, audit logs, configurable retention, dedicated support, and full Claude (Opus 4.7, Sonnet 4.6, Haiku 4.5) with unlimited messages and file uploads. Customer-VPC deployments are priced per environment based on team size and data residency requirements. Solo professionals are usually better served by the $37 a month Pro plan.
Is the audit log per-user?
Yes. Every prompt and response is logged with the authenticated user, the timestamp, the model, and the session. Logs are exportable in JSON and CSV, and retention is configurable (7 days default, with options for 30, 90, or longer to match your record-retention policy). This is the audit trail compliance officers actually want, not just a vendor-side metric dashboard.
How does Private Claude for Business compare to Hathr.AI?
Both offer a BAA-backed Claude wrapper for regulated teams, and the compliance posture is similar in shape. The differences come down to deployment options (we offer VPC), pricing model (we publish a flat $1,449 a year for the Business tier), and the chat experience itself. If you're evaluating both, ask each vendor for a sample audit log and the exact text of the BAA. That tells you more than the marketing pages.
Private Claude for regulated teams.
BAA available. Zero data retention. Self-serve or deploy in your VPC. Talk to us about your compliance requirements.
Contact sales