HIPAA-Compliant AI Chat

What HIPAA actually requires of an AI tool, why standard ChatGPT and Claude.ai fail the test, and the 30-minute compliance setup for a small practice.

Most "HIPAA-compliant AI" marketing is wrong. Vendors slap the phrase on a landing page after enabling TLS and call it done. Real compliance is a contract plus a set of technical and administrative safeguards, and the contract has to be signed before a single piece of PHI hits the wire. This piece walks through what HIPAA requires of an AI chat tool, which vendors actually qualify in May 2026, and how a small practice can get to a defensible posture in about thirty minutes.

What HIPAA actually requires of an AI tool

HIPAA is the Health Insurance Portability and Accountability Act of 1996. The pieces that matter for AI sit inside three rule families published by HHS:

- The Privacy Rule, which governs how PHI may be used and disclosed.
- The Security Rule, which sets the technical and administrative safeguards for electronic PHI.
- The Breach Notification Rule, which dictates what happens when PHI is exposed.

Two roles to keep straight. A covered entity is a healthcare provider, health plan, or healthcare clearinghouse. A business associate is any vendor that creates, receives, maintains, or transmits PHI on behalf of a covered entity. An AI chat vendor that processes PHI is a business associate, full stop.

The technical bar from the Security Rule (45 CFR 164.312) breaks down to:

- Access control: unique user IDs, automatic logoff, and encryption/decryption of electronic PHI.
- Audit controls: logs recording who accessed what, and when.
- Integrity controls: mechanisms that detect improper alteration or destruction of PHI.
- Person or entity authentication: verifying users are who they claim to be.
- Transmission security: encryption of PHI in transit.

Miss any one of these and the tool is non-compliant. There's no partial credit.

The minimum bar

Signed BAA, encryption in transit (TLS 1.2+), encryption at rest (AES-256), audit logs of every PHI access, unique user IDs, role-based access, integrity controls, and a documented breach notification process. All of them. Together. Before any PHI touches the system.
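The all-or-nothing logic is worth making concrete. A minimal sketch, treating the posture as a checklist where any single missing safeguard fails the whole evaluation (the safeguard names and values here are illustrative, not an official schema):

```python
# Illustrative only: HIPAA compliance for an AI tool is all-or-nothing.
# Each entry maps to a Security Rule safeguard or to the BAA itself.
REQUIRED_SAFEGUARDS = [
    "signed_baa",
    "tls_in_transit",               # TLS 1.2+
    "aes256_at_rest",
    "audit_logs",
    "unique_user_ids",
    "role_based_access",
    "integrity_controls",
    "breach_notification_process",
]

def is_defensible(posture: dict) -> bool:
    """True only if every required safeguard is in place. No partial credit."""
    return all(posture.get(s, False) for s in REQUIRED_SAFEGUARDS)

# A tool with everything except a BAA still fails:
posture = {s: True for s in REQUIRED_SAFEGUARDS}
posture["signed_baa"] = False
print(is_defensible(posture))  # False
```

The `all()` is the point: there is no weighting and no threshold, which is exactly how an OCR auditor reads the Security Rule.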

Why ChatGPT and Claude.ai fail by default

Consumer ChatGPT, ChatGPT Plus, ChatGPT Team, Claude Free, Claude Pro, and Claude Team are all the same story for HIPAA purposes. Three problems, any one of which is disqualifying.

No BAA is offered on these tiers. OpenAI and Anthropic both publish this clearly. Without the contract, the vendor isn't a business associate, which means a covered entity sending PHI to the tool is sending it to a third party with no HIPAA obligations. That's a violation on day one.

For a deeper breakdown of how ChatGPT specifically fails, see Is ChatGPT HIPAA Compliant?. The short version: consumer tiers train on conversations by default, store chats indefinitely, and offer no BAA. ChatGPT Enterprise and the OpenAI API can be made compliant because OpenAI offers a BAA there, but that's a different product.

Conversations are stored on vendor servers under terms that allow training. Even if a clinician opts out of training, the data still sits in account-level chat history, accessible to anyone with the password. One SIM swap, phishing email, or shared device, and the entire conversation log is exposed.

Anything PHI typed into a non-BAA tool is technically a breach. Under the Breach Notification Rule, an impermissible disclosure of PHI to a non-business-associate vendor is a reportable breach unless the covered entity can demonstrate a low probability of compromise. Pasting a patient's name and diagnosis into ChatGPT Plus is the textbook example of what not to do.

The BAA: what it is, who signs it

A Business Associate Agreement is the contract that turns a vendor into a HIPAA-bound business associate. Without one, even an "encrypted" tool with great security can't legally hold PHI for a covered entity. The BAA does several things:

- Defines and limits how the vendor may use and disclose PHI.
- Obligates the vendor to implement the Security Rule's safeguards.
- Requires the vendor to report breaches and security incidents to the covered entity.
- Flows the same obligations down to any subcontractors that touch PHI.
- Requires return or destruction of PHI when the contract ends.

For a closer look at how BAAs work in the AI context, including what the contract should and shouldn't include, see BAA-Backed AI Chat.

The signature has to happen before any PHI is sent. A retroactive BAA doesn't fix prior disclosures. Most vendors make their BAA available on request from sales, sometimes with a short legal review. PrivateClaude Business sends a standard BAA the same day, and most enterprise AI vendors will turn one around in a week or two.

The current vendor list (May 2026)

Here's the accurate landscape of who actually offers a BAA for an AI chat product, as far as can be verified from public documentation as of May 2026. Tier matters. The same vendor often sells a HIPAA-eligible product and a non-compliant consumer product side by side.

| Vendor / Product | BAA? | Notes |
| --- | --- | --- |
| OpenAI ChatGPT Enterprise | Yes | BAA on Enterprise tier. Consumer ChatGPT, Plus, and Team are not eligible. |
| OpenAI API | Yes | BAA available with Zero Data Retention configuration. |
| Anthropic Enterprise | Yes | BAA on Enterprise tier of Claude. Free, Pro, and Team are not eligible. |
| Anthropic API | Yes | BAA available. 7-day operational log retention. No training on inputs/outputs. |
| Microsoft Azure OpenAI | Yes | BAA covered under standard Microsoft enterprise terms. Often the path of least resistance for organizations already on Microsoft. |
| Google Cloud Vertex AI | Yes | BAA available. Wide model selection including Gemini and partner models. |
| AWS Bedrock | Yes | HIPAA-eligible. BAA under standard AWS terms. Multi-model. |
| PrivateClaude Business | Yes | BAA at the application layer. Runs on Anthropic API. 7-day log retention inherited. No chat history stored at the application layer. |
| Hathr.AI | Yes | Markets as BAA-ready Claude wrapper. Verify current BAA terms with sales. |
| BastionGPT | Yes | BAA-backed AI chat targeted at healthcare. Verify scope and sub-processor list. |
| CompliantChatGPT | Yes | BAA-backed wrapper. Verify what model sits underneath and whether the underlying provider's BAA flows through. |

A few honest caveats. "BAA available" doesn't mean "automatically BAA-covered." Most vendors require a request and signed agreement before PHI can flow. Pricing on HIPAA-eligible tiers is usually higher than the consumer equivalent. And smaller wrappers (the last three rows) deserve extra scrutiny: ask for their sub-processor list, confirm the underlying model provider's BAA flows through, and read the breach notification SLA.

The technical safeguards beyond the BAA

The BAA is the legal foundation. The Security Rule lists what the technical implementation has to look like. For an AI chat tool, the practical checklist:

- TLS 1.2 or higher on every connection carrying PHI.
- AES-256 encryption for data at rest.
- Audit logs recording every PHI access: who, what, when.
- Unique user IDs, MFA, and automatic logoff.
- Role-based access controls so staff see only what their role requires.
- Integrity controls that detect tampering, plus data backup and proper disposal.

Most enterprise AI vendors handle the infrastructure side. The covered entity still has to configure access roles, train staff, and maintain the audit trail. Compliance is a shared responsibility.
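One of the few items on that checklist a practice can verify from its own side is the transport encryption floor. A minimal Python sketch of enforcing TLS 1.2+ on outbound calls (the vendor URL in the comment is a placeholder, not a real endpoint):

```python
import ssl
import urllib.request

# Sketch: refuse any connection below TLS 1.2 when calling an AI vendor API.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # older protocols are rejected

# Plugging the context into a request (endpoint is illustrative):
# opener = urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))
# opener.open("https://api.example-ai-vendor.com/v1/chat")

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

This doesn't make a tool compliant on its own, but it catches a misconfigured proxy or legacy integration before PHI rides over a downgraded connection.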

The retention question

HIPAA doesn't mandate a deletion timeline. It requires that PHI be protected for as long as it's held and disposed of properly when it's no longer needed. The practical principle: the more retention, the more breach surface. Less retention is almost always safer.

How the major AI providers stack up on retention:

- Consumer ChatGPT and Claude tiers: chats stored indefinitely in account history; consumer ChatGPT trains on conversations by default.
- OpenAI API: Zero Data Retention configuration available under the BAA.
- Anthropic API: 7-day operational log retention, no training on inputs or outputs.
- PrivateClaude Business: no chat history stored at the application layer; inherits the Anthropic API's 7-day log retention.

For a small practice, less retention means less data sitting around to be breached, subpoenaed, or accidentally exposed by a sub-processor incident. The 7-day API logs at Anthropic are about as low as the major commercial AI providers go without going self-hosted.

The "we encrypt" red flag

If a vendor's HIPAA pitch is "we use encryption," walk away. Encryption is one safeguard out of many. It's necessary but nowhere near sufficient.

A tool that encrypts data in transit and at rest but offers no BAA is still non-compliant. A tool with a BAA but no audit logs is non-compliant. A tool with a BAA and audit logs but shared accounts and no role-based access is non-compliant. Every safeguard is required. Marketing copy that fixates on encryption alone is a tell that the vendor either doesn't understand HIPAA or hopes the buyer doesn't.

Red flag checklist

"HIPAA-friendly" without naming a BAA. "Bank-grade encryption" as the headline claim. No published sub-processor list. No breach notification SLA. No mention of audit logs or access controls. Any of these on a vendor's compliance page is reason to ask harder questions before signing anything.

The same logic applies to "private" or "secure" claims. Private isn't HIPAA-compliant. Secure isn't HIPAA-compliant. The contract and the safeguards together are what qualifies. Tools like consumer Signal, ProtonMail, or DuckDuckGo's AI chat are private in a meaningful sense, but none of them sign BAAs, so none of them can hold PHI for a covered entity.

A 30-minute compliance setup for a small practice

For a solo therapist, a small dental office, a chiropractic clinic, or any practice with a handful of staff, getting to a defensible HIPAA posture for AI chat is concrete and fast. The path:

  1. Pick a BAA-backed vendor (5 min). From the list above, pick the one that fits the workflow. Microsoft and Google are easy if the practice is already on those platforms. PrivateClaude Business is a good fit if the team specifically wants Claude with low retention. Hathr.AI, BastionGPT, and CompliantChatGPT are healthcare-focused wrappers worth considering.
  2. Request and sign the BAA before sending any PHI (5 min to request, 1 to 14 days to receive countersigned). Don't pilot with real PHI. Use synthetic or de-identified data until the BAA is signed by both parties.
  3. Train staff (15 min). Document what the AI tool can be used for, what it can't, and what to do if PHI is sent to a non-BAA tool by accident. The Office for Civil Rights expects workforce training; for a small practice, a one-page policy and a 15-minute team huddle qualify.
  4. Enable audit logs. Most enterprise AI tools have audit logs available, sometimes off by default. Turn them on. Decide who reviews them and how often.
  5. Document the workflow. Two paragraphs in the practice's HIPAA policies about what AI tools are approved, what data goes in, who has access, and how it's monitored.
  6. Name a privacy officer. HIPAA requires every covered entity to designate one. For a solo practitioner, that's the practitioner. For a small practice, it's typically the office manager or the practice owner.
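Step 2's rule, pilot only with synthetic or de-identified data until the BAA is countersigned, can be as simple as generating obviously fake records. A minimal sketch (all names, fields, and values are fabricated for illustration):

```python
import random
import string

# Sketch: generate clearly-synthetic patient records for piloting an AI tool
# before the BAA is countersigned. Every value below is fabricated.
FIRST = ["Testa", "Demo", "Sample", "Pilot"]
LAST = ["Patient", "Record", "Case", "Chart"]
COMPLAINTS = ["lower back pain", "seasonal allergies",
              "routine cleaning", "follow-up visit"]

def synthetic_record(rng: random.Random) -> dict:
    """One fake intake record. The MRN is random, derived from nothing real."""
    mrn = "SYN-" + "".join(rng.choices(string.digits, k=6))
    return {
        "name": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
        "mrn": mrn,
        "chief_complaint": rng.choice(COMPLAINTS),
    }

rng = random.Random(42)  # seeded so pilot runs are reproducible
records = [synthetic_record(rng) for _ in range(3)]
for r in records:
    print(r["mrn"], r["name"], "-", r["chief_complaint"])
```

The `SYN-` prefix is deliberate: anyone reviewing chat logs later can tell at a glance that the pilot never touched real PHI.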

Therapists and counselors have an extra layer of scrutiny because of the sensitivity of mental health notes. HIPAA AI for Therapists & Counselors covers the specifics, including how psychotherapy notes are treated differently under the Privacy Rule.

What can and can't go in a HIPAA-compliant AI chat

Once a BAA is in place and the technical safeguards are working, the chat tool becomes a regular workflow tool. The categories below are the practical sort.

Yes, with a signed BAA

- Drafting clinical notes from rough notes.
- Summarizing intake forms and records the practice already holds.
- Internal documentation, policy drafting, and operational analysis.

With caution

- Patient communications: draft in the tool, but a clinician reviews before anything is sent.
- Billing code suggestions: useful as a first pass, verified by a human before submission.

Never

- Final clinical decisions or diagnoses. The tool assists with paperwork; the licensed provider decides.
- PHI into any tool without a signed BAA, even once, even with the names stripped out.

Vendor evaluation checklist

Twelve points to walk through with any AI chat vendor before signing a BAA. If a vendor can't give written answers to more than a couple of these, that's a sign to keep looking.

| # | Question | What to look for |
| --- | --- | --- |
| 1 | Do you offer a BAA? | Yes, and they can send the standard text on request before pricing discussion. |
| 2 | Do you provide audit logs of PHI access? | Yes, with user-level granularity, exportable, retained per BAA terms. |
| 3 | What encryption is used in transit and at rest? | TLS 1.2+ in transit (1.3 preferred), AES-256 at rest, with key management documented. |
| 4 | What access controls do you support? | SSO/SAML, role-based access, MFA enforcement, automatic logoff, unique user IDs. |
| 5 | What's the retention policy? | Specific number of days, written in the BAA or DPA. Shorter is better. ZDR/no-retention options preferred. |
| 6 | Do you train models on customer data? | No, contractually, with that clause in the BAA or master agreement. |
| 7 | What's the breach notification SLA? | Stated in hours or days. 24 to 72 hours is ideal; 60 days is the legal maximum. |
| 8 | Where is data stored (data residency)? | US regions only for US covered entities, or contractually documented if multi-region. |
| 9 | Who are your sub-processors? | Published list, updated when changes occur, BAAs flowed down to each. |
| 10 | What's the support tier for compliance issues? | Named contact, dedicated channel, response SLA in writing. |
| 11 | What's the pricing structure? | Clear seat or usage pricing, no surprise overage on PHI volume. |
| 12 | How is data deletion handled on request? | Defined process, timeline, and certificate of destruction available. |

Print this, walk through it with the vendor's compliance team, and keep the responses on file. That document, plus the signed BAA, plus the staff training record, plus the audit log review schedule, is the core of a defensible HIPAA posture for AI chat. It's not glamorous. It does mean that when the next OCR audit cycle comes through, the practice has receipts.

Frequently asked questions

What makes an AI chat tool HIPAA-compliant?

Three things together: a signed Business Associate Agreement with the vendor, technical safeguards (encryption in transit and at rest, access controls, audit logs, integrity controls), and administrative safeguards on your side (workforce training, a named privacy officer, documented policies). Encryption alone isn't compliance. A BAA alone isn't compliance. You need the full set.

Is ChatGPT HIPAA-compliant?

Consumer ChatGPT and ChatGPT Plus are not. There's no BAA available on those plans, conversations are stored under terms that allow training, and the vendor isn't bound as a business associate. ChatGPT Enterprise and the OpenAI API can be made compliant because OpenAI offers a BAA on those tiers, but a BAA has to be signed before any PHI is sent. Full breakdown here.

Is Claude.ai HIPAA-compliant?

Claude Free, Pro, and Team are not HIPAA-compliant. Anthropic offers a BAA on Claude Enterprise and on the Anthropic API. Anthropic API has 7-day operational log retention and never trains on inputs or outputs. PrivateClaude Business runs on the Anthropic API and offers a BAA at the application layer, with no chat history stored.

What is a BAA and who needs to sign one?

A Business Associate Agreement is the contract that legally binds a vendor to HIPAA's rules when that vendor creates, receives, maintains, or transmits PHI on behalf of a covered entity. Healthcare providers, health plans, and healthcare clearinghouses (covered entities) must have a BAA on file with any vendor that touches PHI. Without a BAA, sending PHI to that vendor is a violation.

What's the minimum technical bar for HIPAA-compliant AI?

TLS 1.2 or higher for data in transit, AES-256 for data at rest, audit logs that record every PHI access (who, what, when), unique user IDs and automatic logoff, role-based access controls, data backup, and integrity controls that detect tampering. The Security Rule lists these as required and addressable safeguards under 45 CFR 164.312.

Can I use a free AI tool with PHI if I just don't put names in?

Removing names doesn't necessarily de-identify data. HIPAA's Safe Harbor method requires removal of 18 specific identifiers, including names, dates more granular than year, geographic identifiers smaller than state, account numbers, biometric identifiers, and more. Even then, if combined data could identify a patient, it's still PHI. The safer path is to use a BAA-backed tool, not to try to anonymize on the fly.
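A quick sketch makes the point. Stripping the name from a clinical sentence still leaves dates, record numbers, and sub-state geography behind, all of which are among the 18 Safe Harbor identifiers (the note text below is fabricated for illustration):

```python
import re

# Sketch: a name-only redaction pass, to show why it is NOT de-identification.
# Safe Harbor requires removing 18 identifier categories, not just names.
note = ("Jane Doe, DOB 03/14/1987, MRN 482913, seen at the Springfield "
        "clinic for follow-up.")

redacted = note.replace("Jane Doe", "[NAME]")

# A date more granular than year, a medical record number, and a city name
# all survive the pass, so the redacted text is still PHI.
leftover_date = re.search(r"\d{2}/\d{2}/\d{4}", redacted)
leftover_mrn = re.search(r"MRN \d+", redacted)
print(bool(leftover_date), bool(leftover_mrn))  # True True
```

Building a scrubber that reliably catches all 18 categories is a research problem, not an afternoon script, which is why the BAA-backed tool is the safer path.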

How fast does a HIPAA-compliant vendor have to notify me of a breach?

Under the Breach Notification Rule, a business associate must notify the covered entity without unreasonable delay and no later than 60 days after discovery of a breach. The covered entity then has 60 days from discovery to notify affected individuals, HHS, and (for breaches affecting 500+) the media. Many BAAs negotiate shorter SLAs, often 24 to 72 hours.

What can a small practice realistically use AI chat for under HIPAA?

Drafting clinical notes from rough notes, summarizing intake forms, drafting patient communications (after clinician review), suggesting billing codes for review, internal documentation, policy drafting, and operational analysis. The chat tool isn't a clinician. It assists with paperwork. Final clinical decisions stay with the licensed provider, and the AI's output should be reviewed before it touches a patient.

Private Claude for regulated teams.

BAA available. Zero data retention. Self-serve or deploy in your VPC. Talk to us about your compliance requirements.

Contact sales