HIPAA-Compliant AI Chat
What HIPAA actually requires of an AI tool, why standard ChatGPT and Claude.ai fail the test, and the 30-minute compliance setup for a small practice.
Most "HIPAA-compliant AI" marketing is wrong. Vendors slap the phrase on a landing page after enabling TLS and call it done. Real compliance is a contract plus a set of technical and administrative safeguards, and the contract has to be signed before a single piece of PHI hits the wire. This piece walks through what HIPAA requires of an AI chat tool, which vendors actually qualify in May 2026, and how a small practice can get to a defensible posture in about thirty minutes.
What HIPAA actually requires of an AI tool
HIPAA is the Health Insurance Portability and Accountability Act of 1996. The pieces that matter for AI sit inside three rule families published by HHS:
- Privacy Rule. Governs the uses and disclosures of protected health information (PHI). Sets minimum necessary standards and patient rights of access.
- Security Rule. Sets administrative, physical, and technical safeguards for electronic PHI (ePHI). This is where the technical bar lives.
- Breach Notification Rule. Requires notification within 60 days of discovery of a breach involving unsecured PHI, with covered entities notifying patients, HHS, and (for breaches of 500+ records) the media.
Two roles to keep straight. A covered entity is a healthcare provider, health plan, or healthcare clearinghouse. A business associate is any vendor that creates, receives, maintains, or transmits PHI on behalf of a covered entity. An AI chat vendor that processes PHI is a business associate, full stop.
The technical bar from the Security Rule (45 CFR 164.312) breaks down to:
- Access controls (unique user IDs, automatic logoff, emergency access procedures, role-based access)
- Audit logs that record every access to ePHI (who accessed what, when)
- Integrity controls that detect improper alteration or destruction of ePHI
- Transmission security: encryption in transit (TLS 1.2+)
- Encryption at rest (AES-256 is the standard)
- A signed Business Associate Agreement with every vendor that touches PHI
Miss any one of these and the tool is non-compliant. There's no partial credit: signed BAA, encryption in transit (TLS 1.2+), encryption at rest (AES-256), audit logs of every PHI access, unique user IDs, role-based access, integrity controls, and a documented breach notification process. All of them. Together. Before any PHI touches the system.
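To make the audit-log item concrete, here's a minimal sketch of the kind of record the Security Rule expects: who, what, when, against which resource. The field names and helper are illustrative, not a prescribed schema.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit trail. In production this would go to tamper-evident
# storage (e.g. a write-once log service), not a local file.
audit = logging.getLogger("phi_audit")
audit.addHandler(logging.FileHandler("phi_audit.log"))
audit.setLevel(logging.INFO)

def log_phi_access(user_id: str, action: str, resource: str) -> None:
    """Record one PHI access event: who, what, when, which record."""
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),  # when
        "user_id": user_id,    # who -- unique ID, never a shared account
        "action": action,      # what -- view / edit / export / delete
        "resource": resource,  # which record was touched
    }))

log_phi_access("dr.rivera", "view", "chart:448812")
```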
Why ChatGPT and Claude.ai fail by default
Consumer ChatGPT, ChatGPT Plus, ChatGPT Team, Claude Free, Claude Pro, and Claude Team are all the same story for HIPAA purposes. Three problems, any one of which is disqualifying.
No BAA is offered on these tiers. OpenAI and Anthropic both publish this clearly. Without the contract, the vendor isn't a business associate, which means a covered entity sending PHI to the tool is sending it to a third party with no HIPAA obligations. That's a violation on day one.
For a deeper breakdown of how ChatGPT specifically fails, see Is ChatGPT HIPAA Compliant?. The short version: consumer tiers train on conversations by default, store chats indefinitely, and offer no BAA. ChatGPT Enterprise and the OpenAI API can be made compliant because OpenAI offers a BAA there, but that's a different product.
Conversations are stored on vendor servers under terms that allow training. Even if a clinician opts out of training, the data still sits in account-level chat history, accessible to anyone with the password. One SIM swap, phishing email, or shared device, and the entire conversation log is exposed.
Any PHI typed into a non-BAA tool is technically a breach. Under the Breach Notification Rule, an impermissible disclosure of PHI to a non-business-associate vendor is a reportable breach unless the covered entity can demonstrate a low probability of compromise. Pasting a patient's name and diagnosis into ChatGPT Plus is the textbook example of what not to do.
The BAA: what it is, who signs it
A Business Associate Agreement is the contract that turns a vendor into a HIPAA-bound business associate. Without one, even an "encrypted" tool with great security can't legally hold PHI for a covered entity. The BAA does several things:
- Binds the vendor to the same Privacy and Security Rule obligations that apply to the covered entity
- Specifies permitted uses and disclosures of PHI
- Requires the vendor to implement appropriate safeguards
- Requires breach notification to the covered entity, with a stated timeline
- Requires the vendor to flow down BAA terms to its own subcontractors (sub-business associates)
- Specifies what happens to PHI when the contract ends (return or destruction)
For a closer look at how BAAs work in the AI context, including what the contract should and shouldn't include, see BAA-Backed AI Chat.
The signature has to happen before any PHI is sent. A retroactive BAA doesn't fix prior disclosures. Most vendors make their BAA available on request from sales, sometimes with a short legal review. PrivateClaude Business sends a standard BAA the same day, and most enterprise AI vendors will turn one around in a week or two.
The current vendor list (May 2026)
Here's the accurate landscape of who actually offers a BAA for an AI chat product, as far as can be verified from public documentation as of May 2026. Tier matters. The same vendor often sells a HIPAA-eligible product and a non-compliant consumer product side by side.
| Vendor / Product | BAA? | Notes |
|---|---|---|
| OpenAI ChatGPT Enterprise | Yes | BAA on Enterprise tier. Consumer ChatGPT, Plus, and Team are not eligible. |
| OpenAI API | Yes | BAA available with Zero Data Retention configuration. |
| Anthropic Enterprise | Yes | BAA on Enterprise tier of Claude. Free, Pro, and Team are not eligible. |
| Anthropic API | Yes | BAA available. 7-day operational log retention. No training on inputs/outputs. |
| Microsoft Azure OpenAI | Yes | BAA covered under standard Microsoft enterprise terms. Often the path of least resistance for organizations already on Microsoft. |
| Google Cloud Vertex AI | Yes | BAA available. Wide model selection including Gemini and partner models. |
| AWS Bedrock | Yes | HIPAA-eligible. BAA under standard AWS terms. Multi-model. |
| PrivateClaude Business | Yes | BAA at the application layer. Runs on Anthropic API. 7-day log retention inherited. No chat history stored at the application layer. |
| Hathr.AI | Yes | Markets itself as a BAA-ready Claude wrapper. Verify current BAA terms with sales. |
| BastionGPT | Yes | BAA-backed AI chat targeted at healthcare. Verify scope and sub-processor list. |
| CompliantChatGPT | Yes | BAA-backed wrapper. Verify what model sits underneath and whether the underlying provider's BAA flows through. |
A few honest caveats. "BAA available" doesn't mean "automatically BAA-covered." Most vendors require a request and signed agreement before PHI can flow. Pricing on HIPAA-eligible tiers is usually higher than the consumer equivalent. And smaller wrappers (the last three rows) deserve extra scrutiny: ask for their sub-processor list, confirm the underlying model provider's BAA flows through, and read the breach notification SLA.
The technical safeguards beyond the BAA
The BAA is the legal foundation. The Security Rule lists what the technical implementation has to look like. For an AI chat tool, the practical checklist:
- Encryption in transit. TLS 1.2 or higher on every API call and every browser request. TLS 1.3 preferred.
- Encryption at rest. AES-256 on every storage layer (databases, log files, backups, object storage). HHS treats AES-256-encrypted data as a "safe harbor" that can avoid breach notification if keys aren't compromised.
- Audit logs. Every access to PHI logged with user ID, timestamp, action, and resource. Logs themselves protected from tampering. Retained per the BAA.
- Access controls. Unique user IDs (no shared accounts), automatic logoff after a period of inactivity, emergency access procedures for break-glass scenarios, and role-based access so a billing clerk doesn't see clinical notes by default.
- Integrity controls. Cryptographic checksums or equivalent that detect when stored ePHI has been altered or destroyed in an unauthorized way.
- Data backup. Recoverable copies of ePHI, encrypted, with a documented restoration procedure.
- Authentication. Multi-factor authentication for any account that can access PHI.
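To make the encryption-at-rest and integrity items concrete, here's a minimal sketch using the Python cryptography package. It assumes the key comes from a managed KMS in production; AES-GCM already detects tampering on decrypt, and the separate checksum stands in for the integrity control on stored records.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # production: fetch from a managed KMS
nonce = os.urandom(12)                     # must be unique per encryption
note = b"Progress note for chart 448812 ..."

# Encryption at rest (AES-256-GCM) plus an integrity checksum.
ciphertext = AESGCM(key).encrypt(nonce, note, None)
checksum = hashlib.sha256(note).hexdigest()

# On read: decrypt (GCM raises if the ciphertext was altered),
# then verify the stored checksum before trusting the record.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert hashlib.sha256(plaintext).hexdigest() == checksum
```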
Most enterprise AI vendors handle the infrastructure side. The covered entity still has to configure access roles, train staff, and maintain the audit trail. Compliance is a shared responsibility.
The retention question
HIPAA doesn't mandate a deletion timeline. It requires that PHI be protected for as long as it's held and disposed of properly when it's no longer needed. The practical principle: the more retention, the more breach surface. Less retention is almost always safer.
How the major AI providers stack up on retention:
- Anthropic API. 7-day operational log auto-delete by default. No training on inputs or outputs. No saved chat history.
- OpenAI Enterprise / API. 30-day default abuse monitoring retention, with a Zero Data Retention (ZDR) option available on request for eligible customers. ZDR drops the retention to zero days at the API layer.
- PrivateClaude Business. 7-day rolling logs inherited from the Anthropic API. No chat history at the PrivateClaude application layer. Configurable audit logs for the customer's own compliance records.
- Azure OpenAI Service. 30-day abuse monitoring retention, with the option to apply for an abuse-monitoring opt-out that removes that storage.
For a small practice, less retention means less data sitting around to be breached, subpoenaed, or accidentally exposed by a sub-processor incident. Short of a ZDR arrangement or self-hosting, Anthropic's 7-day API logs are about as low as the major commercial AI providers go.
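If the practice exports its own transcripts or operational logs, a simple sweep job keeps the local copy on the same short clock. A minimal sketch: the directory and the 7-day window are illustrative, and the audit trail the BAA requires you to retain should never be included in a sweep like this.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

RETENTION_DAYS = 7  # illustrative: mirror the shortest vendor window you rely on
LOG_DIR = Path("/var/log/ai-chat")  # hypothetical export location

cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

for log_file in LOG_DIR.glob("*.log"):
    modified = datetime.fromtimestamp(log_file.stat().st_mtime, tz=timezone.utc)
    if modified < cutoff:
        log_file.unlink()  # smaller retained window, smaller breach surface
        print(f"deleted {log_file}")
```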
The "we encrypt" red flag
If a vendor's HIPAA pitch is "we use encryption," walk away. Encryption is one safeguard out of many. It's necessary but nowhere near sufficient.
A tool that encrypts data in transit and at rest but offers no BAA is still non-compliant. A tool with a BAA but no audit logs is non-compliant. A tool with a BAA and audit logs but shared accounts and no role-based access is non-compliant. Every safeguard is required. Marketing copy that fixates on encryption alone is a tell that the vendor either doesn't understand HIPAA or hopes the buyer doesn't.
"HIPAA-friendly" without naming a BAA. "Bank-grade encryption" as the headline claim. No published sub-processor list. No breach notification SLA. No mention of audit logs or access controls. Any of these on a vendor's compliance page is reason to ask harder questions before signing anything.
The same logic applies to "private" or "secure" claims. Private isn't HIPAA-compliant. Secure isn't HIPAA-compliant. The contract and the safeguards together are what qualifies. Tools like consumer Signal, ProtonMail, or DuckDuckGo's AI chat are private in a meaningful sense, but none of them sign BAAs, so none of them can hold PHI for a covered entity.
A 30-minute compliance setup for a small practice
For a solo therapist, a small dental office, a chiropractic clinic, or any practice with a handful of staff, getting to a defensible HIPAA posture for AI chat is concrete and fast. The path:
- Pick a BAA-backed vendor (5 min). From the list above, pick the one that fits the workflow. Microsoft and Google are easy if the practice is already on those platforms. PrivateClaude Business is a good fit if the team specifically wants Claude with low retention. Hathr.AI, BastionGPT, and CompliantChatGPT are healthcare-focused wrappers worth considering.
- Request and sign the BAA before sending any PHI (5 min to request, 1 to 14 days to receive countersigned). Don't pilot with real PHI. Use synthetic or de-identified data until the BAA is signed by both parties.
- Train staff (15 min). Document what the AI tool can be used for, what it can't, and what to do if PHI is sent to a non-BAA tool by accident. The Office for Civil Rights expects workforce training; for a small practice, a one-page policy and a 15-minute team huddle qualify.
- Enable audit logs. Most enterprise AI tools have audit logs available, sometimes off by default. Turn them on, and decide who reviews them and how often; a minimal review sketch follows this list.
- Document the workflow. Two paragraphs in the practice's HIPAA policies about what AI tools are approved, what data goes in, who has access, and how it's monitored.
- Name a privacy officer. HIPAA requires every covered entity to designate one. For a solo practitioner, that's the practitioner. For a small practice, it's typically the office manager or the practice owner.
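On step 4, the review itself can be lightweight. Here's a minimal sketch that scans an exported audit log for after-hours access and bulk exports; the CSV columns are hypothetical, so adapt them to whatever format the vendor actually exports.

```python
import csv
from datetime import datetime

# Assumed export format: ts (ISO 8601), user_id, action, resource.
with open("audit_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["ts"])
        after_hours = ts.hour < 7 or ts.hour >= 19  # outside clinic hours
        if after_hours or row["action"] == "export":
            print(f"REVIEW: {row['user_id']} {row['action']} "
                  f"{row['resource']} at {ts.isoformat()}")
```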
Therapists and counselors have an extra layer of scrutiny because of the sensitivity of mental health notes. HIPAA AI for Therapists & Counselors covers the specifics, including how psychotherapy notes are treated differently under the Privacy Rule.
What can and can't go in a HIPAA-compliant AI chat
Once a BAA is in place and the technical safeguards are working, the chat tool becomes a regular workflow tool. The categories below sort common tasks into three buckets: fine with a BAA, proceed with caution, and never.
Yes, with a signed BAA
- Drafting clinical notes from rough notes or dictation
- Summarizing patient intake forms
- Drafting patient communications (with clinician review before sending)
- Suggesting billing and procedure codes for review
- Internal documentation: policies, training materials, meeting notes that reference patients
- Operational analysis: cohort patterns, scheduling efficiency, claim denial trends
- Translation drafts of patient-facing material (with bilingual review)
With caution
- Anything that combines PHI with clinical decision-making. The AI is not a clinician. Output should be reviewed by the licensed provider before it influences treatment.
- Differential diagnosis aids. Useful for brainstorming, not for autonomous decisions.
- Patient-facing chatbots. The bot itself becomes a covered system that needs the same controls.
- Research with identifiable data. Often needs IRB review on top of HIPAA.
Never
- Untrained staff using the tool. Workforce training isn't optional.
- Personal email addresses for the chat account. PHI in a personal Gmail or iCloud account is a breach.
- Screenshots of chats shared to non-BAA tools (Slack workspaces without a BAA, personal phones, group texts).
- Pasting PHI into consumer ChatGPT or Claude.ai by mistake. If it happens, it's a reportable breach in most cases. Document, assess, and notify per the practice's breach response procedure.
- Skipping the BAA "just for testing" with real patient data.
Vendor evaluation checklist
Twelve points to walk through with any AI chat vendor before signing a BAA. If a vendor can't answer more than a couple of these in writing, that's a sign to keep looking.
| # | Question | What to look for |
|---|---|---|
| 1 | Do you offer a BAA? | Yes, and they can send the standard text on request before pricing discussion. |
| 2 | Do you provide audit logs of PHI access? | Yes, with user-level granularity, exportable, retained per BAA terms. |
| 3 | What encryption is used in transit and at rest? | TLS 1.2+ in transit (1.3 preferred), AES-256 at rest, with key management documented. |
| 4 | What access controls do you support? | SSO/SAML, role-based access, MFA enforcement, automatic logoff, unique user IDs. |
| 5 | What's the retention policy? | Specific number of days, written in the BAA or DPA. Shorter is better. ZDR/no-retention options preferred. |
| 6 | Do you train models on customer data? | No, contractually, with that clause in the BAA or master agreement. |
| 7 | What's the breach notification SLA? | Stated in hours or days. 24 to 72 hours is ideal; 60 days is the legal maximum. |
| 8 | Where is data stored (data residency)? | US regions only for US covered entities, or contractually documented if multi-region. |
| 9 | Who are your sub-processors? | Published list, updated when changes occur, BAAs flowed down to each. |
| 10 | What's the support tier for compliance issues? | Named contact, dedicated channel, response SLA in writing. |
| 11 | What's the pricing structure? | Clear seat or usage pricing, no surprise overage on PHI volume. |
| 12 | How is data deletion handled on request? | Defined process, timeline, and certificate of destruction available. |
Print this, walk through it with the vendor's compliance team, and keep the responses on file. That document, plus the signed BAA, plus the staff training record, plus the audit log review schedule, is the core of a defensible HIPAA posture for AI chat. It's not glamorous. It does mean that when the next OCR audit cycle comes through, the practice has receipts.
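Most of these answers have to come from the vendor in writing, but the transit half of question 3 can be spot-checked from any laptop. A minimal sketch using Python's standard library; the hostname is illustrative.

```python
import socket
import ssl

def negotiated_tls(host: str, port: int = 443) -> str:
    """Connect to the vendor endpoint and report the negotiated TLS version."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

print(negotiated_tls("api.anthropic.com"))
```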
Frequently asked questions
What makes an AI chat tool HIPAA-compliant?
Three things together: a signed Business Associate Agreement with the vendor, technical safeguards (encryption in transit and at rest, access controls, audit logs, integrity controls), and administrative safeguards on your side (workforce training, a named privacy officer, documented policies). Encryption alone isn't compliance. A BAA alone isn't compliance. You need the full set.
Is ChatGPT HIPAA-compliant?
Consumer ChatGPT and ChatGPT Plus are not. There's no BAA available on those plans, conversations are stored under terms that allow training, and the vendor isn't bound as a business associate. ChatGPT Enterprise and the OpenAI API can be made compliant because OpenAI offers a BAA on those tiers, but a BAA has to be signed before any PHI is sent. Full breakdown here.
Is Claude.ai HIPAA-compliant?
Claude Free, Pro, and Team are not HIPAA-compliant. Anthropic offers a BAA on Claude Enterprise and on the Anthropic API. Anthropic API has 7-day operational log retention and never trains on inputs or outputs. PrivateClaude Business runs on the Anthropic API and offers a BAA at the application layer, with no chat history stored.
What is a BAA and who needs to sign one?
A Business Associate Agreement is the contract that legally binds a vendor to HIPAA's rules when that vendor creates, receives, maintains, or transmits PHI on behalf of a covered entity. Healthcare providers, health plans, and healthcare clearinghouses (covered entities) must have a BAA on file with any vendor that touches PHI. Without a BAA, sending PHI to that vendor is a violation.
What's the minimum technical bar for HIPAA-compliant AI?
TLS 1.2 or higher for data in transit, AES-256 for data at rest, audit logs that record every PHI access (who, what, when), unique user IDs and automatic logoff, role-based access controls, data backup, and integrity controls that detect tampering. The Security Rule lists these as required and addressable safeguards under 45 CFR 164.312.
Can I use a free AI tool with PHI if I just don't put names in?
Removing names doesn't necessarily de-identify data. HIPAA's Safe Harbor method requires removal of 18 specific identifiers, including names, dates more granular than year, geographic identifiers smaller than state, account numbers, biometric identifiers, and more. Even then, if combined data could identify a patient, it's still PHI. The safer path is to use a BAA-backed tool, not to try to anonymize on the fly.
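A toy example of why name removal falls short: scrub the name and the note is still full of Safe Harbor identifiers.

```python
import re

note = "Jane Doe, DOB 03/14/1962, MRN 448812, seen 05/02/2026 in Tulsa for hypertension."
scrubbed = re.sub(r"Jane Doe", "[NAME]", note)
print(scrubbed)
# [NAME], DOB 03/14/1962, MRN 448812, seen 05/02/2026 in Tulsa for hypertension.
# Full dates, a medical record number, and a city smaller than a state all
# remain -- this is still PHI under the Safe Harbor standard.
```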
How fast does a HIPAA-compliant vendor have to notify me of a breach?
Under the Breach Notification Rule, a business associate must notify the covered entity without unreasonable delay and no later than 60 days after discovery of a breach. The covered entity then has 60 days from discovery to notify affected individuals, HHS, and (for breaches affecting 500+) the media. Many BAAs negotiate shorter SLAs, often 24 to 72 hours.
What can a small practice realistically use AI chat for under HIPAA?
Drafting clinical notes from rough notes, summarizing intake forms, drafting patient communications (after clinician review), suggesting billing codes for review, internal documentation, policy drafting, and operational analysis. The chat tool isn't a clinician. It assists with paperwork. Final clinical decisions stay with the licensed provider, and the AI's output should be reviewed before it touches a patient.
PrivateClaude for regulated teams.
BAA available. No stored chat history, 7-day API log retention. Self-serve or deploy in your VPC. Talk to us about your compliance requirements.
Contact sales