Private AI for Therapy & Mental Health
Why most AI therapy apps still keep your chats, what's actually private, and how to use Claude as a support tool that doesn't follow you home.
Let's start with what AI is not. AI is not a therapist. It can't diagnose, can't prescribe, can't sit with you across a real human relationship over years. It doesn't have a license, and the times it sounds like it does are the times to be most careful.
What it can do is be a place to talk at 2am when no human is awake. A place to articulate something you don't have words for yet. A place to draft what you actually want to say to your therapist on Tuesday. That's a real thing. It's also a thing most people are doing without realizing the conversation is being kept.
This piece is about how to use AI as mental health support without leaving a permanent record of your worst nights in someone else's database.
First, though: if you're in crisis right now, an AI is not the right tool. Real humans are standing by:
- 988 — US Suicide & Crisis Lifeline (call or text)
- Crisis Text Line — text HOME to 741741
- 911 — for an immediate emergency
- Outside the US: see findahelpline.com for your country's equivalent.
A small story
Devin is a 34-year-old project manager in Denver. Last fall his marriage was falling apart and he wasn't sleeping. Around 2am most nights he'd open one of those popular AI therapy apps on his phone and type for an hour. Stuff he hadn't said out loud to anyone. The chatbot was patient, asked good questions, helped him calm down enough to sleep.
Six months later, after the divorce was final, he was poking through the app's settings and found his entire history. Every panic spiral. Every late-night confession. Every name. All sitting there in a list, neatly timestamped, ready to be exported. The app had told him in onboarding that it took privacy seriously. He'd believed that. He hadn't pictured this.
Nobody at the company was going to do anything bad with Devin's chats. That isn't really the point. The point is that the worst nights of his life had become a database row, and he hadn't understood the deal he'd made.
The big therapy and wellness AI apps
Let's look at what's actually under the hood of the popular apps. Most of them are not training their own models. They're wrapping a large language model from OpenAI or Anthropic, layering a coaching framework on top (CBT prompts, mood tracking, journaling), and selling that as the product. That's fine. It's also worth knowing, because the privacy situation depends as much on what the wrapper does with your data as on the underlying model.
| App | Underlying model | Stores chats? | Trains on chats? | BAA / FDA status |
|---|---|---|---|---|
| Woebot | Proprietary rules engine + LLM | Yes, on their servers | Used to improve the service, per privacy policy | FDA breakthrough device designation; no consumer BAA by default |
| Wysa | LLM-backed (proprietary stack) | Yes | De-identified data used, per policy | Enterprise BAA option; not the consumer default |
| Replika | GPT-based (historically) | Yes, indefinitely | Per privacy policy at time of writing | No |
| Youper | LLM-backed coaching layer | Yes | Used to improve the service, per policy | No consumer BAA |
| Claude.ai (consumer) | Claude (Anthropic) | Yes, indefinitely by default | Yes by default, opt-out available | No consumer BAA |
| Private Claude | Claude via API | No history saved | No, ever | BAA on business plan |
A few notes on this table. All of it is based on each company's published privacy policy at time of writing, so check the live policy before relying on any of it. Companies update their terms. Marketing copy on the homepage and the actual policy don't always match. Read the policy.
The pattern across the consumer apps: your chats are stored on their servers, often indefinitely, often used in some form to improve the product. "Improve the product" can mean a lot of things, from anonymized analytics to direct training data. Different apps draw the line in different places.
Why "HIPAA-compliant" claims are tricky
HIPAA gets thrown around a lot in mental health app marketing. It's worth slowing down on what it actually means.
HIPAA applies to covered entities (doctors, hospitals, insurance companies) and their business associates. It does not apply automatically to consumer software just because the software is about health. A meditation app collecting your journal entries is generally not a HIPAA covered entity. A chatbot you're using on your own dime is generally not a HIPAA covered entity. So when one of these apps says "we're HIPAA-compliant," what they often mean is one of these three things:
- They have a BAA with their cloud hosting provider. This is a real thing, but it just means AWS or Google Cloud has signed paperwork promising to handle data in HIPAA-compatible ways. It says almost nothing about what the app itself does with the data.
- They have an enterprise tier that's HIPAA-eligible for hospitals and employers, which is different from the consumer app you downloaded.
- They're using HIPAA as a vibe. Compliance language as a trust signal. The privacy policy itself never actually invokes HIPAA protections for individual users.
If HIPAA matters to you, look for the BAA itself. Not the marketing page. The actual contract. If the app won't sign a BAA with you (or with your therapist's practice), it's not HIPAA-covered for your use of it. That's not a scandal. It's just the legal reality. Read it knowing what it is.
The apps with real human counselors
Apps like BetterHelp and Talkspace are a different category. They're not AI chatbots. They connect you to licensed human therapists over text, video, and phone. Those therapists are bound by their state licensure rules and ethical codes, which provide a real layer of protection.
That layer doesn't extend to everything that happens in the app, though. The platform itself collects data: your intake forms, your matching answers, your scheduling, your usage patterns. That platform-level data lives in the company's database and follows whatever the privacy policy says, not the licensure code.
In 2023, the FTC settled with BetterHelp for $7.8 million over allegations that BetterHelp had shared users' mental health information with Facebook, Snapchat, Pinterest, and Criteo for advertising purposes, despite promising users their information would be kept private. The settlement is a matter of public record. It's not a one-off footnote either; it's a fairly clean illustration of how "mental health platform" doesn't automatically translate to "your data is protected." Read the privacy policy of any platform that's holding your mental health information. Marketing copy is not policy.
Using Claude as a support tool (what it's good at, what it isn't)
Claude wasn't built to be a therapist. It's a general-purpose AI. That turns out to be a feature, not a bug, for the specific use case of "I want to think through something hard at 11pm." It doesn't have a CBT script it's trying to push you through. It doesn't have a coaching personality. It just listens carefully and reflects back what it's hearing, which is sometimes exactly what you need.
Things Claude is genuinely useful for:
- Articulating something you don't have words for yet. Type a stream-of-consciousness mess and ask it to play back what it's hearing. Surprisingly often, that playback is the first time the feeling becomes legible.
- Organizing your thoughts before a therapy appointment. If you've ever sat down in your therapist's office and forgotten the three things you wanted to talk about, this is huge. Spend 15 minutes with Claude beforehand. Walk in with a list.
- Journaling about feelings. Some people write better in dialogue than in a blank notebook. There's overlap here with our piece on private AI journaling.
- Getting unstuck on a hard conversation. Drafting what you want to say to your spouse, your boss, your parent. Trying out the wording. Hearing how it lands.
- Working through a 2am spiral. Not as a substitute for help, but as a way to move from racing thoughts to written ones, which often takes the edge off enough to sleep.
Things Claude is not the right tool for:
- Active crisis. If you're thinking about hurting yourself, call 988. Don't open a chat tab.
- Diagnosis. Claude will sometimes float possibilities. Treat them as starting points for a conversation with a real clinician, not as conclusions.
- Medication advice. Don't.
- Replacing a real therapeutic relationship. If you have access to a human therapist and you're using Claude instead because it's easier, the easier thing is probably also the less helpful thing.
For broader context on using AI for sensitive topics generally, see our piece on asking AI about sensitive personal stuff.
Why no history is the right architecture for this
Here's the structural problem with most AI therapy apps. They want to be helpful over time, so they keep your chat history. They want to remember your name and your patterns and what you talked about last week. That's a reasonable product instinct. It's also the thing that creates almost all the privacy risk.
Once a conversation is stored, it can be subpoenaed in a divorce or custody dispute. It can be breached. It can be sold or shared if the company changes hands or changes its policy. It can be accessed by someone who gets into your account. The chat from the worst night of your life is a database row that lives forever in a system you don't control.
Private Claude takes the opposite position. There's no chat history. The conversation lives only in the browser tab while it's open. Close the tab and it's gone. Anthropic's API logs hold the request for 7 days for abuse detection, then auto-delete. Nothing is ever used to train future models. There's nothing in your account to look back on, by design.
For mental health, "no history" isn't a missing feature. It's the appropriate level of forgetting. The thing you said at 2am is allowed to not exist tomorrow. That's how a private journal works. It's how a real conversation with a friend works. The conversation served its purpose and then it ended. That's the architecture mental health support actually wants.
If you want the deeper architecture details on how this works, we wrote it up in Private Claude Chat.
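If you'd rather see the shape of it in code, here's a minimal sketch of a no-history chat loop. This is not Private Claude's actual implementation; the model id is illustrative and the key handling is a placeholder. The point is the architecture: the whole conversation is one in-memory array that's resent each turn and written nowhere.

```typescript
// Minimal "no history" chat loop: the conversation exists only in this
// in-memory array and disappears when the process (or browser tab) ends.
// Illustrative sketch; assumes ANTHROPIC_API_KEY is set in the environment.

type Msg = { role: "user" | "assistant"; content: string };

const messages: Msg[] = []; // the only copy of the conversation, anywhere

async function send(userText: string): Promise<string> {
  messages.push({ role: "user", content: userText });

  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-20250514", // illustrative model id
      max_tokens: 1024,
      messages, // full context is resent each turn; no server-side "history"
    }),
  });

  const data = await res.json();
  const reply: string = data.content?.[0]?.text ?? "";
  messages.push({ role: "assistant", content: reply });
  return reply; // never written to a database, localStorage, or disk
}

// One turn; end the process (or close the tab) and nothing remains.
send("I can't sleep and my thoughts are racing. Help me sort them out.").then(console.log);
```

Close the tab and that array is simply gone. There's nothing left to export, breach, or subpoena later.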
A practical setup for using AI as mental health support
If you're going to do this, do it right. Five concrete habits.
- Keep crisis numbers visible. Save 988 in your phone. Save the Crisis Text Line shortcut (text HOME to 741741). If you're in another country, save the equivalent from findahelpline.com. The whole premise of using AI as support is that it's there at 2am, which is also when you need to know exactly which other number to call if things tilt. Make that decision while you're calm, not while you're not.
- Use a private interface. If you're going to type your hardest stuff into an AI, don't type it into one that keeps a copy. That means either Private Claude (no history, BYOK to Anthropic) or a local model running on your own machine (there's a minimal sketch of the local option after this list). Don't use the consumer Claude.ai or ChatGPT default for this and assume the "incognito" toggle is doing what you think it's doing.
- Treat AI as a thinking partner, not a clinician. Useful framing: "help me articulate what I'm feeling" is good. "tell me what's wrong with me" is not the question to ask. The first uses the AI for what it's good at. The second is asking it to do something it isn't qualified to do, and it will sometimes answer anyway, which is the actual hazard.
- Keep the relationship with a human professional, if you have one. AI is a supplement, not a substitute. If you're seeing a therapist, keep seeing them. If you're not but you should be, the AI conversation is a useful place to figure out what you'd want to say in a first session, not a reason to skip the first session.
- Don't paste the chat into your therapist's portal. This one trips people up. You had a great AI conversation, you want to share it with your therapist. The instinct is good. The execution is wrong. Once the raw transcript is in their portal or their email, it's part of the therapy record, which is its own kind of permanent. Instead, share the insight you got out of it. "I realized this week that I keep doing X. I want to talk about why." That's the version your therapist actually needs.
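About the local-model option in the second habit: here's a minimal sketch, assuming you've installed Ollama, pulled a model, and have it serving on its default local port. The endpoint and model name are whatever your own setup uses; the point is that the text never leaves your machine.

```typescript
// Minimal local-model chat turn against an Ollama server on your own machine.
// Assumes Ollama is running on its default port and a model has been pulled.

async function localChat(userText: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model: "llama3.1",   // any model you've pulled locally
      stream: false,       // one JSON response instead of a token stream
      messages: [{ role: "user", content: userText }],
    }),
  });
  const data = await res.json();
  return data.message?.content ?? ""; // the reply never leaves localhost
}

localChat("Help me put words to what I'm feeling tonight.").then(console.log);
```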
The summary, if you want one sentence: AI can be a useful late-night thinking partner for hard feelings, but only if the conversation doesn't outlive the night. Pick the interface accordingly.
Frequently asked questions
Is AI therapy actually private?
Most consumer AI therapy apps store your conversations indefinitely on their servers and have privacy policies that allow some form of analytics, third-party processing, and in some cases data sharing with advertisers. Treat the chat as if a stranger could read it. If you want privacy, you need an interface where the conversation doesn't survive the session.
Can I use Claude as a therapist?
No. Claude is not a therapist, can't diagnose, can't prescribe, and can't replace clinical care. What it can do is be a private space to articulate feelings, organize your thoughts, or talk through a hard moment when no human is available. For a clinical relationship, see a licensed professional. If you're in crisis, call 988 in the US.
Are AI therapy apps HIPAA compliant?
Most consumer AI therapy apps are not covered entities under HIPAA, so HIPAA generally doesn't apply to them in the way people assume. Some claim compliance because they have a BAA with their hosting provider, which is a different thing than the app itself being a HIPAA-covered service. Read the privacy policy, not the marketing page.
What happened with BetterHelp and the FTC?
In 2023 the FTC settled with BetterHelp for $7.8 million over allegations that BetterHelp shared users' mental health information with Facebook, Snapchat, Pinterest, and Criteo for advertising, despite promising users their information would be kept private. It's a useful reminder that "mental health platform" doesn't automatically mean "data stays private."
What should I do if I'm in crisis?
In the US, call or text 988 for the Suicide & Crisis Lifeline, or text HOME to 741741 for the Crisis Text Line. For an immediate emergency, call 911. Outside the US, see findahelpline.com for your country's equivalent. AI is not the right tool for an active crisis.
Can my therapist read my AI chats?
Not unless you give them access. Don't paste full AI chat transcripts into your therapist's portal or email. Share the insight you got out of the conversation, not the raw transcript. The transcript usually says more than you mean to share, and once it's in your therapist's files it becomes part of the clinical record.
Does Private Claude store my mental health conversations?
No. Private Claude has no chat history. Conversations live in the browser tab while it's open. Close the tab and the conversation is gone. Anthropic's API logs hold the request for 7 days for abuse detection, then auto-delete. There's no permanent record anywhere by design.
Is it safe to journal about trauma with an AI?
It can be useful for articulation and getting unstuck, with two caveats. First, AI is not equipped to handle a trauma response in real time. If you start spiraling, stop and call a human. Second, do it in an interface that doesn't keep a copy. A trauma journal stored in a third-party database for years isn't a journal, it's a liability. Use a private interface or write it locally.
Use Claude. Keep it private.
Bring your own Anthropic API key. Start free with 50 Haiku and 25 Sonnet messages. Upgrade to $17/mo for Opus, file uploads, and Markdown exports.
Get started. No card required · Cancel anytime.