AI Chat for Law Firms
Privilege risk with consumer LLMs, ABA Formal Opinion 512, and what every firm should require from any AI vendor before sending a single client matter through it.
The privilege problem in one paragraph
Attorney-client privilege protects communications between lawyer and client that are kept in confidence for the purpose of legal advice. Confidentiality is the load-bearing word. When privileged content is pasted into a consumer AI tool that stores the conversation on third-party servers, under terms that permit training on user inputs and indefinite retention, the communication has been disclosed to a third party who is not the lawyer's agent under any conventional reading. That disclosure can waive privilege. Most consumer AI products fall in this bucket by default: ChatGPT Free, Plus, and Pro, and Claude.ai Free and Pro store conversations on vendor servers under terms that, absent specific opt-outs, allow training and lengthy retention; Team plans improve on those defaults but, as discussed below, still fall short of what privileged work requires. That's the risk in one paragraph. Everything else in this piece is what to do about it.
ABA Formal Opinion 512 (2024) on generative AI
In July 2024 the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, the first ABA-level ethics guidance specifically addressing lawyers' use of generative AI tools. The opinion does not create new rules. It applies the existing Model Rules of Professional Conduct to a new technology and tells lawyers what those rules already require.
The points worth memorizing:
- Competence (Rule 1.1). A lawyer using generative AI must understand, at a level appropriate to the use, what the technology does, how it can fail, and how it handles client data. Ignorance of the tool is not a defense.
- Confidentiality (Rule 1.6). Lawyers must take reasonable steps to prevent unauthorized disclosure of client information. That obligation extends to any third party the lawyer transmits client information to, including an AI vendor.
- Supervision (Rules 5.1 and 5.3). AI is treated analogously to non-lawyer staff. The lawyer is responsible for reviewing AI output, verifying accuracy, and ensuring the tool is used consistent with the rules.
- Candor to the tribunal (Rule 3.3). Hallucinated case citations and fabricated quotes filed with a court are sanctionable. The lawyer signs the brief; the tool does not.
- Reasonable fees (Rule 1.5). A lawyer cannot bill a client for hours saved by AI as if the work had been done manually, and must consider whether costs of AI tools are properly passed through.
- Client communication and informed consent. In some matters, particularly where client information is being sent to a third-party AI tool, the opinion contemplates that informed consent may be appropriate.
The Model Rules already cover AI. A lawyer who deploys generative AI without understanding how the tool handles client data, without supervising its output, and without considering whether the data flow waives confidentiality, is exposed under rules that already exist.
State-level guidance
The ABA opinion is persuasive authority; the rules that bind individual lawyers are set by their state bars. California, New York, Florida, and several other state bars have issued their own formal opinions or practical guidance on generative AI, and more are in progress. The state-level opinions track the ABA framework closely on the core points: confidentiality, competence, supervision, billing. Some go further on specific topics, such as disclosure to clients or use of AI in litigation filings.
Practical takeaway for managing partners and risk officers: pull the most recent ethics opinion from every jurisdiction your firm practices in before approving an AI tool firm-wide. The substance is largely consistent, but the specifics on disclosure, billing, and acceptable use can vary.
What competence under Rule 1.1 requires
Comment 8 to Model Rule 1.1, which explicitly ties competence to keeping abreast of relevant technology, has been adopted in over 40 states. It used to be the e-discovery comment. It's now the AI comment too. ABA 512 puts that beyond reasonable dispute.
What this looks like in practice:
- A lawyer using AI on client matters should be able to describe, in plain English, what the tool does with the prompts they send it.
- The firm should have a written AI policy that names approved tools, prohibited uses, and the supervision standard for AI-assisted work product.
- Associates and staff using AI should be trained, not just told the rules.
- For client-facing engagements, consider whether AI use should be disclosed in the engagement letter.
The flip side also applies. A lawyer who refuses to consider AI on principle, where a competent practitioner would use it to a client's benefit, is also exposed under Rule 1.1. The duty cuts both ways.
The data flow that keeps privilege intact
Privileged communications must be kept in confidence. There are three defensible paths for using AI on privileged work:
- The vendor is your sub-agent under the engagement. Treat the AI vendor the way you'd treat a court reporter, an outside copy service, or a contract paralegal. There is a written agreement in place that designates the vendor as a confidential service provider working under your supervision, with appropriate confidentiality and data-handling terms. Privilege travels with the agency relationship.
- The AI is a local tool that doesn't transmit data to a third party. An on-device model that runs entirely on firm-controlled hardware is the cleanest version of this. No third-party transmission, no confidentiality question.
- The data sent to the AI is stripped of privileged content. Names, matter identifiers, and facts that would identify the client are removed before the prompt is sent. The AI sees a generic legal question with no client connection. This works in theory and is workable in practice for some uses, but redaction at scale is hard, and one slip undoes the protection (a minimal sketch of this approach follows below).
Most firms find option 1 is the realistic path for general AI assistance, with option 3 layered on top for specific high-sensitivity matters. Option 2 is appropriate for the most sensitive work but typically requires more infrastructure than a small or mid-size firm wants to maintain.
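For illustration only, here is a minimal sketch of what the redaction path can look like, assuming the firm maintains its own list of client-identifying terms per matter. The helper function and term list are hypothetical, not a feature of any particular tool, and a keyword list will not catch facts that re-identify the client, which is why human review before anything leaves the firm is still required.

```python
# Minimal redaction sketch (hypothetical helper, not a vendor feature).
# A term list catches names and matter identifiers, but it will not catch
# facts that re-identify the client -- human review is still required.
import re

def redact(text: str, identifiers: list[str], placeholder: str = "[REDACTED]") -> str:
    """Replace known client-identifying terms before a prompt leaves the firm."""
    for term in sorted(identifiers, key=len, reverse=True):  # longest terms first
        text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
    return text

prompt = redact(
    "Does Acme Corp's indemnity clause in the Smith engagement survive assignment?",
    identifiers=["Acme Corp", "Smith"],
)
# -> "Does [REDACTED]'s indemnity clause in the [REDACTED] engagement survive assignment?"
```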
Why Claude.ai Team and ChatGPT Team don't qualify by default
Team plans are an improvement over consumer plans. They typically remove training on inputs by contract, give administrators some user-management capability, and add a basic compliance posture. They are not, by default, suitable for a law firm sending privileged material through them.
Specifically:
- No legal-industry-specific written agreement. A standard commercial Team agreement is not an engagement-style confidential vendor contract. It does not designate the vendor as the firm's sub-agent for privileged work.
- Training posture varies by contract. Some Team contracts default to no training; some require an explicit opt-out; some still permit certain forms of usage analysis. Read the actual contract.
- Indefinite chat history under the firm's account. Chats are stored on vendor servers, attached to the firm's account, with no defined deletion timeline matched to engagement closure.
- No audit log per matter. When a partner needs to know what associates put into the AI on a specific matter, the Team admin console rarely answers that question cleanly.
- No commitment to insurance, breach SLA, or sub-processor disclosure at the level a risk officer would expect from a service provider handling privileged material.
What a compliant AI vendor looks like for a firm
Before sending a single client matter through any AI tool, the firm should require the following from the vendor in writing. Use this as a checklist for procurement.
| Requirement | Why it matters |
|---|---|
| Signed written agreement treating vendor as confidential service provider | Establishes the agency relationship that lets privilege travel with the data. Not a click-through ToS. |
| No training on firm inputs or outputs | Eliminates the most direct waiver risk. Should be unconditional, not opt-out. |
| Defined retention with short default | The shorter, the better. Anthropic's API standard is 7-day operational logs, then auto-delete. That's a reasonable benchmark. |
| Audit logs accessible to firm administrators | Lets the firm answer the "what was sent into the tool on matter X" question without depending on the vendor. |
| Breach notification SLA | Defined timeline (24 to 72 hours) and contact protocol. Required by state bars in many breach scenarios. |
| Sub-processor disclosure | The vendor's vendors. The firm needs to know who else touches the data. |
| Deletion on demand | When an engagement closes or a client requests it, the data must be deletable on a defined timeline. |
| Encryption in transit and at rest | Baseline. TLS in transit, AES-256 at rest is the standard expectation. |
| Conflict-of-interest representation | Does the vendor serve opposing counsel, opposing parties, or government adversaries on the same matters? The firm should know. |
| Insurance coverage | Cyber liability and professional liability at coverage levels appropriate to the firm's exposure. Certificate of insurance on file. |
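To make the audit-log requirement concrete, here is one hypothetical shape such a record might take; the field names are illustrative rather than any vendor's actual schema. The point is that the firm can see who sent what, when, and on which matter, without the vendor having to store the conversation content itself.

```python
# Hypothetical per-request audit record: access metadata only, no conversation content.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuditRecord:
    timestamp: datetime   # when the request was made
    user: str             # firm user who made it, e.g. an associate's email
    matter_id: str        # firm-side matter identifier supplied by the firm's app
    model: str            # which model handled the request
    prompt_chars: int     # size metadata only; the prompt itself is not stored
    outcome: str          # "completed", "blocked", or "error"
```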
If a vendor cannot meet this list, that doesn't mean the tool is useless. It means the tool should not see privileged material. Use it on the redacted-data path only, or not at all.
Use cases that work well
Even before any vendor agreement is signed, there are AI uses that pose minimal privilege risk because the inputs are not privileged in the first place. These are the natural starting point for any firm.
- Legal research on public law. "What does the Second Circuit say about X" is not privileged content.
- Contract redlining with redaction. Strip party names and identifying terms; ask the AI to flag risky clauses against a standard.
- First-draft generation. Motion templates, demand letters, discovery responses, where the prompt is structural rather than fact-specific.
- Document summarization of public filings. SEC documents, public court filings, regulations.
- Deposition prep questions. Generated against a fact pattern that has been redacted of identifying detail.
- Jury instruction drafting. Working from published model instructions.
- Internal training and brainstorming. Hypotheticals with no real-client tie.
Use cases to avoid or handle carefully
The other side of the line. These are uses that should not happen on a tool without an appropriate confidential vendor agreement, and should be approached carefully even with one.
Do not paste full client matters with PII into a consumer AI tool. Do not upload deposition transcripts directly. Do not ask AI to assess settlement value with the full case file attached. Do not use a free-tier consumer chatbot to draft anything that names a real client. If you wouldn't email it to a stranger, don't paste it into ChatGPT Free.
- Pasting a full matter file with client names, opposing parties, and identifying facts. This is the canonical waiver scenario.
- Uploading deposition transcripts in full. Even with redaction, transcript context often re-identifies parties.
- Asking AI to value a settlement using the full case posture. Strategy work product plus client identifying detail is the most sensitive material the firm holds.
- Drafting communications to opposing counsel using AI fed the full client position. Use template-based drafting instead.
- Storing AI-generated work product in the AI tool's chat history as the system of record. Move it to the firm's document management system and clear the AI history.
What Private Claude offers a firm
Private Claude is a chat interface for Claude built on the Anthropic API rather than the Claude.ai consumer product. The structural differences matter for legal practice.
- No training on inputs. Built into Anthropic's API contract, not a setting to find.
- 7-day operational log auto-delete at Anthropic. No indefinite retention.
- No chat history at the PrivateClaude application layer. Conversations live in the browser session. Close the tab, the conversation is gone. Audit logs of access are kept separately for firm administrators on Business plans.
- Business tier with confidential vendor agreement available. A written agreement appropriate for legal-industry use, designating the vendor relationship as a confidential service provider.
- Audit logs accessible to firm admins. Who accessed the tool, when, with what models. Without storing the conversation content itself.
- $1449/year White Label for small firms. Branded with the firm's name. Sized for solo and small-firm budgets.
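For readers who want to see the structural difference, here is a minimal sketch of a direct API call of the kind an API-based product is built on, using the anthropic Python SDK. The model name and prompt are illustrative, and an API key in the environment is assumed; this is a sketch of the integration pattern, not Private Claude's actual code.

```python
# Minimal sketch of a direct Anthropic API call, as opposed to the Claude.ai
# consumer product. Assumes the anthropic SDK is installed and ANTHROPIC_API_KEY
# is set in the environment; the model name below is illustrative.
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute a current model name
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize the elements of promissory estoppel."}],
)
print(response.content[0].text)
```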
For a deeper look at how the data flow works, see our breakdown of BAA-backed AI chat (the BAA structure for healthcare maps closely to the confidential vendor structure for legal). For the broader business product overview, see Private Claude for Business.
Your mileage may vary based on jurisdiction, practice area, and client engagement terms. None of this is legal advice. It's a framework for asking the right questions of any vendor before client matters touch the tool.
Frequently asked questions
Does using ChatGPT or Claude.ai waive attorney-client privilege?
It can. Privilege depends on communications being kept in confidence. When privileged content is sent into a consumer AI tool that stores it on third-party servers under terms permitting training and indefinite retention, the communication is no longer in confidence in the way courts have historically required. Whether a court would find waiver depends on jurisdiction, the specific tool's terms, and how the firm uses it, but the safer reading is that consumer AI tools should not receive client-identifying privileged material.
What is ABA Formal Opinion 512?
ABA Formal Opinion 512, issued by the ABA Standing Committee on Ethics and Professional Responsibility in July 2024, addresses lawyers' use of generative AI tools. It applies existing Model Rules of Professional Conduct to AI: lawyers must understand the technology's capabilities and risks (competence), protect client confidential information, supervise AI output as they would non-lawyer staff, maintain candor to the tribunal, and ensure fees charged for AI-assisted work are reasonable.
Have state bars issued AI guidance?
Several have. California, New York, Florida, and others have issued formal opinions or practical guidance on lawyer use of generative AI. Most track the ABA's framework: confidentiality, competence, supervision, billing. Check your jurisdiction's most recent ethics opinions before deploying any AI tool firm-wide.
Does Rule 1.1 require lawyers to understand AI?
Comment 8 to Model Rule 1.1 ties the duty of competence to keeping abreast of changes in the law and its practice, including the benefits and risks of relevant technology. Over 40 states have adopted the comment. ABA Formal Opinion 512 confirms generative AI falls within that scope. A lawyer who deploys AI without understanding how it handles client data, or who ignores AI entirely when it would benefit a client, is exposed under Rule 1.1.
Can I use a consumer AI tool if I redact the matter first?
Redaction is one of the three defensible paths described in this article. If the data sent to the tool contains no client-identifying information, no privileged communications, and no facts that could be reverse-identified, the privilege concern is reduced. The practical problem is that effective redaction at scale is hard, and one slip undoes the protection. Most firms find it easier to use a vendor with appropriate contractual terms than to redact every prompt.
What is the minimum a firm should require from any AI vendor?
A signed written agreement treating the vendor as a confidential service provider, no use of inputs or outputs for model training, defined short retention with deletion-on-demand, audit logs accessible to the firm, breach notification with a defined SLA, sub-processor disclosure, encryption in transit and at rest, conflict-of-interest representations, and the vendor's professional liability or cyber insurance details. A standard click-through consumer ToS does not meet this bar.
Is Claude.ai Team or ChatGPT Team enough for a law firm?
Not by default. Team plans typically improve on consumer plans by removing training on inputs, but they often retain chat history indefinitely under the firm's account, lack a confidential-vendor written agreement specific to legal practice, do not provide per-matter audit logs, and do not commit to deletion timelines compatible with engagement closure. They can be a starting point, but partners and risk officers should review the actual contract terms before using them on client matters.
What does Private Claude offer law firms?
The Business tier ($1449/year White Label) includes a confidential vendor agreement appropriate for legal-industry use, runs on the Anthropic API with 7-day operational log auto-delete and no training on inputs, has no chat history at the PrivateClaude application layer, and provides audit logs for firm administrators. It is built so that the data flow is structurally compatible with maintaining attorney-client privilege when used appropriately.
Private Claude for regulated teams.
BAA available. Zero data retention. Self-serve or deploy in your VPC. Talk to us about your compliance requirements.
Contact sales