AI Chat for Law Firms

Privilege risk with consumer LLMs, ABA Formal Opinion 512, and what every firm should require from any AI vendor before sending a single client matter through it.

The privilege problem in one paragraph

Attorney-client privilege protects communications between lawyer and client that are kept in confidence for the purpose of legal advice. Confidentiality is the load-bearing word. When privileged content is pasted into a consumer AI tool that stores the conversation on third-party servers, under terms that permit training on user inputs and indefinite retention, the communication has been disclosed to a third party who is not the lawyer's agent under any conventional reading. That disclosure can waive privilege. Most consumer AI products fall in this bucket by default: ChatGPT Free, Plus, Pro, and Team, and Claude.ai Free, Pro, and Team all store conversations on vendor servers under terms that, absent specific opt-outs, allow training and lengthy retention. That's the risk in one paragraph. Everything else in this piece is what to do about it.

ABA Formal Opinion 512 (2024) on generative AI

In July 2024 the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, the first ABA-level ethics guidance specifically addressing lawyers' use of generative AI tools. The opinion does not create new rules. It applies the existing Model Rules of Professional Conduct to a new technology and tells lawyers what those rules already require.

The points worth memorizing:

  1. Competence (Rule 1.1). Lawyers must understand the tool's capabilities and risks before relying on it.
  2. Confidentiality (Rule 1.6). Client information must not flow into tools whose storage, retention, or training terms would disclose it.
  3. Supervision (Rules 5.1 and 5.3). AI output must be reviewed the way a non-lawyer assistant's work is reviewed.
  4. Candor to the tribunal (Rule 3.3). The lawyer remains responsible for the accuracy of anything filed, AI-assisted or not.
  5. Fees (Rule 1.5). Fees charged for AI-assisted work must be reasonable.

ABA 512 in one line

The Model Rules already cover AI. A lawyer who deploys generative AI without understanding how the tool handles client data, without supervising its output, and without considering whether the data flow waives confidentiality, is exposed under rules that already exist.

State-level guidance

The ABA opinion is persuasive authority only; state bars set the rules that bind individual lawyers. California, New York, Florida, and several other state bars have issued their own formal opinions or practical guidance on generative AI, and more are in progress. The state-level opinions track the ABA framework closely on the core points: confidentiality, competence, supervision, billing. Some go further on specific topics, such as disclosure to clients or use of AI in litigation filings.

Practical takeaway for managing partners and risk officers: pull the most recent ethics opinion from every jurisdiction your firm practices in before approving an AI tool firm-wide. The substance is largely consistent, but the specifics on disclosure, billing, and acceptable use can vary.

What competence under Rule 1.1 requires

Comment 8 to Model Rule 1.1, which explicitly ties competence to keeping abreast of relevant technology, has been adopted in over 40 states. It used to be the e-discovery comment. It's now the AI comment too. ABA 512 puts that beyond reasonable dispute.

What this looks like in practice: before approving a tool, a lawyer should understand how it stores, retains, and trains on inputs; review its output rather than forwarding it unchecked; and be able to explain the tool's data flow to a client who asks.

The flip side also applies. A lawyer who refuses to consider AI on principle, where a competent practitioner would use it to a client's benefit, is also exposed under Rule 1.1. The duty cuts both ways.

The data flow that keeps privilege intact

Privileged communications must be kept in confidence. There are three lawful paths for using AI on privileged work:

  1. The vendor is your sub-agent under the engagement. Treat the AI vendor the way you'd treat a court reporter, an outside copy service, or a contract paralegal. There is a written agreement in place that designates the vendor as a confidential service provider working under your supervision, with appropriate confidentiality and data-handling terms. Privilege travels with the agency relationship.
  2. The AI is a local tool that doesn't transmit data to a third party. An on-device model that runs entirely on firm-controlled hardware is the cleanest version of this. No third-party transmission, no confidentiality question.
  3. The data sent to the AI is stripped of privileged content. Names, matter identifiers, and facts that would identify the client are removed before the prompt is sent. The AI sees a generic legal question with no client connection. This works in theory and is workable in practice for some uses, but redaction at scale is hard, and one slip undoes the protection.

Most firms find option 1 is the realistic path for general AI assistance, with option 3 layered on top for specific high-sensitivity matters. Option 2 is appropriate for the most sensitive work but typically requires more infrastructure than a small or mid-size firm wants to maintain.
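Part of the redacted-data path (option 3) can be automated before a prompt ever leaves the firm. The sketch below is illustrative only: the client name, matter-number format, and regex patterns are hypothetical, and pattern matching is a first pass, not a substitute for attorney review.

```python
import re

# Hypothetical patterns. A real deployment would load the firm's client list
# and matter-number conventions, and a human would review before sending.
PATTERNS = {
    "CLIENT_NAME": re.compile(r"\bAcme Holdings\b"),       # known client names
    "MATTER_ID":   re.compile(r"\b\d{4}-\d{4}\b"),         # e.g. 2024-0137
    "EMAIL":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace client-identifying strings with generic placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

raw = "Acme Holdings (matter 2024-0137) asks whether a non-compete is enforceable."
print(scrub(raw))
# -> [CLIENT_NAME] (matter [MATTER_ID]) asks whether a non-compete is enforceable.
```

Note what the sketch cannot do: it will not catch a client identified by context ("our client, the largest widget maker in Ohio"), which is exactly the kind of slip that undoes the protection.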

Why Claude.ai Team and ChatGPT Team don't qualify by default

Team plans are an improvement over consumer plans. They typically remove training on inputs by contract, give administrators some user-management capability, and add a basic compliance posture. They are not, by default, suitable for a law firm sending privileged matter through them.

Specifically:

  1. Chat history is typically retained indefinitely under the firm's account, with no short default retention window.
  2. There is no confidential-vendor written agreement specific to legal practice, only a standard commercial contract.
  3. There are no per-matter audit logs a firm administrator can pull.
  4. There is no commitment to deletion timelines compatible with engagement closure.

What a compliant AI vendor looks like for a firm

Before sending a single client matter through any AI tool, the firm should require the following from the vendor in writing. Use this as a checklist for procurement.

Each requirement, and why it matters:

  1. Signed written agreement treating the vendor as a confidential service provider. Establishes the agency relationship that lets privilege travel with the data. Not a click-through ToS.
  2. No training on firm inputs or outputs. Eliminates the most direct waiver risk. Should be unconditional, not opt-out.
  3. Defined retention with a short default. The shorter, the better. Anthropic's API standard is 7-day operational logs, then auto-delete. That's a reasonable benchmark.
  4. Audit logs accessible to firm administrators. Lets the firm answer the "what was sent into the tool on matter X" question without depending on the vendor.
  5. Breach notification SLA. Defined timeline (24 to 72 hours) and contact protocol. Required by state bars in many breach scenarios.
  6. Sub-processor disclosure. The vendor's vendors. The firm needs to know who else touches the data.
  7. Deletion on demand. When an engagement closes or a client requests it, the data must be deletable on a defined timeline.
  8. Encryption in transit and at rest. Baseline. TLS in transit, AES-256 at rest is the standard expectation.
  9. Conflict-of-interest representation. Does the vendor serve opposing counsel, opposing parties, or government adversaries on the same matters? The firm should know.
  10. Insurance coverage. Cyber liability and professional liability at coverage levels appropriate to the firm's exposure. Certificate of insurance on file.

If a vendor cannot meet this list, that doesn't mean the tool is useless. It means the tool should not see privileged material. Use it on the redacted-data path only, or not at all.
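The checklist above lends itself to structured tracking during procurement. A hypothetical sketch, with requirement names paraphrased from the checklist and illustrative pass/fail inputs; the rule it encodes is the one stated above: any unmet requirement means no privileged material.

```python
# Requirement keys paraphrase the procurement checklist; this is a sketch of
# one way a risk officer might track vendor review, not a prescribed tool.
REQUIREMENTS = [
    "signed_confidential_vendor_agreement",
    "no_training_on_inputs_or_outputs",
    "defined_short_retention",
    "firm_accessible_audit_logs",
    "breach_notification_sla",
    "subprocessor_disclosure",
    "deletion_on_demand",
    "encryption_in_transit_and_at_rest",
    "conflict_of_interest_representation",
    "insurance_coverage",
]

def review(vendor_answers: dict) -> list:
    """Return the requirements the vendor has not met in writing."""
    return [r for r in REQUIREMENTS if not vendor_answers.get(r, False)]

# Illustrative input: a vendor that meets everything except audit logs.
answers = {r: True for r in REQUIREMENTS}
answers["firm_accessible_audit_logs"] = False

missing = review(answers)
if missing:
    print("Not cleared for privileged material. Missing:", missing)
```

The deliberate design choice is `get(r, False)`: a requirement the vendor never answered counts as unmet, mirroring the article's rule that commitments must exist in writing, not by silence.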

Use cases that work well

Even before any vendor agreement is signed, there are AI uses that pose minimal privilege risk because the inputs are not privileged in the first place. These are the natural starting point for any firm.

Use cases to avoid or handle carefully

The other side of the line. These are uses that should not happen on a tool without an appropriate confidential vendor agreement, and should be approached carefully even with one.

What not to do

Do not paste full client matters with PII into a consumer AI tool. Do not upload deposition transcripts directly. Do not ask AI to assess settlement value with the full case file attached. Do not use a free-tier consumer chatbot to draft anything that names a real client. If you wouldn't email it to a stranger, don't paste it into ChatGPT Free.

What Private Claude offers a firm

Private Claude is a chat interface for Claude built on the Anthropic API rather than the Claude.ai consumer product. The structural differences matter for legal practice:

  1. No training on inputs or outputs, under the API's terms rather than an opt-out.
  2. 7-day operational log retention on the API, then auto-delete.
  3. No chat history stored at the PrivateClaude application layer.
  4. Audit logs available to firm administrators.
  5. On the Business tier, a confidential vendor agreement appropriate for legal-industry use.

For a deeper look at how the data flow works, see our breakdown of BAA-backed AI chat (the BAA structure for healthcare maps closely to the confidential vendor structure for legal). For the broader business product overview, see Private Claude for Business.

Your mileage may vary based on jurisdiction, practice area, and client engagement terms. None of this is legal advice. It's a framework for asking the right questions of any vendor before client matters touch the tool.

Frequently asked questions

Does using ChatGPT or Claude.ai waive attorney-client privilege?

It can. Privilege depends on communications being kept in confidence. When privileged content is sent into a consumer AI tool that stores it on third-party servers under terms permitting training and indefinite retention, the communication is no longer in confidence in the way courts have historically required. Whether a court would find waiver depends on jurisdiction, the specific tool's terms, and how the firm uses it, but the safer reading is that consumer AI tools should not receive client-identifying privileged material.

What is ABA Formal Opinion 512?

ABA Formal Opinion 512, issued by the ABA Standing Committee on Ethics and Professional Responsibility in July 2024, addresses lawyers' use of generative AI tools. It applies existing Model Rules of Professional Conduct to AI: lawyers must understand the technology's capabilities and risks (competence), protect client confidential information, supervise AI output as they would non-lawyer staff, maintain candor to the tribunal, and ensure fees charged for AI-assisted work are reasonable.

Have state bars issued AI guidance?

Several have. California, New York, Florida, and others have issued formal opinions or practical guidance on lawyer use of generative AI. Most track the ABA's framework: confidentiality, competence, supervision, billing. Check your jurisdiction's most recent ethics opinions before deploying any AI tool firm-wide.

Does Rule 1.1 require lawyers to understand AI?

Comment 8 to Model Rule 1.1 ties the duty of competence to keeping abreast of changes in the law and its practice, including the benefits and risks of relevant technology. Over 40 states have adopted the comment. ABA Formal Opinion 512 confirms generative AI falls within that scope. A lawyer who deploys AI without understanding how it handles client data, or who ignores AI entirely when it would benefit a client, is exposed under Rule 1.1.

Can I use a consumer AI tool if I redact the matter first?

Redaction is one of the three lawful paths described in this article. If the data sent to the tool contains no client-identifying information, no privileged communications, and no facts that could be reverse-identified, the privilege concern is reduced. The practical problem is that effective redaction at scale is hard, and one slip undoes the protection. Most firms find it easier to use a vendor with appropriate contractual terms than to redact every prompt.

What is the minimum a firm should require from any AI vendor?

A signed written agreement treating the vendor as a confidential service provider, no use of inputs or outputs for model training, defined short retention with deletion-on-demand, audit logs accessible to the firm, breach notification with a defined SLA, sub-processor disclosure, encryption in transit and at rest, conflict-of-interest representations, and the vendor's professional liability or cyber insurance details. A standard click-through consumer ToS does not meet this bar.

Is Claude.ai Team or ChatGPT Team enough for a law firm?

Not by default. Team plans typically improve on consumer plans by removing training on inputs, but they often retain chat history indefinitely under the firm's account, lack a confidential-vendor written agreement specific to legal practice, do not provide per-matter audit logs, and do not commit to deletion timelines compatible with engagement closure. They can be a starting point, but partners and risk officers should review the actual contract terms before using them on client matters.

What does Private Claude offer law firms?

The Business tier ($1449/year White Label) includes a confidential vendor agreement appropriate for legal-industry use, runs on the Anthropic API with 7-day operational log auto-delete and no training on inputs, has no chat history at the PrivateClaude application layer, and provides audit logs for firm administrators. It is built so that the data flow is structurally compatible with maintaining attorney-client privilege when used appropriately.

Private Claude for regulated teams.

BAA available. Zero data retention. Self-serve or deploy in your VPC. Talk to us about your compliance requirements.

Contact sales