Before You Hire a Lawyer: In the US, Your AI Notes No Longer Enjoy Legal Protection

A Badly Underestimated Use Case

Before actually paying for a lawyer, many people do something that looks entirely reasonable: they first walk through the events themselves, thinking about their own weak spots, how the other side might attack, which clauses cut against them. This used to happen in a Word document or a notebook. Increasingly, it happens in ChatGPT or Claude. AI can turn scattered facts into a timeline, point out legal angles you hadn’t considered, even draft a defense memo for you to chew on.

Buried in this use case is a legal risk most people miss. In February 2026, the US District Court for the Southern District of New York ruled in a securities-fraud case that 31 defense-strategy documents the defendant had prepared with Claude were neither protected by attorney-client privilege nor covered by the work-product doctrine, and had to be turned over to prosecutors in full. The single most important takeaway from the opinion: handing these AI conversations to a lawyer after the fact cannot turn something that was never protected into something that is. Put differently, the “self-preparation” you did before retaining a lawyer gets no retroactive legal protection just because you later hired one.

For readers unfamiliar with US law, a layer of institutional background is needed first, otherwise the rest of the discussion lands in the wrong place.

What Attorney-Client Privilege Is, and Why This Matters on Its Own

The US concept of attorney-client privilege is not the same thing as what people usually call a lawyer’s “duty of confidentiality.” The duty of confidentiality lives at the level of professional ethics: a lawyer may not, on their own initiative, disclose a client’s affairs. Privilege lives at the level of the law of evidence: in court proceedings or government investigations, communications between client and lawyer that meet specific conditions cannot be compelled from you or your lawyer, and the other side cannot subpoena the lawyer to testify about them; even if those communications are seen by the other side because of a mistake (an email misdirected, a file produced by accident), you can still ask the court to exclude them from evidence, and the privilege itself is not lost by a single accidental leak. In short, the duty of confidentiality constrains what a lawyer should say; the privilege decides whether a judge can force you to say something and whether opposing counsel can use that communication as evidence.

The distinction is invisible in ordinary life, becomes crucial the moment you are drawn into a dispute, and shows its real power only when the party involved is at fault. Suppose after a car crash you tell your lawyer: “actually that day I glanced at my phone and only looked up once the light turned green.” If that conversation is privileged, opposing counsel cannot drag it out in litigation. Even if they somehow obtained a copy through some other route, the court would refuse to admit it. If it is not privileged, that admission has the same legal status as a WeChat message sent to a friend: it can be demanded, and it can become the evidence that puts full liability on you.

The logic of privilege is not “protect the innocent.” It is “even if you really did something wrong, you should be able to discuss how to handle it with your lawyer without holding back,” because only when the client tells the lawyer the truth can the lawyer build a genuinely useful defense. This is an incentive structure the US legal system designed on purpose. US civil litigation has a regime called discovery whose intensity far exceeds that of many other jurisdictions: once litigation begins, both sides are essentially obligated to hand over any written material related to the case, including emails, text messages, cloud-storage documents, and chat histories. Privilege is one of only a handful of exceptions carved out of that wide net.

The relationships that can enjoy privilege are limited: attorney-client, doctor-patient, therapist-patient, clergy-penitent, spousal. The core elements are all similar: the relationship itself carries a legally recognized expectation of confidentiality, the content of the conversation serves the professional purpose of that relationship, and the channel of communication has not been unnecessarily exposed to third parties. If any one element fails, the privilege does not attach.

Now put this institutional framework against the use case at the top of this article: using ChatGPT to organize your case before hiring a lawyer. First, Claude or ChatGPT is not a lawyer, so the “attorney-client” relationship premise fails. Second, the content you input is retained by the platform, may be used for training, and may be turned over to third parties under legal process, so the expectation of confidentiality does not hold. Third, these conversations happen before you have hired a lawyer, so no attorney direction exists. All three elements fail. Judge Rakoff’s February 2026 ruling made this logic explicit in a judicial opinion for the first time.

One more counterintuitive layer: this cannot be cured by “showing the document to a lawyer.” The lay intuition runs: if I later hire a lawyer, show them this AI-drafted memo, and we discuss it together, the memo becomes part of the lawyer’s work and should be protected. The judge explicitly rejected this intuition: “Non-privileged communications are not somehow alchemically changed into privileged ones upon being shared with counsel.” In other words, sharing a non-privileged document with counsel after the fact does not make it privileged. The line has been quoted again and again in law-firm client alerts, because it punctures exactly the psychological safety net ordinary people most rely on.

The core judgment of this article in one sentence: in the US legal system, any “legal-flavored” self-preparation done today with a consumer version of ChatGPT or Claude (whether before or after retaining a lawyer) should be treated as written material that may be read by opposing counsel, prosecutors, or regulators in the future. Not because it will definitely be read, but because the legal firewall that would stop others from reading it does not exist.

The Ruling Itself: Facts and a Three-Layer Argument

On February 17, 2026, Judge Jed S. Rakoff of the Southern District of New York issued a written opinion in United States v. Heppner (No. 25-cr-00503-JSR). Full Memorandum The defendant Bradley Heppner, former chairman of GWG Holdings (a now-bankrupt retail bond issuer), was charged with misappropriating more than $150 million of investor funds through shell companies. DOJ case information The indictment was returned by a grand jury on October 28, 2025, unsealed with a summons on November 4, and Heppner was arrested the next day. On the day of his arrest the FBI searched his Dallas home and found 31 conversation logs with the consumer version of Claude on his electronic devices. The content consisted of Heppner asking Claude to help draft defense strategies, discussing “how to mount a defense to the indictment’s facts and what legal arguments could be made.” These conversations occurred after he received the grand jury subpoena but before the formal indictment. His lawyer later confirmed that “counsel did not direct Heppner to run Claude searches,” meaning he went to Claude entirely on his own. BKLW case analysis

This fact pattern is the “ask AI first, then hire a lawyer” scenario from the top of this article, magnified to criminal scale. Heppner’s chain of behavior is isomorphic to that of the vast majority of individual AI users: use consumer AI to organize your thinking, produce a draft you can show a lawyer, then take it to the lawyer. The only difference is that he was already under criminal investigation with nine figures at stake, so the government simply showed up with a search warrant. For ordinary readers trying to understand the reach of this ruling, the point is not the severity of his case but why this chain of behavior fails legally at every step.

Rakoff’s reasoning runs in three layers, each sufficient on its own to defeat privilege.

The first layer is the basic element of attorney-client privilege: the communication must occur between a lawyer and a client. Rakoff wrote directly: “In the absence of an attorney-client relationship, the discussion of legal issues between two non-attorneys is not protected by attorney-client privilege. … Because Claude is not an attorney, that alone disposes of Heppner’s claim of privilege.” (Memorandum, pages 6–7) The Reuters report offered an even sharper rendering: “No attorney-client relationship exists, or could exist, between an AI user and a platform such as Claude.”

The second layer is the expectation-of-confidentiality element. Rakoff specifically invoked the text of Anthropic’s consumer privacy policy, finding that the policy clearly provides that Anthropic collects inputs and outputs, uses them to train models, and reserves the right to disclose them to government regulators and other third parties in matters involving claims, disputes, or litigation. Since the platform’s own terms announce the possibility of disclosure, the user “could have had no reasonable expectation of confidentiality.” This reasoning has nothing to do with the specific search warrant. The policy itself destroys the expectation of confidentiality. Dentons ruling analysis

The third layer is “no after-the-fact cure,” and it is the part that hits ordinary users hardest: “Non-privileged communications are not somehow alchemically changed into privileged ones upon being shared with counsel.” (Quoted in summaries from the Freshfields blog and NatLaw Review.) Under the same logic, the work-product doctrine also fails. That doctrine requires the material to be prepared “by or at the direction of counsel” in anticipation of litigation, and Heppner’s self-initiated actions do not meet the element.

The Judge Left Two Doors Open, But Neither Opens for Ordinary Users

Rakoff explicitly preserved two exceptions in the opinion.

The first door is the Kovel extension doctrine. If a lawyer formally directs a client to use AI as a translator, assistant, or data-analysis tool (by analogy with the accountant retained by the lawyer in United States v. Kovel in 1961), the AI might be treated as the lawyer’s “agent” and preserve the chain of privilege. Rakoff wrote: “Had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.” (Memorandum, pages 7–8) This door is useful for lawyers and almost useless for individual users. An ordinary person talking to ChatGPT usually has no lawyer directing anything.

The second door is enterprise versions / zero-data-retention agreements. Rakoff did not rule on this directly, but hinted in dicta that if the contractual terms of an AI service promise no training use, no external disclosure, and zero data retention, the confidentiality element might be satisfied. O’Melveny’s analysis put it more bluntly: “some generative AI tools (including Claude) offer enterprise versions that provide assurances of confidentiality and make clear user inputs will not be used for training—which could at least arguably give rise to a ‘reasonable expectation of confidentiality’.” O’Melveny client alert But this door is essentially closed to individual users as well: OpenAI’s ChatGPT Enterprise, Anthropic’s Claude for Work, and Google Workspace Gemini are all organizational subscription products starting at tens of thousands of dollars per year, and ordinary users do not hold those contracts.

Harvard Law Review commentator Elizabeth Guo identified the contradiction between these two doors: the ruling raises two thresholds at once, requiring both the vendor architecture to pass muster and the lawyer to be formally involved, leaving the daily AI use of ordinary clients caught in the middle with no safe harbor. Harvard Law Review

The Contrary Precedent: It Comes Down to Who Is Holding the Pen

Heppner is not the only precedent. Magistrate Judge Anthony Patti of the Eastern District of Michigan, on February 10, 2026 (hours before Heppner’s bench ruling), reached the opposite conclusion in Warner v. Gilbarco: a pro se plaintiff without legal representation, who drafted her employment-discrimination complaint using ChatGPT on her own, had produced personal work-product that did not have to be turned over to the defendant employer. Patti wrote: “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.” Warner v. Gilbarco ruling

Going further back, in the August 2024 Northern District of California case Tremblay v. OpenAI, Judge Martínez-Olguín held that ChatGPT prompts carefully crafted by plaintiffs’ counsel qualified as “opinion work product,” because the prompts embedded the lawyer’s mental impressions and were therefore protected. Tremblay ruling

Baker Donelson distilled the distinction between the two sets of cases into one sentence: it comes down to who is holding the pen. When the lawyer holds the pen, the conversation carries the lawyer’s thinking, and work-product protection may attach. When the client holds the pen and consults AI independently, it does not. The split in the cases is not about whether AI conversations can ever be protected; it is about whether the source of the conversation is the lawyer’s thinking. For most ordinary people who use ChatGPT as a daily assistant, the answer consistently points one way: protection does not attach.

Platform Policies: All Three Vendors Sit on the Same Line

The intuition that “I pay for a subscription, surely I should get more protection” needs to be checked directly against the privacy-policy text.

OpenAI’s privacy policy (February 6, 2026 version) is quite candid: OpenAI Privacy Policy

“If we are legally required to retain your data (for instance, we receive a lawful subpoena) then we may retain it for the duration of the relevant legal or regulatory obligation.”

Its Law Enforcement Policy provides that for non-content data (account information) a subpoena or equivalent instrument is required; for content data (your prompts and outputs), a search warrant or equivalent is required. The transparency report for the second half of 2025 shows OpenAI received 224 non-content requests (146 disclosed, covering 307 accounts) and 75 content requests (62 disclosed, covering 84 accounts). These numbers are not hypothetical; this is already happening.

In August 2025 Anthropic updated its consumer terms to introduce “opt-in training consent”: users who opt in to training see their data retention extended from 30 days to 5 years; users who do not opt in stay at 30 days. Business customers are not used for training by default. Anthropic policy update On government requests, Anthropic’s Law Enforcement Requests page states: “Anthropic PBC discloses account records solely in accordance with our Terms of Service and applicable law.” Valid legal process opens the same door.

Google Gemini’s consumer default retention is 18 months, and conversations used for human review may be kept independently by Google for up to 3 years, even after the user turns off Activity. Gemini Apps Privacy Hub

The policy architecture of all three companies is structurally identical: they all collect prompts and outputs, they all reserve the right to respond to lawful process, and they all treat “notify the user when legally permitted” as a soft commitment. The reasoning in Heppner that invokes Anthropic’s policy would carry over almost word for word onto ChatGPT or Gemini. STACK Cybersecurity summarizes it this way: the basis for the ruling is not the “Claude” brand but the privacy-policy architecture of consumer-grade AI itself; swap the logo, the reasoning holds. STACK analysis

Three Concrete Scenarios for Ordinary Users

The first is the scenario at the top of this article: actually “asking AI first” before hiring a lawyer. This is the scenario Heppner most directly punishes and the one most easily underestimated. Before booking a lawyer, you use ChatGPT to walk through the entire matter from start to finish, ask it to analyze your weak spots, predict the other side’s likely angles of attack, and produce a timeline-based factual statement. The whole process looks like responsible preparation that also saves on billable hours and lets the lawyer engage the case more efficiently.

Legally, this preparation document has the same status as any Word document sitting in your cloud storage, and in fact it is more fragile: it exists not only on your own devices but also on the AI vendor’s servers. If the dispute later escalates into litigation, opposing counsel has two routes to obtain it: demand in discovery any AI-tool conversation records related to the case, or serve a third-party subpoena on OpenAI or Anthropic. Showing the conversation to your own lawyer and discussing it together changes nothing about the document’s legal character. It remains non-privileged communication.

The more concrete harm lies in the content of the conversation itself. When people organize case facts for an AI, they tend to speak more plainly than they would face to face with a lawyer, because a machine that seems not to judge them is easier to open up to. A party to a contract dispute might type, “I know I signed it a bit impulsively, and I really didn’t read the liquidated damages clause carefully.” This sentence is the exact opposite of the position he needs to hold in court. Once the conversation is produced in discovery, opposing counsel will print that sentence out as a demonstrative exhibit. The irony is that the more seriously someone prepares and the more honestly they feed details to the AI, the more explosives they plant for themselves. Face to face with a professional lawyer, even an honest client would be guided by the lawyer’s “wait, hold that part, let me ask a specific question first” into speaking with more structure. AI currently does not provide that kind of protective steering.

The second is daily AI use in employment and contract disputes. You drafted a resignation letter with ChatGPT, analyzed which clause of a supply contract cut against you, let AI reply to a customer email that made you angry. These seemingly ordinary uses all become discovery targets when labor arbitration, contract breach, or commercial litigation actually arrives. Fisher Phillips wrote directly in its 2026 client alert: “AI-generated ESI, especially from notetakers, meeting summaries, auto-drafted emails, and chat assistants, is becoming a core discovery battlefield in employment cases.” Fisher Phillips alert Ward and Smith’s alert to divorce clients was even blunter: “Receiving a subpoena demanding ‘all communications with AI-based tools, including prompts, inputs, and outputs’ related to your divorce or custody dispute is now a realistic possibility.” Ward and Smith

The third is using AI as an emotional outlet. Sam Altman acknowledged the gap himself on Theo Von’s podcast in July 2025: “People talk about the most personal sh** in their lives to ChatGPT… if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it… And we haven’t figured that out yet for when you talk to ChatGPT.” TechCrunch 2025-07-25 Licensed therapists are covered by therapist-patient privilege, doctors by physician-patient privilege, clergy by clergy privilege, and in most US state laws these privileges strictly protect conversation content. A conversation with ChatGPT falls under none of these privileges. If you later end up in a custody battle, a personal-injury claim, or an insurance dispute, the legal barrier for opposing counsel to obtain your emotional confessions to an AI is very low. And these conversations often contain legally very expensive admissions: “I know I had a little to drink that day so my reactions behind the wheel were slow,” “honestly I have lost it with my kid once or twice.”

New Clauses Lawyers Are Now Writing into Their Contracts

The practicing bar moved fast. On the day of the Heppner ruling, the New York firm Sher Tremonte published a client alert and wrote the following into its engagement letters: “disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege” (as relayed by Reuters). Kobre & Kim partner Alexandria Gutiérrez Swette’s standard line has already been quoted in major newspapers: “We are telling our clients: You should proceed with caution here.”

Debevoise & Plimpton’s operational advice to law-firm clients: if AI research truly must be done at a lawyer’s direction, open the conversation with “I am doing this research at the direction of counsel for X litigation,” creating a written anchor for a possible future Kovel agency argument. Debevoise client alert But understand that this is a preventive measure lawyers take against uncertainty, not a legally recognized exemption, and its intended audience is represented clients, not ordinary users.

A Judgment Framework You Can Keep in Your Head

The case law will keep evolving. An “AI privilege” may one day be recognized by Congress or at the state level. Peter Swire, a privacy-law scholar at Georgia Tech, has proposed granting privilege only in limited circumstances where “the chatbot explicitly assumes the role of a lawyer or doctor,” rather than extending it wholesale to all AI interactions. Until legislative recognition arrives, the following rules are enough for ordinary users:

Default assumption 1: treat every conversation you have with a consumer version of ChatGPT / Claude / Gemini as written material that someone else may read in the future. Not because it will certainly be read, but because you should only write what you can live with being read. The risk class is email, SMS, and WeChat messages, not private diaries or conversations with a lawyer.

Default assumption 2: a paid subscription is not confidentiality. What actually makes conversations harder for a platform to use for training are Enterprise, Edu, and API + ZDR agreements, not individual tiers like Plus / Pro / Team. Organizational subscriptions usually require legal teams to negotiate the contract.

Default assumption 3: deletion is not disappearance. Platforms facing litigation hold orders, regulatory investigations, or criminal search warrants may be required to retain and even produce data the user thought was already deleted. The protective strength of features like Temporary Chat depends on whether the platform is currently under a legal hold; it is not something you can rely on long term.

Default assumption 4: when you are actually drawn into litigation or an investigation, stop talking to AI independently about the case. Even if you think you’re “just trying to clarify your thinking.” Rakoff’s ruling says this in the plainest possible language: handing it to a lawyer after the fact does not detoxify it. If your lawyer advises you to use AI, have them put “at my direction” in writing; if they don’t, stop.

Default assumption 5: for emotional, therapeutic, and highly private content, be cautious about using AI. Not because the AI will actively leak anything, but because once you enter any legal process (even just an insurance claim or a workers’ comp filing), these contents lack the legal firewall that therapist-patient privilege provides.

What matters about this wave of case law is not how many ordinary people will be dragged into court. Most will not. What matters is that AI is becoming a tool in an intermediate state: “more like a diary than email, but more like evidence than a diary.” The Heppner ruling drops the first anchor on this question: in the eyes of the legal system, it sits closer to the evidence side. This positioning will shape how AI is designed, regulated, and used in daily life over the next decade. Before legislation arrives, adjusting your usage instincts is a lot cheaper than looking up the rules after something happens.