Your ChatGPT Conversation Could Be Used Against You in Court: A New Federal Ruling Says Talking to AI Isn’t Like Talking to Your Lawyer

Most people assume that what they type into an AI chatbot stays between them and the machine. A new ruling from a federal judge in Manhattan says otherwise — and it could affect anyone who’s ever asked ChatGPT, Claude, or any other AI tool for help with a legal problem.

What Happened?

On February 10, 2026, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York ruled in United States v. Heppner that documents a defendant created using a non-enterprise consumer version of the AI chatbot Claude are not protected by attorney-client privilege or work-product doctrine — two of the most important legal shields people rely on to keep sensitive information out of the hands of opponents in court. [1]

Bradley Heppner, a Dallas financial executive facing fraud charges, had used Anthropic’s Claude to research legal questions about the government’s investigation into him. He fed information he’d received from his own lawyers at Quinn Emanuel into the chatbot, generated 31 documents of AI prompts and responses, and later handed those documents over to his legal team.

When the FBI seized those documents during a search of his home, Heppner’s lawyers argued they were privileged. The judge disagreed — on every count.

Why Didn’t Privilege Apply?
In everyday terms, “attorney-client privilege” means that private conversations between you and your lawyer can’t be forced into evidence. It’s one of the oldest protections in the legal system. But Judge Rakoff found four reasons it doesn’t extend to AI conversations:

  1. AI is not a lawyer. It has no law license, owes you no duty of loyalty, and can’t form an attorney-client relationship. Legally, asking an AI chatbot for legal guidance is no different from talking through your case with a friend at a coffee shop.
  2. AI doesn’t claim to give legal advice. Anthropic’s own materials say Claude is designed to avoid “giving the impression of giving specific legal advice.” You can’t claim a tool gave you legal advice when the tool itself says it doesn’t do that.
  3. Your conversations aren’t confidential. This is the big one. Anthropic’s terms say user inputs may be disclosed to government authorities and used to train AI models. The same is true for OpenAI’s ChatGPT. Judge Rakoff found there was simply no reasonable expectation of privacy. As he put it, the platform “contains a provision that any information inputted is not confidential.”
  4. You can’t make something privileged after the fact. Heppner created the AI documents on his own, then sent them to his lawyers. But passing unprivileged materials to your attorney doesn’t magically cloak them in privilege. That’s a long-settled legal principle.

The “work product” defense — which protects materials prepared in anticipation of litigation — also failed because Heppner’s lawyers admitted they never directed him to run the AI searches. He did it on his own.

The Bigger Problem: Privilege Waiver
Perhaps the most alarming part of the ruling is what it means for information that was originally privileged. Heppner took things his lawyers told him — genuinely privileged attorney-client communications — and typed them into Claude. The judge agreed with prosecutors that doing so may have waived the privilege over those original conversations entirely.

In other words, by sharing privileged information with an AI chatbot, you might not just lose protection over the AI conversation — you could lose protection over the underlying lawyer-client discussion, too.

Does Paying for a Subscription Help?
Not much. Both Anthropic and OpenAI use conversations from their free and individual paid plans to train their models by default. Users can opt out of training, but opting out doesn’t eliminate the platforms’ rights to share your data with government authorities or in response to legal demands.

Only enterprise-tier agreements — the kind large organizations negotiate with specific contractual confidentiality protections — may change the picture. A $20-a-month subscription does not buy you privilege.

Why This Matters Beyond Criminal Cases
While this case arose in a federal criminal prosecution, the reasoning applies across the board: civil lawsuits, workplace investigations, regulatory inquiries, business disputes. Anytime someone uses AI to analyze a legal problem, evaluate potential liability, or prepare for litigation, they may be creating a trail of discoverable records that the other side can obtain and use.

The Bottom Line
AI chatbots feel private. The conversational interface creates what legal commentators have called a “dangerous illusion of privacy.” But unless you’re operating under a negotiated enterprise agreement with contractual confidentiality protections, every prompt you type is a potential disclosure — and every response is a potentially discoverable document.

The message from this first-of-its-kind ruling is simple: Your AI is not your lawyer. Don’t treat it like one.

For questions about AI privilege or enterprise AI deployment, please contact the Hunt Ortmann team. Stay tuned for continued AI-related insights from Hunt Ortmann.

_________________________________________________________________________________________________

[1] It also builds upon a trend in the same court, where Judge Oetken recently ruled that 20 million ChatGPT conversation logs are likely subject to compelled production in the OpenAI copyright litigation, finding that users have a “diminished privacy interest” in their AI conversations.

AUTHORS

John D. Darling

Shareholder

Patricia J. Wolfe

Shareholder