
Why Using AI for your Legal Case may be a Terrible Idea

  • Writer: Daniel Meyerowitz-Katz

With the recent boom in generative artificial intelligence ("AI") and large language models ("LLMs"), it should probably come as no surprise that it has become increasingly (and unfortunately) common for me to receive instructions from clients that appear to have been generated using consumer AI products such as OpenAI's ChatGPT, Anthropic's Claude, Microsoft's Copilot, xAI's Grok, and the like.


A couple of years ago I wrote about the problems with using consumer AI models for legal advice. The potential for "hallucinations" does now seem to be more widely known, but sadly it has not yet seeped into everyone's consciousness. I recently had to spend hours going through AI-generated documents provided by a client, only to write a lengthy response explaining that the documents were not useful and contained a good deal of incorrect information. As I told the client, providing those documents to me was a waste of my time and of the client's money.


However, unnecessary costs were not the only thing the client risked by using AI to generate instructions for her lawyer. That was demonstrated by a recent decision of the United States District Court for the Southern District of New York: United States v. Heppner, 1:25-cr-00503 (S.D.N.Y. Feb 17, 2026), ECF No. 27.


In that case the accused, Heppner, had input privileged information received from his legal counsel into Anthropic's Claude, and had used Claude to prepare documents relating to his proposed defence strategy, which he intended to share with his lawyers.


Rakoff J held that the Claude documents were not privileged on various grounds. Most significantly, his Honour made the following observations in relation to the confidentiality of the documents:


'the communications memorialized in the AI Documents were not confidential. This is not merely because Heppner communicated with a third-party AI platform but also because the written privacy policy to which users of Claude consent provides that Anthropic collects data on both users' "inputs" and Claude's "outputs," that it uses such data to "train" Claude, and that Anthropic reserves the right to disclose such data to a host of "third parties," including "governmental regulatory authorities." ... The policy clearly puts Claude's users on notice that Anthropic, even in the absence of a subpoena compelling it to do so, may "disclose personal data to third parties in connection with claims, disputes[,] or litigation." Id. More generally, as another court in this District recently observed, AI users do not have substantial privacy interests in their "conversations with [another publicly accessible AI platform] which users voluntarily disclosed" to the platform and which the platform "retains in the normal course of its business." In re OpenAI, Inc., Copyright Infringement Litig., No. 25 MD 3143, ECF No. 1021 at 3 (Jan. 5, 2026). For these reasons, Heppner could have had no "reasonable expectation of confidentiality in his communications" with Claude. See Mejia, 655 F.3d at 132-34. And the AI Documents are not like confidential notes that a client prepares with the intent of sharing them with an attorney because Heppner first shared the equivalent of his notes with a third-party, Claude. Cf. United States v. DeFonte, 441 F.3d 92, 95-96 (2d Cir. 2006) (per curiam).'


That decision is a strong warning to anyone (lawyers and clients alike) considering using an AI model to assist in preparing legal advice, pleadings, evidence, or other documents that may contain privileged information. That is not to say that AI can never be useful for those purposes. But if you are going to use it, you need to read the terms and conditions carefully, use a model with enterprise-grade privacy and data protection, and disable any setting that allows your information to be used to "train" the model. Otherwise there is a significant risk that you will inadvertently waive privilege over both the information you input and the documents the LLM generates. A rough illustration of the point about how you access the model appears below.
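
For what it is worth, the safer ways of experimenting tend to involve paid API or enterprise access rather than the free consumer chat products, because it is the applicable commercial terms and account settings (not anything in the prompt itself) that govern whether your inputs are retained, used for training, or disclosed. By way of rough sketch only (the model name below is an assumption, and you should verify the provider's current terms and settings yourself), accessing Claude through Anthropic's API looks like this:

    # Illustrative sketch only: calling Claude via Anthropic's API rather
    # than the consumer chat product. Note that nothing in this code itself
    # prevents retention, training, or disclosure; that is a matter of the
    # contractual terms and settings applying to YOUR account.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model ID; check current docs
        max_tokens=1024,
        messages=[
            {"role": "user", "content": "Summarise this non-privileged chronology: ..."},
        ],
    )
    print(response.content[0].text)

The point of Heppner, though, is that the analysis begins and ends with what the terms you have actually agreed to say about confidentiality and disclosure, so reading them remains step one.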



 
 
 

