Why Using AI for Your Legal Case May Be a Terrible Idea
- Daniel Meyerowitz-Katz

- Feb 19
- 4 min read
Updated: Feb 26
With the recent boom in generative artificial intelligence ("AI") and large language models ("LLMs"), it should probably come as no surprise that it has become increasingly (and unfortunately) common for me to receive instructions from clients that appear to have been generated using consumer AI products such as OpenAI's ChatGPT, Anthropic's Claude, Microsoft's Copilot, xAI's Grok, etc.
A couple of years ago I wrote about the problems with using consumer AI models for legal advice. The potential for "hallucinations" does seem to be more widely known now, but sadly it has not yet seeped into everyone's consciousness. I recently spent hours going through AI-generated documents provided by a client, and then had to write a lengthy response explaining that the documents were not useful and contained a good deal of incorrect information. As I told the client, providing those documents was a waste of my time and the client's money.
However, unnecessary costs were not the only thing the client was risking by using AI to generate instructions for her lawyer. That was demonstrated by a recent decision of the United States District Court for the Southern District of New York, namely United States v. Heppner, 1:25-cr-00503 (S.D.N.Y. Feb 17, 2026) ECF No. 27.
In that case, the accused, Heppner, had inputted privileged information that he received from his legal counsel into Anthropic's Claude AI model, and used Claude to prepare certain documents in relation to his proposed defence strategy which he intended to share with his lawyers.
Rakoff J held that the Claude documents were not privileged on various grounds. Most significantly, his Honour made the following observations in relation to the confidentiality of the documents:
'the communications memorialized in the AI Documents were not confidential. This is not merely because Heppner communicated with a third-party AI platform but also because the written privacy policy to which users of Claude consent provides that Anthropic collects data on both users' "inputs" and Claude's "outputs," that it uses such data to "train" Claude, and that Anthropic reserves the right to disclose such data to a host of "third parties," including "governmental regulatory authorities." ... The policy clearly puts Claude's users on notice that Anthropic, even in the absence of a subpoena compelling it to do so, may "disclose personal data to third parties in connection with claims, disputes[,] or litigation." Id. More generally, as another court in this District recently observed, AI users do not have substantial privacy interests in their "conversations with [another publicly accessible AI platform] which users voluntarily disclosed" to the platform and which the platform "retains in the normal course of its business." In re OpenAI, Inc., Copyright Infringement Litig., No. 25 MD 3143, ECF No. 1021 at 3 (Jan. 5, 2026). For these reasons, Heppner could have had no "reasonable expectation of confidentiality in his communications" with Claude. See Mejia, 655 F.3d at 132-34. And the AI Documents are not like confidential notes that a client prepares with the intent of sharing them with an attorney because Heppner first shared the equivalent of his notes with a third party, Claude. Cf. United States v. DeFonte, 441 F.3d 92, 95-96 (2d Cir. 2006) (per curiam).'
That decision is a strong warning to anyone (lawyers and clients alike) considering using an AI model to assist them in preparing legal advice, pleadings, evidence, or other documents that may contain privileged information. I am not saying that AI can never be useful for that purpose; it certainly can be. But if you are going to use it, you need to read the terms and conditions carefully, use a model with enterprise-grade privacy and data protection, and disable its ability to use your information to "train" the model. Otherwise there is a significant risk that you will inadvertently waive privilege over both the information you input and the documents the LLM generates.
**Update: 26 February 2026**
Not long after the above post was published, I came across a UK decision that made a similar point. The Upper Tribunal (Immigration and Asylum Chamber) case of UK v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC) mostly concerned disciplinary consequences for lawyers who had submitted court documents containing AI "hallucinations" (imaginary case references and the like), but the Tribunal made the following observations at [21]:
"We also observe that to put client letters and decision letters from the Home Office into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and thus any regulated legal professional or firm that does so would, in addition to needing to bring this to the attention of their regulator, be advised to consult with the Information Commissioner’s Office. Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks."
I should add that it may have been imprudent for the Tribunal to make what appears to be an unqualified endorsement of Microsoft Copilot in this context. Copilot may be more or less secure depending on the version you are using (i.e. consumer grade or enterprise grade) and your privacy settings. Always read the data policy applying to the particular AI product you are using, and make sure it is set to the maximum available privacy.