Stories of lawyers submitting AI-hallucinated caselaw to courts have become ubiquitous in 2025. Most lawyers think “that will never be me” because they know not to rely on AI for legal research. But a potentially greater risk lurks in the shadows of this rapidly evolving technology: the use of AI tools in the discovery process.
Lawyers in Hot Water for AI Misuse
These incidents have involved lawyers at firms of every size, from small shops to large, and judges have repeatedly imposed sanctions when lawyers include nonexistent or misstated caselaw in their submissions.
Punishments have ranged from admonitions to monetary fines to the more creative, such as requiring counsel to provide a copy of the sanctions order to each of their clients, opposing counsel, and presiding judges in their other cases. Johnson v. Dunn, No. 2:21-CV-1701-AMM, 2025 WL 2086116 (N.D. Ala. July 23, 2025). A California court recently went one step further, criticizing opposing counsel for failing to bring their adversary’s AI-generated caselaw to the court’s attention. Noland v. Land of the Free, L.P., No. B331918, 2025 WL 2629868 (Cal. Ct. App. Sept. 12, 2025).
The Promise of AI Discovery Tools
Misusing AI for legal research is only one of several pitfalls this new technology has created within the litigation process. Over the past couple of years, many service providers in the electronic discovery space have rolled out AI tools that carry both enormous promise and enormous risk.
By any measure, AI brings enormous benefits to the discovery process. If you ask litigators (and their clients) what they dislike most about litigation, many would put document discovery high on the list. Document discovery is time-consuming and expensive, especially with the ever-expanding scope of what might be included, such as phone data, recorded videoconferences, and now even AI prompts and outputs.
Tools are now readily available that allow practitioners not only to search more effectively for relevant materials, but also to automatically categorize documents, such as for responsiveness or privilege.
Just as lawyers must proceed carefully if they choose to rely on AI-generated legal research, they must also exercise caution when using AI discovery tools.
In the words of former Magistrate Judge Andrew Peck, writing about then-emerging computer-assisted review, technology “is not a magic, Staples–Easy–Button, solution appropriate for all cases.” Moore v. Publicis Groupe, 287 F.R.D. 182, 189 (S.D.N.Y. 2012). Rather, “the technology exists and should be used where appropriate, but it is not a case of machine replacing humans.” Id.