Bogus AI-Generated Case Citations Abound.
Posted by Ed Folsom, May 10, 2026.
Problems are popping up all around us with generative artificial intelligence (AI) in the legal field. What those problems disclose about law practice doesn’t flatter the profession. At the same time, generative AI presents a novel ethical problem stemming from well-crafted total fabrications produced by non-human authors. While these non-human authors lack the consciousness to tell lies, some of what they tell us is so entirely untrue that no human could create the same deception without lying.
No human author could create a citation to a non-existent case without consciously choosing to lie, without consciously choosing to fabricate the citation. But when generative AI “hallucinates” a case citation, it is possible for a lawyer to include the bogus citation in a legal document, furthering the falsehood, without engaging in a lie. When an AI-generated deception is included in a legal pleading, who should pay the price of the deception, and what should the price be?
Michael Cohen’s Case.
The first time I recall hearing about bogus case citations being submitted to a court and attributed to AI was back in 2023. Disgraced attorney Michael Cohen was seeking early termination of supervised release on his federal sentence for tax evasion, false statements, and campaign finance violations. Cohen’s early-termination motion contained citations to non-existent cases. The judge was not happy with Cohen’s attorney, David Schwartz, for including the citations in his pleadings on Cohen’s behalf.
Cohen claimed he used Google Bard, an AI service, to do the legal research and then forwarded the results to Schwartz. He claimed he thought Schwartz would check the cites before submitting the materials to the court (I can’t believe Cohen didn’t check the cites himself, to make sure the cases, if they really existed, didn’t cut against him).
Schwartz claimed that he thought another attorney representing Cohen, Danya Perry, had checked the cites, while Perry claimed she had nothing to do with the pleadings and didn’t understand why Schwartz would blame her. This tells us that the primary concern of Cohen, Schwartz, and Perry as individuals was to not have the Court believe they personally fabricated the citations. That’s because intentionally lying to a court is even worse than misleading it through incompetence. Instead, Cohen was happy to lay the falsehood off to AI, and Schwartz was happy to lay it off to AI or Cohen or Perry, while Perry was happy to lay it off to anyone or anything other than herself. Secondarily, by pointing the finger at each other, each attempted to lay the blame on someone else for the fact that totally fabricated falsehoods, regardless of who or what created them, were included in legal pleadings attributable to them.
The Court did not impose any sanction on Schwartz for including the bogus case citations, declaring that the mistake did not rise to the level of bad faith but instead represented an embarrassing degree of negligence.
That was back in 2023-2024. It has since become a regular thing for lawyers to submit bogus AI-generated citations to courts all over the world.
Jessica Fuller v. Hyde School: Bogus AI in Maine.
Closer to home, yesterday I read about a sanctions order in U.S. District Court, D. Maine, in the case of Jessica Fuller v. Hyde School, involving bogus AI-generated citations submitted to the court by Fuller’s attorney, Kelly Guagenty.
Fuller is suing the Hyde School for alleged mistreatment during her time as a resident student at Hyde years ago. In October of 2025, Hyde moved for dismissal of the case for alleged failure to state a claim upon which relief may be granted. Attorney Guagenty filed a response. In turn, Hyde’s attorney filed a reply, alleging that certain of the cases Guagenty cited did not stand for the propositions she cited them for, and that certain quoted passages Guagenty attributed to those cases did not actually exist in them. Guagenty then filed a “Notice of Errata,” as if to point out and correct certain errors in her response to Hyde’s motion to dismiss.
More recently, in March, Guagenty filed a pleading with the Court that attributed the problems to AI-generated inaccuracies that were not detected by multiple attorneys in her firm during the drafting and proofreading processes. Guagenty took responsibility, expressed remorse, enrolled in a course about the perils of generative AI, and created a law firm policy to guard against repeating the errors.
On May 5, 2026, U.S. District Court Judge Stacey Neumann issued an Order Imposing Sanctions on Guagenty. In that Order, Neumann points out:
“After independently reviewing the sources cited in both the Response and the Notice of Errata, the Court found that several cases were cited for unsupported propositions; some quoted language could not be located in the cited sources; and other quotations contained inaccurate pincites [citations to the quoted passage’s specific location in the case], all consistent with the Defendants’ objections.”
However, Guagenty satisfied the Court that she accepts responsibility, is remorseful, and has taken steps to guard against a repeat performance. The Court did not impose a monetary sanction, instead ordering Guagenty to show proof when she completes the course on generative AI and proof that she in fact has a written policy in place at her firm to prevent such errors in the future. The Court also ordered Guagenty to show her client, Fuller, a copy of the sanctions order.
At least the cases Guagenty cited actually exist. The passages Guagenty quoted might not exist within the cases she cited (whether they exist anywhere outside a generative AI hallucination, we do not know), but at least the cases exist. Assuming that the quoted passages exist within some case, somewhere, even without generative AI a really lousy lawyer or paralegal could have produced the same slop without actually lying: not by intentionally fabricating from whole cloth but merely by being sloppy and inattentive. In that sense, the AI-generated materials in Jessica Fuller v. Hyde School might not represent a problem that’s entirely new and unique to AI-generated pleadings.
Bogus AI in Georgia: State v. Payne.
But then there’s that other case that was in the news again recently, the Georgia Supreme Court case of State v. Payne. That one, according to the Court’s Chief Justice at oral argument, involves a trial court’s order denying a motion for a new trial in a murder case that contains:
“at least five citations to cases that don’t exist, and there’s at least five more citations to cases that do not support the proposition for which they’re cited, including three quotations that don’t exist.”
The Hyde School case and Michael Cohen’s case show that lawyers sometimes submit pleadings to courts but don’t read the cases they cite to ensure that the cases (1) exist and (2) say what the lawyer claims they say. The Hyde School and Michael Cohen cases also show us that this sometimes causes bogus, AI-generated case cites, quotations, etc., to slip past multiple layers of corner-cutting lawyers.
Prior to AI, I suspect it was extremely rare to find lawyers simply making up cases or quotations and including them in their pleadings. It must have been rare because there would be little to no advantage to it, given the high likelihood of detection by opposing counsel or the court. But as the Michael Cohen and Hyde School cases demonstrate, it is not rare for bogus citations, once introduced into the process, to pass undetected through multiple layers of lawyers.
State v. Payne adds a couple of new wrinkles. First, it shows us that judges sometimes have prosecutors write court orders denying defense motions in homicide cases. Before Payne, I was unaware of this dangerous practice. Second, we learn that the slipshod corner-cutting that shuttles bogus citations along through the legal process extends not just to the lawyers who submit pleadings to the court, but to judges who likewise fail to fact-check the case citations or quoted passages.
And as the cherry on top, the bogusness that gets passed through the entire system and into a court order might be an outright fabrication – a citation to a non-existent case or non-existent passage within a cited case. Without AI, such a complete fabrication could only end up contaminating the system if it was fabricated by a person consciously choosing to lie.
In Payne, Assistant District Attorney Deborah Leslie submitted a proposed order to the Court denying the defendant’s motion for new trial. Leslie’s order was drafted using generative AI, hence the five citations to non-existent cases, the quoted passages that don’t exist, and the citations to cases that don’t support the propositions for which they are cited. It’s possible that another person within the D.A.’s office generated the proposed order for ADA Leslie and Leslie just didn’t check the cites, similar to attorney Guagenty and the lawyers in her firm.
Whether or not Leslie is the lawyer who worked with the AI in the first place, she clearly compounded her problems at oral argument. When the Chief Justice asked her whether the bogus citations were in the version of the order that she submitted to the trial court, she answered that she did not believe so and was unaware of any, but would “be glad to research that and provide the court with a supplement.”
As a sanction for including the bogus cites in what ultimately became the trial court’s order, the Georgia Supreme Court suspended ADA Leslie from practicing before that court for six months. The Court also vacated the trial court’s order and directed the trial court to issue a new order not prepared by the State or the Defendant. Imagine that.
In addition to the Georgia Supreme Court’s sanction, ADA Leslie also now faces a bar ethics complaint and discipline within the D.A.’s Office.
But how could the judge also have failed to check the citations before issuing the court’s order in this homicide case? As to that, the Georgia Supreme Court issued this caution:
“We strongly encourage trial courts to carefully review proposed orders with the understanding that artificial intelligence software, with all of its potential risks and benefits, may have been used to prepare such proposed orders.”
Here’s hoping that the judge faces meaningful discipline for failing to act as a judge, instead of a rubber stamp, in such a serious matter. It’s as if the AI lied and the judge swore to it, except AI isn’t conscious, so it can’t lie. Unlike AI, the human actors involved in the process are conscious beings and have to be incentivized not to lapse into unconsciousness when they write and review the pleadings and the orders that they sign.
As Judge Neumann, here in Maine, said in her order in Jessica Fuller v. Hyde School, quoting Park v. Kim, 91 F.4th 610, 615 (2d Cir. 2024) [check out Park v. Kim and see how easy it is to do that]:
“an attorney must, at a minimum, read, and thereby confirm the existence and validity of, the legal authorities on which they rely” [internal quotation marks and citation omitted].
This is not a new proposition.
Who should bear the cost of AI fabrications that would be a lie if they were created by a human, or of misattributions that could equally arise from sloppy lawyering without AI? The lawyer who signs the pleadings must always be held responsible, even if no one else is. What should the price be? A price high enough to ensure that lawyers who sign submissions to the court confirm the existence and validity of the cases they rely on.
Yes, all you postmodern relativists, there is such a thing as truth, and it includes whether the cases you cite exist and whether they say what you quote them as saying. This is no mere construct. That must remain entirely clear.
