The Johnson v. Dunn sanctions order proves that firm policies and experience aren’t enough—only rigorous verification prevents AI citation disasters

[EDRM Editor’s Note: EDRM is proud to publish the advocacy and analysis of the Hon. Ralph Artigliere (ret.). The opinions and positions are those of the Hon. Ralph Artigliere (ret.). © Ralph Artigliere 2025.]
Imagine this: You’re representing a state in high-stakes prison litigation. Since 2020, your firm has received more than $40 million from the client. You have written AI policies in place, an Artificial Intelligence Committee, and a team of seasoned litigators. Still, two years after Mata v. Avianca made national headlines, firm partners are sanctioned for filing motions in federal court with citations fabricated by ChatGPT.
This isn’t a story about a solo lawyer or an under-resourced firm fumbling with new tech. In Johnson v. Dunn, 2025 U.S. Dist. LEXIS 141805 (N.D. Ala. July 23, 2025), U.S. District Judge Anna Manasco sanctioned three Butler Snow attorneys—William R. Lunsford, Matthew B. Reeves, and William J. Cranford—for submitting hallucinated legal citations, despite firmwide AI policies and warnings.
Why does this case matter when we’ve seen this story before? With at least 144 cases in the United States and 234 worldwide (as of this writing), AI citation blunders have become depressingly routine. See https://www.damiencharlotin.com/hallucinations/. But Johnson v. Dunn marks a critical escalation: Judge Manasco explicitly rejected the fines and public embarrassment imposed in earlier cases as insufficient deterrents. Instead, she imposed severe sanctions targeting individual accountability—disqualification, bar referrals, and mandatory disclosure to all clients and courts.
The case demonstrates that the legal profession’s AI citation crisis has moved beyond excuse into an era of serious consequences: career-threatening sanctions, lasting reputational damage, and a potential chilling effect on responsible AI adoption.
The Butler Snow Case: When Institutional Controls Aren’t Enough
The Johnson v. Dunn case shows how AI citation errors can occur even at the highest levels of the profession. Butler Snow is a national law firm with over 400 attorneys. It describes itself as a “dominant force” whose lawyers have achieved national prominence through “sheer, unambiguous quality.” See Butler Snow | About Us. The firm represents the State of Alabama in multiple high-stakes prison litigation matters, and the state has paid it more than $40 million since 2020 according to reporting on state spending records. See, e.g., https://www.msn.com/en-us/news/crime/judge-sanctions-lawyers-defending-alabama-s-prison-system-for-using-fake-chatgpt-cases-in-filings/ar-AA1Jf3y5.
Judge Anna Manasco’s sanctions order should be required reading for any lawyer using generative AI—and for firm leaders developing AI use policies. The case makes clear that institutional rules alone—no matter how well crafted—offer no protection when individual lawyers ignore verification duties. The court’s measured but firm sanctions reflect growing judicial recognition that AI-related misconduct is not just a policy problem; it is a problem of individual accountability.
Case Background and Stakes
The Johnson case arose from high-stakes civil rights litigation over Alabama’s prison conditions, litigation with serious public interest implications. But the sanctions involved two routine discovery motions: a motion for leave to depose an incarcerated person and a motion to compel interrogatory responses. Those two routine filings became a professional responsibility disaster when the attorneys failed to verify AI-generated citations.
The Five Citation Failures: A Complete Taxonomy
Johnson v. Dunn presents a comprehensive catalog of how generative AI can fail when producing legal citations. Across two motions, five hallucinated citations illustrate the full spectrum of AI-generated errors:
Type 1 – Wrong Case, Wrong Citation
United States v. Baker was cited as “539 F. App’x 937, 943 (11th Cir. 2013)” to support a discovery-related proposition. While United States v. Baker, 529 F. App’x 987 (11th Cir. 2013) is a real case, it concerns criminal sentencing, not discovery rights, and the citation used in the motion points to a completely different case.
Type 2 – Fabricated Citation Using a Real Case Name
Kelley v. City of Birmingham was cited as “2021 WL 1118031” for the proposition that courts refuse to delay depositions due to unrelated discovery issues. No such case exists. The only known case with that name is a 1939 Alabama traffic offense decision.
Type 3 – Complete Fabrication
Greer v. Warden, FCC Coleman I, “2020 WL 3060362,” was cited as rejecting an inmate’s deposition delay request. Neither the case nor the citation exists.
Type 4 – Fabricated Citation with Real-Sounding Style
Wilson v. Jackson, “2006 WL 8438651,” was cited for authority on delaying depositions of incarcerated plaintiffs. The citation points to an unrelated maritime injury case that makes no mention of depositions or Rule 30.
Type 5 – Real Case Name, Wrong Citation and Proposition
Williams v. Asplundh Tree Expert Co. was cited as “2006 WL 3343787” for the rule that general objections are disfavored. Though a case with that style exists, no case with that combination of style and proposition exists.
Together, these examples show how generative AI can fabricate or hallucinate citations, distort legal holdings, misattribute legitimate case names, and invent legal precedent out of whole cloth. These weren’t subtle mischaracterizations that might escape notice—they were obvious fabrications that triggered immediate judicial scrutiny.
Procedural Response and Firm Accountability
After conducting its own independent searches and finding no trace of the cited authorities, Judge Manasco issued two show cause orders—one for each motion—directed to Butler Snow and five of its attorneys. The court’s response was swift and firm: the attorneys were ordered to explain the fabricated citations on a short deadline and to appear at a sanctions hearing soon thereafter.
The attorneys responded quickly. They admitted that partner Matt Reeves had used ChatGPT to generate case citations and failed to verify them. The firm characterized the incident as “unacceptable” and requested proportionate sanctions, emphasizing that its AI use policies—adopted in June 2023—already required written approval and independent verification for generative AI tools. Butler Snow also described its broader compliance infrastructure, including the creation of an internal AI committee and the adoption of protocols to guide responsible use.
Following the court’s orders, the firm launched an extensive remedial effort. It retained outside counsel to conduct an independent review of more than 2,400 legal citations across 330 filings in 40 federal dockets. No additional hallucinated or mischaracterized citations were found. Butler Snow also implemented firmwide training on responsible AI use, reiterating that every citation—regardless of source—must be accurate, verified, and grounded in legal authority.
In the end, Judge Manasco sanctioned three attorneys—Reeves, Cranford, and Lunsford—but released two others. Associate Lynette Potter was not involved in the motions, and Daniel Chism, though copied on drafts, did not draft or review the filings. The court also declined to sanction the firm itself, crediting Butler Snow’s pre-existing AI policies and its robust post-incident response.
Individual Accountability: Why Three Attorneys Were Sanctioned
Judge Manasco analyzed each attorney’s conduct under the court’s inherent authority, which requires misconduct that is “tantamount to bad faith”—a higher threshold than negligence or even recklessness. The court found that all three sanctioned attorneys met that standard, but for different reasons, offering a case study in how professional responsibility applies to AI-related missteps.
Matthew Reeves – The AI User
Reeves admitted he used ChatGPT to generate legal citations for both motions and inserted them without verifying a single one. The court described his actions as a “complete and utter disregard for his professional duty of candor” and “reckless to the extreme.” Because he introduced the hallucinated citations in violation of firm policy and professional obligations, the court found that his conduct was tantamount to bad faith and merited sanctions. 2025 U.S. Dist. LEXIS 141805 *47-8.
William Cranford – The Filing Attorney
Cranford drafted, signed, and filed both motions, including the fabricated citations inserted by Reeves. Although he did not know the source of the citations, Cranford admitted that he did not verify them before filing. The court emphasized that lawyers bear personal responsibility for the truth of filings they sign, regardless of who added the content. It rejected the idea that this was a minor error, stating: “The insertion of bogus citations is not a mere typographical error, nor the subject of reasonable debate.” 2025 U.S. Dist. LEXIS 141805 *49-50.
William Lunsford – The Supervisory Partner
Lunsford, the practice group leader and the only attorney designated as deputy attorney general, appeared on both signature blocks despite failing to review one motion and reviewing the other only superficially. The court called his conduct “particularly egregious,” citing his “indifference to the truth and complete personal disinterest in the most basic professional responsibility.” His credibility was further undermined by his initial attempt to be excused from the show cause hearing, which the court described as trying “to leave the mess for someone else.” 2025 U.S. Dist. LEXIS 141805 *53-7.
The court’s analysis and findings reveal three distinct paths to sanctions: direct misuse of AI without verification (Reeves), abdication of filing responsibilities (Cranford), and supervisory negligence compounded by deflection (Lunsford). The lesson is clear: AI-related sanctions do not depend on who typed the prompt. They extend to every attorney who signs, supervises, or approves work product without fulfilling their independent duty to verify its accuracy.
Measured Justice: Sanctions Tailored to Individual Accountability
Judge Manasco crafted sanctions designed to deter future misconduct while recognizing the distinction between institutional policies and individual failures. The three sanctioned attorneys received public reprimand, disqualification from the case, and referral to the Alabama State Bar. To ensure the deterrent effect, they must provide copies of the sanctions order to their clients, opposing counsel, every attorney in their firm, and presiding judges in all pending cases where they serve as counsel.
The court’s reasoning reflects judicial frustration with the persistence of AI citation errors despite widespread awareness: “Mr. Cranford, Mr. Reeves, and Mr. Lunsford are well-trained, experienced attorneys who work at a large, high-functioning, well-regarded law firm. They benefitted from repeated warnings, internal controls, and firm policies about the dangers of AI misuse… . And yet here we are.” 2025 U.S. Dist. LEXIS 141805 *60-1. After reviewing previous cases of generative AI hallucinated citations, the court determined:
Having considered these cases carefully, the court finds that a fine and public reprimand are insufficient here. If fines and public embarrassment were effective deterrents, there would not be so many cases to cite. And in any event, fines do not account for the extreme dereliction of professional responsibility that fabricating citations reflects, nor for the many harms it causes. In any event, a fine would not rectify the egregious misconduct in this case.
2025 U.S. Dist. LEXIS 141805 *59.
Significantly, the court imposed no sanctions on Butler Snow itself, attorneys Daniel Chism and Lynette Potter, or the client, Defendant Dunn. This selective approach demonstrates judicial recognition that institutional AI policies, when properly implemented, can protect firms from collective punishment. The sanctions targeted only those individuals who failed to fulfill their personal verification duties, regardless of firm resources or policies.
The case establishes a framework for future AI sanctions: individual accountability with institutional protection for firms that demonstrate good faith policy implementation. Judge Manasco’s approach suggests courts will increasingly distinguish between systematic firm failures and individual professional lapses.
The Preventable Failure: When Policies Meet Reality
The Johnson case represents a perfect storm of preventable failures. The Butler Snow attorneys chose general-purpose ChatGPT over legal-specific platforms, ignored firm policies requiring verification, and failed at basic quality control: checking citations before filing. Each choice—using unsuitable tools, bypassing protocols, and abandoning core duties—compounded the risk.
The deeper irony lies in Butler Snow’s sophisticated preparation for exactly this scenario. The firm had AI policies in place since 2023, an Artificial Intelligence Committee, substantial resources, and experienced attorneys who understood the risks. Yet these institutional safeguards proved worthless when, as the court found, the individual lawyers chose convenience over compliance. The case proves that even the best policies are useless without personal accountability, a truth that applies not just to AI, but to every aspect of professional responsibility.
Citation Verification: A Timeless Professional Duty
The duty to verify case authority isn’t new—it’s as old as legal practice itself. Attorneys on both sides of the “v” and judges share responsibility for confirming the accuracy, validity, and relevance of any cited authority. This fundamental obligation became crystal clear to me in my first major hearing as a young associate.
During an expedited preliminary injunction hearing involving a $50 million coal contract bid process, opposing counsel—a prominent partner representing the city utility—read directly from a Southern Reporter volume in open court. The passage he quoted devastated our position, and the case wasn’t even cited in their responsive documents. Fortunately, I had read that same case and brought a copy to the hearing. When my turn came, I read the very next sentence that opposing counsel had omitted: “This is not a case of…” Our situation was crucially distinguishable, and the context completely changed the case’s meaning.
That moment taught me a career-defining lesson that served me both as a practicing lawyer and later on the bench: what lawyers say about cases—whether from experience, advocacy, or now from AI—isn’t always accurate. The responsibility to verify authority can never be delegated, whether to opposing counsel, junior associates, or artificial intelligence. The tools may change, but the professional duty remains constant.
The Inexcusable Pattern: Why Experience Is No Excuse
With widespread awareness of AI hallucination risks, why do experienced lawyers at sophisticated firms still make these errors? The question haunts the profession because the failures aren’t isolated incidents—they’re a pattern revealing fundamental professional blind spots despite repeated cases and broad publication of the danger.
The possible explanations are troubling: overconfidence in AI capabilities driven by provider marketing, time pressures leading to verification shortcuts, misunderstanding that firm policies eliminate individual responsibilities, or simple technology illiteracy. But these explanations ring hollow when applied to experienced attorneys at well-resourced firms with established AI protocols.
The continuation of these cases suggests something more concerning than individual failures—a systemic disconnect between professional awareness and practice. Lawyers who would never file a brief without spell-checking are somehow comfortable submitting unverified citations generated by AI. The profession knows the risks, has the resources to prevent them, and understands the consequences of failure. Yet the pattern continues, making each new case not just a professional failure but an inexcusable abdication of basic competence.
Ironically, all major AI platforms include prominent warnings about potential errors, making the continued failures even more baffling. The profession’s response to this crisis will determine whether AI becomes a valuable tool or a source of systemic professional embarrassment.
Why AI Fails at Legal Citations: The Technical Reality
Lawyers often anthropomorphize AI, assuming it “thinks” and “reasons” like humans. In reality, AI systems predict the next most likely word based on statistical patterns learned from training data, not through legal reasoning. It’s like asking a first-year law student who has memorized every case but never learned legal analysis to write your brief.
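For readers who want to see the mechanics behind that description, the following toy sketch (in Python) shows what “predict the next most likely word” means. Every element here is invented for illustration: the prompt, the candidate words, and the probabilities are made up, and real models work over vast vocabularies learned from training data. The core mechanic, however, is the same: the system selects the statistically likely continuation, not the legally correct one.

```python
# Toy illustration of next-word prediction. The prompt, candidate words, and
# probabilities below are all invented for demonstration purposes only.
toy_model = {
    "The court held that the plaintiff's motion to compel was": {
        "granted": 0.42,    # common phrasing in the (imaginary) training data
        "denied": 0.39,
        "premature": 0.19,
    },
}

def next_word(prompt: str) -> str:
    """Return the statistically most likely continuation for a known prompt."""
    candidates = toy_model[prompt]
    return max(candidates, key=candidates.get)

prompt = "The court held that the plaintiff's motion to compel was"
print(prompt, next_word(prompt))
# The output reads fluently whether or not any such case or holding exists.
```

Fluency and statistical likelihood drive the output; nothing in the mechanism checks whether a cited case exists or says what the sentence claims.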
Legal practice presents unique challenges for AI systems. We spend years learning to distinguish holdings from dicta, interpret precedential value, and recognize factual nuances that affect case applicability. Many judicial opinions are deliberately constrained, with judges limiting holdings to specific facts and circumstances. When AI encounters this specialized domain, it lacks the contextual understanding that legal training provides.
The technical failures follow predictable patterns: AI might cite an overruled case that appears frequently in training data, reference Miranda v. Arizona in a contracts dispute because it associates “rights” and “waiver” language, or fabricate Johnson v. Smith (2019) with perfect facts for your case—except the case never existed. The Butler Snow lawyers’ choice of general-purpose ChatGPT over legal-specific platforms compounded these risks.
While legal-specific platforms like Westlaw AI and Lexis+ reduce hallucination risks through curated databases, they cannot eliminate them entirely. The fundamental problem remains: AI systems optimize for plausible-sounding output, not legal accuracy. Whether using general or specialized platforms, the professional duty to verify every citation remains non-negotiable. Understanding these technical limitations is crucial because rules of ethics require lawyers to comprehend their tools before using them professionally.
Professional Rules: AI Use Under Existing Ethical Framework
The sanctions in Johnson v. Dunn rest on established professional responsibility rules that apply regardless of technology. Under Model Rule 1.1 (Competence) and its Alabama counterpart, lawyers must understand their tools and limitations, maintain reasonable supervision over AI-assisted work, and stay current with technological developments. Model Rule 5.1 requires partners and supervisory lawyers to ensure compliance with professional conduct rules, a duty that extends to delegated work whether performed by humans or machines. This translates directly to AI use: understanding hallucination risks, verifying AI-generated content, and accepting responsibility for final work product regardless of who or what produced it.
Model Rule 3.3 (Candor Toward Court) requires independent confirmation of all cited authorities and prompt correction of discovered errors. Alabama Rule of Professional Conduct 3.3 prohibits knowingly making false statements of material fact or law, with comments emphasizing that assertions must be based on reasonably diligent inquiry. The Butler Snow attorneys’ failure to verify ChatGPT citations violated both competence and candor obligations, a dual professional responsibility breach that made sanctions inevitable.
The key lesson: AI doesn’t create new ethical obligations, but it makes existing duties more critical. Each sanctioned attorney had an independent duty to verify citations regardless of firm policies, AI assistance, or workflow assumptions. Technology changes, but professional responsibility remains constant.
The Butler Snow Lesson: Policies Without Compliance Fail
The Johnson v. Dunn case offers a sobering lesson about the limits of institutional controls. Butler Snow exemplifies sophisticated legal practice. It is a quality firm with experienced attorneys, comprehensive AI policies, training requirements, and supervision protocols. These institutional safeguards should have prevented AI citation failures, yet they utterly failed in this instance.
The case demonstrates that layered supervision systems fail when individuals abdicate their responsibilities. Reeves ignored verification duties, Cranford signed without checking, and Lunsford supervised without supervising. Each attorney apparently assumed someone else would catch the errors, and the result was a collective failure that belongs to each of them.
This institutional paradox extends beyond AI to fundamental questions about professional accountability in modern legal practice. The most sophisticated policies cannot substitute for individual professional judgment and personal responsibility. The case proves that professional competence remains irreducibly individual, regardless of technological tools or institutional frameworks.
Taking Responsibility, Lawyer by Lawyer
Given that even sophisticated institutional controls failed at Butler Snow, what can individual lawyers do to protect themselves and their clients? To translate these lessons into everyday legal practice, lawyers can take these concrete steps to insulate themselves from avoidable mistakes:
Immediate Action Steps
- If you currently use generative AI for legal work:
  - Stop using it for legal research and citations immediately—use traditional legal databases instead
  - If you must use AI, treat it like an unreliable intern: useful for brainstorming, dangerous for citations
  - Create a verification checklist: every citation independently confirmed before any filing (a simple illustrative sketch follows this list)
- Set up systematic verification workflows:
  - Build citation checking into your standard practice routine, not as an afterthought
  - Verify citations immediately after research while sources are fresh
  - Document your verification process and keep notes showing you personally confirmed each source
- For supervisors and signers:
  - Never sign what you haven’t personally reviewed
  - Understand that delegation doesn’t eliminate your verification responsibility
  - If your name is on the filing, you own every citation in it
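To make the checklist idea concrete, here is a minimal, purely illustrative Python sketch that scans a draft for citation-shaped strings and prints them as a verification to-do list. The regular expression is deliberately simplified and will miss or over-capture some citation formats, and the script verifies nothing on its own: every item it prints still requires a lawyer to pull the authority and confirm that it exists, supports the stated proposition, and remains good law.

```python
# Illustrative sketch only: build a manual-verification checklist from a draft.
# It finds citation-shaped strings; it does NOT confirm that any citation is real.
import re
import sys

# Simplified pattern for strings shaped like reporter, Westlaw, or LEXIS cites,
# e.g. "529 F. App'x 987", "2021 WL 1118031", "2025 U.S. Dist. LEXIS 141805".
CITE_PATTERN = re.compile(r"\b\d{1,4}(?:\s+[A-Z][A-Za-z.']*){1,4}\s+\d+\b")

def build_checklist(draft_text: str) -> list:
    """Return a sorted, de-duplicated list of citation-like strings."""
    return sorted(set(CITE_PATTERN.findall(draft_text)))

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        draft = f.read()
    for cite in build_checklist(draft):
        # Each line is a task for a human reviewer, not a conclusion about validity.
        print(f"[ ] verify existence, holding, and current validity: {cite}")
```

The same discipline can be achieved with a paper checklist or a traditional cite-checking assignment; the tool matters far less than the habit of confirming every authority before it goes out the door.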
Personal Professional Development
- Develop practical AI literacy:
  - Learn your tools’ limitations before using them professionally
  - Understand the difference between legal-specific platforms and general AI
  - Practice with AI tools on non-critical work before any court submissions
- Build time management skills around verification:
  - Factor verification time into project timelines from the start
  - Recognize that shortcuts create greater delays when sanctions follow
  - Develop personal systems that make citation checking automatic, not optional
The responsibility is individual and non-delegable. Firm policies, warnings, and experience provide no protection when personal verification duties are ignored.
The Profession’s Systemic Response Crisis
The Johnson case represents more than individual failures—it reveals the profession’s inadequate response to a known crisis. Notwithstanding industry-wide awareness since Mata v. Avianca in 2023, AI citation sanctions continue across federal and state courts, affecting attorneys at all experience levels and firm sizes. The consistency of these failures—almost always involving unverified AI citations—suggests that current professional education approaches are fundamentally inadequate.
Bar associations must move beyond awareness to competence. Current CLE offerings often focus on AI’s potential benefits while inadequately addressing practical limitations and verification requirements. Effective programs should include hands-on exercises showing how AI fails, mandatory verification protocols, and real-world examples of sanctions consequences.
Law schools need immediate curricular reform. AI literacy cannot be confined to specialized courses or eDiscovery classes. Law schools should consider embedding AI verification, citation accuracy, and tool limitations into required legal research and writing courses.
Law firms face an implementation crisis. The Butler Snow case proves that policies without practical training and compliance verification fail catastrophically. Firms must transition from establishing AI policies to ensuring their effective implementation through regular training, competency testing, and accountability measures.
The profession’s response must match the urgency of the problem. Individual accountability remains paramount, but systematic educational reform is essential to prevent the continued erosion of professional competence and public trust. While such reform addresses long-term prevention, individual lawyers face immediate consequences in an increasingly unforgiving judicial landscape.
The Path Forward: Individual Accountability in an Unforgiving Landscape
The Johnson case marks a turning point in judicial tolerance for AI citation failures. Judge Manasco’s measured but firm sanctions—public reprimand, case disqualification, and bar referrals—establish a new baseline for consequences that future courts will likely follow or exceed, not reduce. The days of treating AI hallucinations as innocent mistakes are over. The profession’s credibility and the efficiency of the court system depend on eliminating these failures, and judges will use whatever tools necessary to achieve that goal.
Professional survival requires immediate adaptation. Lawyers can no longer treat AI literacy as optional or assume institutional policies provide personal protection. The choice is stark: develop rigorous verification practices now, or face career-threatening consequences later. Every lawyer using AI—regardless of experience, firm size, or institutional support—bears individual responsibility for every citation they submit.
The profession stands at a crossroads. We can embrace the transformative potential of AI while maintaining our fundamental duty of accuracy, or we can allow continued failures to trigger restrictions that eliminate AI’s benefits entirely. The choice belongs to each individual lawyer, one verification decision at a time.
Conclusion
Johnson v. Dunn underscores that neither experience, resources, nor institutional policies can substitute for individual professional competence. In the age of AI-assisted legal practice, the core duty remains unchanged: verify everything. The obligation to confirm the accuracy of legal citations is not new, but the widespread use of generative AI tools—however powerful or efficient—has dramatically increased the risk of fabricated or mischaracterized authorities slipping into court filings.
The stakes are high. Courts cannot afford the damage to justice and public trust caused by flawed submissions. Judge Manasco imposed meaningful sanctions—not simply to punish, but to deter future misconduct where prior fines have failed. Her approach offers a roadmap for other judges confronting similar issues.
The lesson is clear: lawyers must elevate their understanding and supervision of AI tools. That means more than adopting firm policies—it requires education, vigilance, and a recommitment to the fundamentals of legal practice. Those who meet this challenge will not only protect themselves from sanctions but will also serve their clients and the justice system more effectively.
I, for one, am proud to be a lawyer, and I believe the profession is up to the challenge.
July 31, 2025 © Ralph Artigliere 2025 ALL RIGHTS RESERVED (Published with permission)
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.