AI Hallucinations, Sanctions, and Context: What a Florida Disciplinary Case Really Teaches

By Hon. Ralph Artigliere (ret.)

[EDRM Editor’s Note: EDRM is proud to publish the advocacy and analysis of the Hon. Ralph Artigliere (ret.). The opinions and positions expressed are his own. © 2026 Ralph Artigliere.]


There is no doubt that artificial intelligence now offers a powerful upside for high-level legal work. Since generative AI became widely available, legal scholars, technologists, and product developers have demonstrated how these tools, when used properly, can enhance human performance. AI excels at accessing, organizing, and processing enormous volumes of information; lawyers and judges contribute judgment, experience, empathy, and ethical responsibility. The combination can be extraordinarily effective.

But along the way, too many busy professionals have missed an obvious and unforgiving truth: AI can be wrong; the human user remains fully responsible; and verification is not optional. Responsible attorneys must check every citation and assertion in a court filing against original, authoritative sources. There is no shortcut around that obligation.

The failure to observe this basic discipline has produced a growing body of cases involving AI “hallucinations,” meaning fabricated or inaccurate citations and assertions appearing in court submissions. Despite widespread publicity, judicial warnings, ethical guidance, and sanctions, the problem has not abated. A publicly maintained database tracking judicial decisions involving AI-generated hallucinations reports more than 500 such cases in United States courts as of this writing. Courts, facing repeated violations, have escalated sanctions in an effort to deter this conduct. What began with warnings, reprimands, and monetary penalties has, in some cases, progressed to disqualification from representation, mandatory disclosure of sanction orders to clients and courts, and referrals to bar disciplinary authorities. See R. Artigliere and W. Hamilton, Reasonable or Overreach? Rethinking Sanctions for AI Hallucinations in Legal Filings, EDRM (Aug. 18, 2025).

Against that backdrop, it was perhaps inevitable that disciplinary sanctions would move beyond reprimands. In January 2025, the Florida Supreme Court suspended a lawyer from the practice of law for two years in a case that included the filing of pleadings containing hallucinated citations. The Court’s brief order, however, does not describe the underlying misconduct. Those details appear in the referee’s report adopted by the Court, and the context matters. The suspension did not rest on hallucinated citations alone, but on a broader pattern of conduct that went well beyond isolated or inadvertent AI-related errors.

The Case in Context: What Neusom Is—and Is Not

The Florida Bar v. Thomas Grant Neusom, Discipline Case #201950315, is not a clean example of an isolated AI mistake escalating into suspension. The Florida Supreme Court’s order is brief and largely procedural; the Bar complaint and the referee’s report supply the substantive findings. Those materials describe a lawyer who repeatedly ignored court orders, reasserted arguments already rejected by the court, misrepresented legal authority, and failed to correct fabricated citations even after the deficiencies were identified by opposing counsel.

Hallucinated citations in Neusom corroborated broader misconduct; they did not define it.

Hon. Ralph Artigliere (ret.).

The record reflects a broader, sustained pattern of misconduct. In addition to inaccurate and fabricated citations, the referee found repeated violations of local rules, mislabeling of filings, improper attempts to re-litigate jurisdictional issues, filing of a bankruptcy petition in bad faith for the sole purpose of avoiding an eviction, and a failure to appear at the disciplinary hearing. The case involved prior federal court sanctions for misconduct and misrepresentation. Viewed as a whole, the referee concluded that the conduct demonstrated a fundamental breakdown in candor, diligence, and respect for the tribunal.

In that context, AI-generated hallucinations were not the trigger for discipline. They functioned as corroborating evidence of professional unfitness already established by a larger pattern of behavior. That distinction matters. If sanctions imposed in cumulative-misconduct cases become precedent for suspending lawyers based solely on isolated hallucination errors, disciplinary doctrine risks shifting from culpability-based assessment to outcome-based punishment untethered from intent or proportionality.

Unintended Fallout: AI Literacy, Deterrence, and the Risk of Overcorrection

There is a reason that bar associations publish summaries of disciplinary cases. They serve an important function by reminding lawyers of their professional obligations, identifying conduct that falls below ethical standards, and promoting education and deterrence. For those efforts to succeed, however, educational messaging must accurately reflect the nature of the misconduct at issue.

Judges and sanctioning bodies operate under a different mandate. Sanctions must fit the case, grounded in fairness and proportionality. Disciplinary decisions require calibration and differentiation, with outcomes anchored to specific findings of intent, degree of culpability, and actual or potential harm. When sanctioned conduct is mentioned without context, the risk is not under-enforcement, but overcorrection.

Neusom illustrates that courts and disciplinary authorities will scrutinize careless or improper use of generative AI. That scrutiny is both appropriate and necessary. At the same time, imprecise characterization of the case risks producing a broader and unintended effect: growing distrust or avoidance of AI tools altogether. That reaction would be counterproductive. Lawyers and judges who take the time to understand how generative AI systems work, and who apply basic verification discipline, can use these tools safely and effectively. The lesson should be competence, not fear.

Context matters. Once disciplinary cases enter bar publications, judicial education programs, CLE materials, and ethics updates, they are often absorbed as shorthand or headlines rather than read as records. Over time, nuance fades, and cases risk being remembered for a single fact rather than the full context that justified the sanction. This risk increases when the sanction is as serious as in Neusom.

When disciplinary cases are remembered as headlines rather than read as records, proportionality is the first casualty.

Hon. Ralph Artigliere (ret.).

This observation is not offered as criticism of bar publications, CLE, or the goal of promoting AI literacy. Both that goal and the vehicles for delivering it are essential. The concern is more subtle. When AI misconduct sanctions are cited without context, lawyers may conclude that any hallucination error threatens their license. That perception discourages engagement, learning, and transparency at precisely the moment the profession most needs informed, responsible adoption of AI tools.

What Judges Should Take from Neusom

The central lesson of Neusom is not that the use of generative AI in legal drafting warrants suspension when errors occur. Rather, it is that courts and disciplinary authorities will closely scrutinize a lawyer’s conduct when AI-generated errors appear as part of a broader pattern of disregard for professional obligations.

Neusom does not stand for the proposition that an isolated hallucination, standing alone, justifies referral to disciplinary authorities or severe sanctions. The referee’s findings reflect repeated failures of candor, diligence, and compliance with court orders over an extended period, combined with an inability or unwillingness to correct errors when they were identified. In that context, hallucinated citations served as corroborating evidence of unfitness, not as the sole or primary basis for discipline.

Judges confronting AI-related errors in their own courtrooms should therefore resist outcome-based shortcuts. The presence of fabricated or inaccurate citations is serious and must be addressed, but the appropriate response depends on more than the error itself. Context, intent, repetition, corrective behavior, and actual or potential harm all matter. Treating Neusom as a per se rule risks collapsing these distinctions and converting negligence into presumed bad faith.

To avoid that result, courts need a principled way to distinguish between isolated AI-related mistakes and misconduct that reflects a deeper failure of professional responsibility. That distinction turns not on whether AI was involved, but on the lawyer’s conduct before, during, and after the error occurred.

The Fork in the Road for Future Sanctions

It is reasonable to expect that Neusom will be cited by courts, The Florida Bar, and other disciplinary authorities in future cases involving AI hallucinations. The profession now stands at a fork in the road.

One path treats hallucinated citations primarily as an outcome problem. Under that approach, the presence of false authority becomes the dominant fact, and sanctions escalate largely based on the error’s existence and visibility. The other path treats hallucinations as a culpability problem. That approach asks how and why the error occurred, what the lawyer knew or should have known, how the lawyer responded once the error was discovered, and whether the conduct reflects negligence, recklessness, or something closer to intentional misconduct.

If Neusom is read as endorsing outcome-based escalation, the result will be inconsistent sanctions and unnecessary chilling of responsible AI use. If it is read as reinforcing culpability-based analysis, it can help courts and disciplinary bodies respond firmly where warranted, while preserving proportionality and fairness in cases involving isolated AI-related mistakes.

A Culpability-Based Framework for Sanctioning AI Hallucinations

To avoid outcome-driven escalation and inconsistent discipline, courts need a principled and repeatable way to assess AI-related misconduct. A culpability-based framework supplies that structure: it ties the severity of the sanction to the lawyer's state of mind and conduct rather than to the mere existence of an error.

There is an arguable case for coming down hard in hallucination cases. Submitting false citations or inaccurate legal assertions wastes judicial resources, imposes unnecessary costs on opposing parties, and can threaten just outcomes. The ethical duties of competence, candor, and diligence are clear, and verifying citations before filing is a low bar that every lawyer is expected to meet. The problem is also evolving: what began with fabricated case citations has expanded to include AI-generated errors in expert reports, exhibits, affidavits, and even draft court orders. Courts and bar associations therefore have ample support for imposing serious sanctions as a deterrent.

But reasonable sanctions are, by definition, contextual. Failures of candor exist on a spectrum, and not every hallucination reflects the same degree of culpability. Some errors stem from carelessness or inadequate verification. Others arise from breakdowns in supervision, communication, or training. Still others reflect knowing misrepresentation or persistent disregard of court directives. Treating all hallucination cases as morally equivalent collapses these distinctions and risks converting negligence into presumed bad faith.

Neusom falls at the corrosive end of the misconduct continuum. Not all hallucination cases look like that. Many errors occur without intent to deceive or gain unfair advantage, and even highly competent lawyers can make mistakes. Sanctions must be measured, thoughtful, and aligned with the conduct at issue.

As a matter of deterrence and correction, a fine and a reprimand would devastate almost any lawyer, and those remedies would likely reform not only the sanctioned lawyer's future behavior but also that of colleagues in the firm and members of the broader legal community who learn of the sanction. That is motivation, deterrence, and a balanced result. Learning experiences that sting need not be career-ending in every case.

A useful way to achieve that alignment is the four-pillar approach developed by Professor William Hamilton. See R. Artigliere and W. Hamilton, Reasonable or Overreach? Rethinking Sanctions for AI Hallucinations in Legal Filings, EDRM (Aug. 18, 2025). Under this framework, courts and disciplinary bodies assess AI-related misconduct by examining:

State of Mind and Intent

Whether the conduct reflects negligence, recklessness, knowing misrepresentation, or deliberate deceit.

Verification and Process Failures

What steps were taken to verify AI-generated content before filing, and whether failures reflect individual carelessness or systemic breakdowns in supervision or training.

Response Once the Error Was Identified

Whether the lawyer promptly corrected the error, disclosed it to the court, and took responsibility, or instead ignored, minimized, or persisted in the misconduct.

Actual or Potential Harm

The degree of prejudice to the opposing party, the burden imposed on the court, and the risk to the integrity of the proceeding.

This framework does not excuse hallucinations. It recognizes them as serious professional failures. But it anchors sanctions in culpability rather than outcome alone. By doing so, it allows courts to impose severe sanctions where conduct warrants it, while avoiding reactionary escalation in cases involving isolated AI-related mistakes. That balance is essential if discipline is to deter misconduct without chilling responsible learning and adoption of tools that are now part of modern legal practice.

Education, Competence, and the Path Forward

Any discussion of sanctions for AI hallucinations would be incomplete without acknowledging a deeper problem. The legal profession is still struggling with AI education. Generative AI is not a passing phenomenon, and lawyers and judges will continue to encounter it in practice. Institutions ranging from law schools to bar associations therefore bear responsibility for ensuring that legal professionals understand how these tools work, how they fail, and how to use them responsibly.

That task is not simple. AI technology evolves rapidly, and lawyers and judges operate under constant time and resource constraints. With heavy substantive and procedural demands already competing for attention, the bandwidth for technology education is limited. Educational efforts must therefore be practical, accessible, and focused on real-world risk. While there is now a growing market for AI education, the sheer volume of offerings creates its own challenge: determining which sources are reliable, effective, and worth the investment of scarce time.

The Florida Bar has taken meaningful steps in this direction. Ethics Opinion 24-1, amendments to the Rules Regulating The Florida Bar addressing AI-related competence and supervision, In re Amendments to Rules Regulating the Fla. Bar, 393 So. 3d 137 (Fla. 2024), and practical guidance such as The Florida Bar Guide to Getting Started with AI all reflect a growing recognition that AI literacy is now part of professional competence. Other jurisdictions are moving in the same direction.

Yet the persistence of hallucinated content in court filings suggests that the message is not fully landing. The solution is not harsher sanctions alone, but better education coupled with clear expectations. Avoiding hallucinations requires understanding the limits of AI systems, using them with a human firmly in the loop, and verifying all citations and factual assertions against original sources before filing. Those obligations apply regardless of whether work is generated by a junior associate, a paralegal, or an AI tool. Sanctions should reinforce that discipline, while education should make compliance achievable.

Conclusion

The rise of AI-generated hallucinations in court filings presents a real and serious challenge for the legal profession. Courts and disciplinary authorities are right to respond firmly to conduct that undermines candor, competence, and the integrity of the judicial process. Sanctions remain an essential tool for deterrence and correction.

But discipline works best when it is anchored in context and culpability. Neusom, properly understood, reinforces that principle rather than undermining it. The case illustrates how AI-related errors can corroborate a broader pattern of misconduct. Reading it as a per se rule risks outcome-driven escalation untethered from intent, proportionality, and long-standing disciplinary norms.

As generative AI becomes embedded in legal practice, the profession’s task is twofold: to enforce existing ethical obligations with clarity and consistency, and to ensure that lawyers and judges are equipped to meet those obligations through meaningful education and supervision. Sanctions should deter misconduct, not discourage responsible engagement with tools that, when properly understood and verified, can enhance rather than erode the administration of justice.


January 9, 2026 © 2026 Ralph Artigliere. ALL RIGHTS RESERVED (Published with permission.)
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.
