[Editor’s Note: EDRM is proud to publish the advocacy and analysis of the Hon. Ralph Artigliere (ret.). The opinions and positions are those of Judge Artigliere (ret.). © Ralph Artigliere, January 20, 2025.]
Despite clear judicial warnings and sanctions, legal professionals continue to submit AI-generated court documents with fabricated content. This disturbing trend, exemplified by cases like Mata v. Avianca, threatens the integrity of our judicial system and has now extended to expert witnesses. Landmark cases and ethical guidelines have established clear precedents on AI misuse, yet the persistent submission of unverified content raises serious concerns about technological competence within the legal profession.
The Mounting Cost of AI Negligence
While generative AI platforms only entered mainstream use in November 2022, the legal profession has had sufficient time to adapt. Different attorneys have approached these tools at different paces – some as early adopters, others more cautiously – but the fundamental requirement to verify citations and content remains unchanged. There is no excuse for continuing to submit false citations and hallucinated content.
A shot heard round the world came with Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023), in which the court sanctioned an attorney for including fake, AI-generated legal citations in a filing and explained why it was unacceptable. That case has been widely publicized, commented upon, cited by courts, and referenced in ethical guidelines in multiple jurisdictions. Yet many lawyers continue to make the same mistake, with the same result and notoriety of the kind no lawyer wants or needs. See, e.g., Park v. Kim, 91 F.4th 610, 614–16 (2d Cir. 2024) (referring attorney for potential discipline for including fake, AI-generated legal citations in a filing); Kruse v. Karlan, 692 S.W.3d 43, 53 (Mo. Ct. App. 2024) (dismissing appeal because litigant filed a brief with multiple fake, AI-generated legal citations).
Even more troubling, this pattern of AI-generated misinformation has now spread to expert evidence: expert testimony, reports, and affidavits containing fake citations and false information generated by AI have been offered by lawyers in court. See, e.g., Kohls v. Ellison, 2025 U.S. Dist. LEXIS 4928 (D. Minn. Jan. 10, 2025); In re Matter of Weber, 2024 NY Slip Op 24258 (N.Y. Surrogate's Ct. Oct. 10, 2024). Responsibility for these errors lies squarely with the lawyers sponsoring the testimony, because they took no steps to prevent it.
Lawyers need to step up their game. Our profession must now fully embrace its ethical and professional obligations regarding AI use. The requirements are clear, and the solutions are straightforward. While generative AI platforms and tools are invaluable when used properly, they require three fundamental elements: sufficient knowledge, understanding of the tools, and careful attention to detail. These align perfectly with our existing professional responsibilities and with the skill sets all lawyers should possess. The time for learning curves has passed.
The Stakes: Justice, Ethics, and Professional Integrity
Courts cannot and should not tolerate AI-generated fabrications in legal submissions. As officers of the court, lawyers bear a fundamental responsibility to ensure accuracy and fairness in their submissions. See Kohls v. Ellison, 2025 U.S. Dist. LEXIS 4928, at *13–14. The ABA’s Model Rules of Professional Conduct, Rule 3.3, explicitly requires truthfulness and mandates correction of any false statements previously made to the tribunal. These aren’t mere guidelines – they are core principles upon which our legal system’s integrity depends. For good reason, lawyers who fail to meet these responsibilities are subject to disciplinary actions, sanctions, or damage to their professional reputation.
Bad Behavior Begets Serious Problems: When lawyers or their experts submit false information generated by AI, several things can happen, all of them bad. An injustice can occur when the court relies on the false information. Even when the falsity is uncovered, it wastes the opposing party’s time and money and the court’s time and resources, and it damages the reputation of the legal system. See Morgan v. Cmty. Against Violence, No. 23-cv-353-WPJ/JMR, 2023 WL 6976510, at *8 (D.N.M. Oct. 23, 2023). The lawyer, the expert, or both can suffer sanctions and irreparable harm to their reputations.
Systemic Reaction to Bad Behavior Needs to be Measured: When these unfortunate cases recur, legitimate and important uses of generative AI for efficient and effective advocacy can be squelched by mistrust and fear of the technology. Courts can overreact, and some have, by placing broad restrictions on the use of AI. See Hon. Xavier Rodriguez, Artificial Intelligence (AI) and the Practice of Law, 24 The Sedona Conference Journal 782 (Sept. 2023); J. Kubicki, Lead Memo: Court blunders with a GenAI disclosure form, Brainyacts (Oct. 4, 2024). For a list of many judicial orders and preferences regarding AI, see EDRM, Repository of Judicial Standing Orders Including AI Segments, found at https://edrm.net/judicial-orders-2/. On the other hand, the new Illinois Supreme Court Policy on Artificial Intelligence (effective Jan. 1, 2025) takes an informative, enlightened, and measured approach to guidance on generative AI, a trend that should be encouraged.
Overreactions by individuals and courts often stem from an outdated understanding of generative AI tools, despite their significant improvements in output and ease of use. Developers are continuously enhancing these platforms, yet they are unlikely to become reliable turnkey producers of legal work without human oversight. Fortunately, this issue is solvable when lawyers understand the causes of hallucinations and learn to select and use the appropriate AI tools effectively.
Root Causes: Understanding the AI Competency Gap
Lawyers are ethically obligated to understand the technology they use, as mandated by Comment 8 of ABA Model Rule 1.1 and its state counterparts. Yet when it comes to generative AI, many practitioners fail to grasp both its capabilities and limitations. This is not necessarily intentional misconduct, as the recent Kohls v. Ellison case demonstrates. That case involved an expert declaration offered in court by the Attorney General of Minnesota. Attorney General Ellison maintained that his office had no idea that its expert, Professor Hancock, had included fake citations in his declaration, and counsel for the Attorney General sincerely apologized at oral argument for the unintentional fake citations in the Hancock Declaration. The Court took them at their word. 2025 U.S. Dist. LEXIS 4928, at *11. Even experienced institutions like the Minnesota Attorney General’s office can fall prey to AI-generated fabrications without realizing it.
I submit that lawyers who make this mistake simply do not understand how generative AI functions and what is required to use it properly and safely. Lawyers must understand that, as a fundamental feature of its creative side, generative AI can produce output that appears realistic but is misleading or made up by the AI itself. When generative AI is used to draft a pleading, brief, or expert declaration or testimony, the citations and supporting assertions it supplies may be entirely fabricated, or may point to a real source that does not contain the language attributed to it. These fabrications have been called hallucinations.
Specific professional and ethical guidance on generative AI from California, Florida, New York, and other sources cautions lawyers to become proficient with the technology before using it. That means knowing the tools and verifying the output of generative AI before submitting it to the court. To avoid hallucination issues and safely capitalize on the power of artificial intelligence, lawyers need to understand how generative AI works and how to use it safely, and that understanding comes through study and practice.
Fortunately, the solution to correcting these errors is straightforward and well within the requisite skill set of lawyers.
The Path Forward: Responsible AI Integration in Legal Practice
The Illinois Supreme Court Policy on Artificial Intelligence is the most recent expression by courts on this issue, and the policy succinctly and effectively states the obligation:
Attorneys, judges, and self-represented litigants are accountable for their final work product. All users must thoroughly review AI-generated content before submitting it in any court proceeding to ensure accuracy and compliance with legal and ethical obligations. Prior to employing any technology, including generative AI applications, users must understand both general AI capabilities and the specific tools being utilized.
Illinois Supreme Court, Illinois Supreme Court Policy on Artificial Intelligence (effective Jan. 1, 2025).
This is a recurring theme on a critical issue. The obligation is clear. The path to getting it done is achievable. Now all it takes is for lawyers to take heed and comply.
Tool Selection: Picking the correct tool means finding generative AI that suits the task. Legal research and writing demand precision, yet many widely available generative AI platforms, such as OpenAI’s ChatGPT, Anthropic’s Claude, and Microsoft’s Copilot, are designed for broad applications. These tools are trained on vast, general-purpose datasets and prioritize creativity, often producing responses that are “outside the box.” However, this can lead to inaccuracies, irrelevance, or even the fabrication of cases or legal principles.
To address these risks, specialized legal-centric AI tools are emerging. For example, tools like LexisNexis +AI and Westlaw’s CoCounsel use proprietary, closed-source datasets curated specifically for legal accuracy. These tools aim to minimize errors by leveraging algorithms fine-tuned for legal research. Despite these advancements, hallucinations and misinterpretations—such as misstating case law or misunderstanding the current state of the law—can still occur.
Once a tool is selected, it is essential to understand how it works and the proper way to use it. There are fundamental best practices common to the use of most generative AI tools, such as the manner of communicating with, or prompting, the model.
Process Integration: The manner in which generative AI platforms are prompted (prompt engineering) can improve accuracy, but the only way to ensure that a cited case or authority is accurate is to check it carefully: verify that it exists, that it contains the language cited, and that it remains valid authority that has not been overturned or superseded. This is exactly what a lawyer should do with all work proposed by a law clerk, associate, or even co-counsel before submitting it to the court under the lawyer’s signature.
The Power of Human-AI Collaboration: There is a name for the most fundamental way to avoid the danger of AI hallucinations: keeping a “human in the loop.” When using experts, ensure they have done the same by questioning them carefully. In Kohls v. Ellison, the Court reminded counsel and the Attorney General that Fed. R. Civ. P. 11 imposes a “personal, nondelegable responsibility” to “validate the truth and legal reasonableness of the papers filed” in an action and suggested that “an ‘inquiry reasonable under the circumstances,’ Fed. R. Civ. P. 11(b), may now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.” 2025 U.S. Dist. LEXIS 4928, at *12.
Given these considerations, human oversight remains the cornerstone of responsible AI use in legal practice. However, there is an enormous side benefit to the collaborative approach. The quality of work done with the assistance of AI is better when the lawyer works along with the tool rather than having it do the work on its own. Humans excel in areas requiring judgment and contextual understanding, while AI offers unparalleled speed and analytical capabilities. Working together, the lawyer and the tool ensure that generative AI enhances, rather than replaces, legal expertise and experience. For example, AI can theoretically draft a brief for you, but the lawyer’s expertise, voice, and verification of content and authorities are essential. Alternatively, the lawyer can draft the brief and then use AI to check for messaging, logic, consistency, and the strength and influence of arguments. This approach combines the creative input of the lawyer with the analytical power of AI, ultimately improving the quality of the work product.
My advice concerning this issue has been consistent: “Keeping humans in the loop to review, refine, and verify AI output—and allowing AI to analyze human drafts—ensures that efficiency is maximized without compromising ethical standards. Lawyers must remain in control, providing human oversight to ensure accuracy, context, and ethical compliance. This ‘human-in-the-loop’ approach allows AI to function as a co-intelligence rather than a replacement.” See Artigliere, R., Legal Professionals Should Embrace Generative AI Now and Join a New Era of Co-Intelligence, EDRM (Oct. 9, 2024).
Quick Reference Checklist: Verifying AI-Generated Legal Content
✓ Citation Verification
- Confirm each case citation exists in official reporters/databases
- Verify quoted language appears in the cited case
- Check if case is still good law
- Confirm case actually supports the stated proposition
✓ Expert Evidence Review
- Ask experts directly if AI was used in preparing materials
- Request documentation of research/analysis methods
- Verify any citations or data sources in expert reports
- Confirm qualifications and basis for expert opinions
✓ AI Tool Assessment
- Use specialized legal AI tools when available rather than general-purpose AI
- Document which AI tool was used and how it was prompted
- Maintain detailed records of AI interactions for accountability
- Understand the specific limitations of your chosen AI tool
✓ Content Validation
- Cross-reference key facts with original source documents
- Verify any statistics, data, or numerical claims
- Check for internal consistency throughout the document
- Compare AI output with human-sourced research
✓ Final Review Steps
- Have a qualified human attorney review all AI-generated content
- Validate all substantive legal arguments independently
- Ensure compliance with relevant court rules on AI use
- Document your verification process
Remember: The responsibility for accuracy remains with the submitting attorney, regardless of the AI tools or human assistants used to generate content.
The framework for responsible AI use in legal practice is now well-established. Success requires not just compliance with these guidelines but embracing them as fundamental to modern legal practice. The technology will continue to evolve, but these core principles of professional responsibility remain constant.
CONCLUSION
The legal profession is at a turning point in its relationship with artificial intelligence. The recent sanctions and missteps serve as a wake-up call, highlighting the need for transformation. The path forward is clear: lawyers must combine AI’s potential with their own expertise, acting as vigilant supervisors rather than passive users. By embracing this approach, we can elevate the practice of law while safeguarding its integrity. The courts have spoken. The ethical guidelines are clear. The tools and guidance are in place—now it’s time for the profession to lead responsibly. Our future depends not on using AI, but on using it wisely.
January 20, 2025 © Ralph Artigliere. ALL RIGHTS RESERVED (Published with permission.)
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.