Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers

[EDRM Editor’s Note: This article will be published at 26 Sedona Conf. J. 1 (forthcoming 2025). We are grateful to The Sedona Conference® for permission to republish. The opinions and positions expressed are those of the authors.]


EDRM Citation:
Hon. Herbert B. Dixon Jr. et al., Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers, 26 SEDONA CONF. J. 1 (forthcoming 2025), https://thesedonaconference.org/sites/default/files/publications/Navigating%20AI%20in%20the%20Judiciary_PDF_021925_1.pdf.

Copyright 2025, The Sedona Conference®

For this and additional publications see: https://thesedonaconference.org/publications.


Hon. Herbert B. Dixon, Jr., Hon. Allison H. Goddard, Prof. Maura R. Grossman, Hon. Xavier Rodriguez, Hon. Scott U. Schlegel, and Hon. Samuel A. Thumma

Five judges and a lawyer/computer science professor walked into a bar . . . well, not exactly. But they did collaborate
as members of the Working Group on AI and the Courts, part of the ABA’s Task Force on Law and Artificial Intelligence, to develop the following guidelines for the responsible use of AI by judicial officers. The guidelines reflect the consensus view of these Working Group members only, and not the views of the ABA, its Law and AI Task Force, The Sedona Conference, or any other organizations with which the authors may be affiliated.

The authors include:

  • Dr. Maura R. Grossman, a Research Professor in the Cheriton School of Computer Science at the University of Waterloo and an Adjunct Professor at Osgoode Hall Law School of York University, who serves as a special master in both U.S. state and federal courts;
  • Hon. Herbert B. Dixon, Jr., Senior Judge of the Superior Court of the District of Columbia;
  • Hon. Allison H. Goddard, U.S. Magistrate Judge of the U.S. District Court for the Southern District of California;
  • Hon. Xavier Rodriguez, U.S. District Judge of the U.S. District Court for the Western District of Texas;
  • Hon. Scott U. Schlegel, Judge of the Louisiana Fifth Circuit Court of Appeal; and
  • Hon. Samuel A. Thumma, Judge of the Arizona Court of Appeals, Division One.

We hope you will find these guidelines useful in your work as judges. They provide a framework for how you can use AI and Generative AI responsibly as judicial officers.


Guidelines for U.S. Judicial Officers Regarding the Responsible Use of Artificial Intelligence

These Guidelines are intended to provide general, non-technical advice about the use of artificial intelligence (AI) and generative artificial intelligence (GenAI) by judicial officers and those with whom they work in state and federal courts in the United States. As used here, AI describes computer systems that perform tasks normally requiring human intelligence, often using machine-learning techniques for classification or prediction. GenAI is a subset of AI that, in response to a prompt (i.e., query), generates new content, which can include text, images, sound, or video. While GenAI is the primary impetus for and focus of these Guidelines, many of the use cases described below may involve AI, GenAI, or both. These Guidelines are not intended to be exhaustive, nor are they the final word on this subject.

I. FUNDAMENTAL PRINCIPLES

An independent, competent, impartial, and ethical judiciary is indispensable to justice in our society. This foundational principle recognizes that judicial authority is vested solely in judicial officers, not in AI systems. While technological advances offer new tools to assist the judiciary, judicial officers must remain faithful to their core obligations of maintaining professional competence, upholding the rule of law, promoting justice, and adhering to applicable Canons of Judicial Conduct.

In this rapidly evolving landscape, judicial officers and those with whom they work must ensure that any use of AI strengthens rather than compromises the independence, integrity, and impartiality of the judiciary. Judicial officers must maintain impartiality and an open mind to ensure public confidence in the justice system. The use of AI or GenAI tools must enhance, not diminish, this essential obligation.

Although AI and GenAI can serve as valuable aids in performing certain judicial functions, judges remain solely responsible for their decisions and must maintain proficiency in understanding and appropriately using these tools. This includes recognizing that when judicial officers obtain information, analysis, or advice from AI or GenAI tools, they risk relying on extrajudicial information and influences that the parties have not had an opportunity to address or rebut.

The promise of GenAI to increase productivity and advance the administration of justice must be balanced against these core principles. An overreliance on AI or GenAI undermines the essential human judgment that lies at the heart of judicial decision-making. As technology continues to advance, judicial officers must remain vigilant in ensuring that AI serves as a tool to enhance, not replace, their fundamental judicial responsibilities.

Judicial officers and those with whom they work should be aware that GenAI tools do not retrieve responses the way traditional search engines do. GenAI tools generate content using complex algorithms, based on the prompt they receive and the data on which the tool was trained. The response may not be the most correct or accurate answer. Further, GenAI tools do not engage in the traditional reasoning process used by judicial officers, nor do they exercise judgment or discretion, two core components of judicial decision-making. Users of GenAI tools should be cognizant of these limitations.

Users must exercise vigilance to avoid becoming “anchored” to the AI’s response, a phenomenon sometimes called “automation bias,” in which humans trust AI responses as correct without validating the results. Similarly, users of AI need to account for confirmation bias, in which a human accepts the AI’s results because they appear consistent with the beliefs and opinions the user already holds. Users also need to be aware that, under local rules, they may be required to disclose the use of AI or GenAI tools, consistent with their obligation to avoid ex parte communications.

Ultimately, judicial officers are responsible for any orders, opinions, or other materials produced in their name. Accordingly, any such work product must always be verified for accuracy when AI or GenAI is used.

II. JUDICIAL OFFICERS SHOULD REMAIN COGNIZANT OF THE CAPABILITIES AND LIMITATIONS OF AI AND GENAI

GenAI tools may use prompts and information provided to them to further train their model, and their developers may sell or otherwise disclose information to third parties. Accordingly, confidential or personally identifiable information (PII), health data, or other privileged or confidential information should not be used in any prompts or queries unless the user is reasonably confident that the GenAI tool being employed ensures that information will be treated in a privileged or confidential manner. For all GenAI tools, users should pay attention to the tools’ settings, considering whether there may be good reason to retain, or to disable or delete, the prompt history after each session.

Particularly when an AI or GenAI tool is used as an aid in making pretrial release decisions, determining consequences following a criminal conviction, or resolving other significant matters, how the tool has been trained and tested for validity, reliability, and potential bias is critically important. Users of AI or GenAI tools for these purposes should exercise great caution.

Other limitations or concerns include:

  • The quality of a GenAI response will often depend on the quality of the prompt provided. Even responses to the same prompt can vary on different occasions.
  • GenAI tools may be trained on information gathered from the Internet generally or from proprietary databases, and they are not always trained on non-copyrighted or authoritative legal sources.
  • The terms of service for any GenAI tool used should always be reviewed for confidentiality, privacy, and security considerations.

GenAI tools may provide incorrect or misleading information (commonly referred to as “hallucinations”). Accordingly, the accuracy of any responses must always be verified by a human.

III. POTENTIAL JUDICIAL USES FOR AI OR GENAI

Subject to the considerations set forth above:

  • AI and GenAI tools may be used to conduct legal research, provided that the tool was trained on a comprehensive collection of reputable legal authorities and the user bears in mind that GenAI tools can make errors;
  • GenAI tools may be used to assist in drafting routine administrative orders;
  • GenAI tools may be used to search and summarize depositions, exhibits, briefs, motions, and pleadings;
  • GenAI tools may be used to create timelines of relevant events;
  • AI and GenAI tools may be used for editing, proofreading, or checking spelling and grammar in draft opinions;
  • GenAI tools may be used to assist in determining whether filings submitted by the parties have misstated the law or omitted relevant legal authority;
  • GenAI tools may be used to generate standard court notices and communications;
  • AI and GenAI tools may be used for court scheduling and calendar management;
  • AI and GenAI tools may be used for time and workload studies;
  • GenAI tools may be used to create unofficial/preliminary, real-time transcriptions;
  • GenAI tools may be used for unofficial/preliminary translation of foreign-language documents;
  • AI tools may be used to analyze court operational data, routine administrative workflows, and to identify efficiency improvements;
  • AI tools may be used for document organization and management;
  • AI and GenAI tools may be used to enhance court accessibility services, including assisting self-represented litigants.

IV. IMPLEMENTATION

These Guidelines should be reviewed and updated regularly to reflect technological advances, emerging best practices in AI and GenAI usage within the judiciary, and improvements in AI and GenAI validity and reliability. As of February 2025, no known GenAI tools have fully resolved the hallucination problem, i.e., the tendency to generate plausible-sounding but false or inaccurate information. While some tools perform better than others, human verification of all AI and GenAI outputs remains essential for all judicial use cases.

Read the original publication here.


Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

  • The Sedona Conference (TSC) is a nonprofit, 501(c)(3) research and educational institute dedicated to the advanced study of law and policy in the areas of antitrust law, complex litigation, intellectual property rights, and data security and privacy law. TSC was founded in 1997 by Richard G. Braman, who practiced in the areas of antitrust law, intellectual property, and complex litigation. TSC succeeds through the generous contributions of time by its faculties and Working Group members, and is able to fund its operations primarily through the financial support of its members, conference registrants, and sponsors.
