
[EDRM Editor’s Note: The opinions and positions are those of Craig Ball. This article is republished with permission and was first published on June 19, 2025.]
Tomorrow, I’m delivering a talk to the Texas Second Court of Appeals (Fort Worth), joined by my friend, Lynne Liberato of Houston. We will address LLM use in chambers and in support of appellate practice, where Lynne is a noted authority. I’ll distribute my 2025 primer on Practical Uses for AI and LLMs in Trial Practice, but will also offer something bespoke to the needs of appellate judges and their legal staff: something to the point, but with cautions crafted to avoid the high-profile pitfalls of lawyers who trust but don’t verify.
Courts must develop practical internal standards for the use of LLMs in chambers. These AI applications are too powerful to ignore and too powerful to use without attention given to safe use.
Chambers Guidance: Using AI Large Language Models (LLMs) Wisely and Ethically
Prepared for Second District Court of Appeals (Fort Worth)
Purpose
This document outlines recommended practices for the safe, productive, and ethical use of large language models (LLMs), such as ChatGPT (GPT-4o), in chambers by justices and their legal staff.
I. Core Principles
- Human Oversight is Essential: LLMs may assist with writing, summarization, and idea generation, but should never replace legal reasoning, human editing, or authoritative research.
- Confidentiality Must Be Preserved: Use only secure platforms. Turn off model training/sharing features (“model improvement”) in public platforms, or use private/local deployments.
- Verification is Non-Negotiable: Never rely on an LLM for case citations, procedural rules, or holdings without confirming them via Westlaw, Lexis, or court databases. Every citation is suspect until verified.
- Transparency Within Chambers: Staff should disclose when LLMs were used in a draft or summary, especially if content was heavily generated. Prompt/output history should be preserved in chambers files.
- Judicial Independence and Public Trust: While internal LLM use may be efficient, it must never undermine public confidence in the independence or impartiality of judicial decision-making. The use of LLMs must not give rise to a perception that core judicial functions have been outsourced to AI.
Craig Ball with Lynne Liberato.
II. Suitable Uses of LLMs in Chambers
- Drafting initial outlines of bench memos or summaries of briefs
- Rewriting judicial prose for clarity, tone, or readability
- Summarizing long records or extracting procedural chronologies
- Brainstorming counterarguments or exploring alternative framings
- Comparing the argumentative strength of the parties’ briefs and identifying inconsistencies within and between them
Note: Use of AI output that may materially influence a decision must be identified and reviewed by the judge or supervising attorney.
III. Prohibited or Cautioned Uses
- Do not insert any LLM-generated citation into a judicial order, opinion, or memo without independent confirmation
- Do not input sealed or sensitive documents into unsecured platforms
- Do not delegate critical judgment or reasoning tasks to the model, such as weighing legal precedent, assessing credibility, or determining binding authority
- Do not rely on LLMs to generate summaries of legal holdings without human review of the supporting authority
IV. Suggested Prompts for Effective Use
These prompts may be useful when paired with careful human oversight and verification:
- “Summarize this 40-page brief into 5 bullet points, focusing on procedural history.”
- “Summarize the uploaded transcript respecting the following points….”
- “Summarize the key holdings and the law in this area.”
- “Rewrite this paragraph for clarity, suitable for a published opinion.”
- “List potential counterarguments to this position in a Texas appellate context.”
- “Explain this concept as if to a first-year law student.”
Caution: Prompts seeking legal summaries (e.g., “What is the holding of X?” or “Summarize the law on Y”) are particularly prone to error and must be treated with suspicion. Always verify output against primary legal sources.
V. Public Disclosure and Transparency
Although internal use of LLMs may not require disclosure to parties, courts must be sensitive to the risk that judicial reliance on AI—even as a drafting aid—may be scrutinized. Consider whether and what disclosure may be warranted in rare cases when LLM-generated language substantively shapes a judicial decision.
VI. Final Note
Used wisely, LLMs can save time, increase clarity, and prompt critical thought. Used blindly, they risk error, overreliance, or breach of confidentiality. The justice system demands precision; LLMs can support it—but only under a lawyer’s and judge’s careful eye and hand.
Prepared by Craig Ball and Lynne Liberato, advocating thoughtful AI use in appellate practice.
Craig Ball.
Of course, the proper arbiters of standards and practices in chambers are the justices themselves; I don’t presume to know better, save to say that any approach that bans LLMs or presupposes AI won’t be used is naive. I hope the modest suggestions above help courts develop sound practical guidance for use of LLMs by judges and staff in ways that promote justice, efficiency, and public confidence.
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.