[Editor’s Note: EDRM is proud to publish the Hon. Ralph Artigliere’s (ret.) advocacy and analysis. The opinions and positions are Judge Artigliere’s (ret.). December 5, 2023 © Ralph Artigliere. The author gratefully acknowledges the collaboration and assistance of University of Florida Levin College of Law Senior Legal Skills Professor William Hamilton in the preparation of this article.]
Generative AI is powering an explosion of innovative public and private sector models, though this burgeoning industry remains in its infancy. GenAI products gained widespread attention and availability in one amazing year. Lawyers, law firms, and their clients are bombarded with information about the tremendous potential of these powerful tools as well as the dangers of their misuse or abuse. Judges, bar leaders and staff, and committees are diligently working to understand the emerging technology and to find pathways and guidelines for its effective and safe use in the system of justice. The state bar associations in California and Florida are considering or adopting generative AI guidelines and ethical rules. But is the impulse to move quickly to control the use of AI by lawyers creating overbroad constraints and undue stigma on valuable tools for justice?
The Fifth Circuit’s proposal to require a certification on the use of AI in its certificate of compliance was noticed in mid-November, and the court is accepting public comment on the proposal through Jan. 4. What has prompted this proposal? Is the Fifth Circuit flooded with briefs containing generative AI “hallucinations” or “erroneous claims” so subtle and hidden that they will escape the normal due diligence review of the judges, clerks, and opposing counsel? The first such rule from a federal circuit court will undoubtedly be widely followed and have broad impact, good or bad. This article focuses on the bad and renews warnings by others before me that overbroad restraints on the use of generative AI will result in unintended harm to the cause of justice.
Avoid the Reactive Impulse to Over-Regulate
With any new perceived danger to justice, there is an impulse to do something. In the case of generative AI, the underlying logic seems to be that something this powerful could have profound negative effects along with its positive contributions, and that we must root out hidden harms that must be growing like wild weeds under the calm surface. The truth is quite the opposite. There is no evidence of widespread abuse of generative AI in the practice of law. All we have is a series of anecdotal stories that, ironically, were detected and corrected. The vast majority of the thousands of state and federal courts across the nation recognize that the rare instances of generative AI misuse are easily remedied in our adversarial process using the civil procedure and professional responsibility rules already in place. However, courts and bar associations make big news when issuing another restrictive generative AI order.
Good faith efforts to provide guardrails for generative AI are laudable in principle. The challenge for regulators is the novelty, complexity, and rapidly advancing state of AI products, including generative AI. Lawyers need to understand the pitfalls and dangers of underinformed use of generative AI. Product selection, platform knowledge, and human oversight of AI-generated work product can manage issues such as confidentiality and hallucinations. Properly tailored ethical guidelines and court rules can serve as effective reminders of counsel’s duties. However, the impact of overbroad or unnecessary regulation outweighs the actual risk posed by constantly improving and increasingly tailored generative AI products. The unintended consequence of the barrage of limitations, warnings, and adverse images generated by proposed and adopted rules is a distinct chilling effect on the use of potentially game-changing tools.
New Fifth Circuit Proposed Rule Requiring Disclosure Concerning AI
The Fifth Circuit’s proposed rule change would require a disclosure on the use of AI in its certificate of compliance. Public comment is due by January 4. The proposed Rule 32.3 reads:
…counsel and unrepresented filers must further certify that no generative artificial intelligence program was used in drafting the document presented for filing, or to the extent such a program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.
Fifth Circuit proposed Rule 32.3.
But that is not the worst of it. The proposed amendment to FORM 6 (CERTIFICATE OF COMPLIANCE) requires litigants and counsel to check one of two alternative boxes above their signature of compliance:
3. This document complies with the AI usage reporting requirement of 5th Cir. R. 32.3 because:
□ no generative artificial intelligence program was used in the drafting of this document, or
□ a generative artificial intelligence program was used in the drafting of this document and all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.
Id.
The court is invading the work product of counsel with a rule that will have no practical effect but that will chill the use of generative AI. Lawyers already understand that they are required to review the text, analysis, and citations for accuracy and accept full responsibility for content submitted to courts. That concept is already embodied in numerous rules of federal procedure and rules of professional responsibility. The proposed Fifth Circuit certification rule is redundant, unwarranted, and will chill the development of legal practice in adopting valuable new technologies.
Beyond that, work product is infringed and unintended consequences ensue. The rule casts generative AI as a rogue tool, something to be feared and avoided. While the impact on federal appellate lawyers will be minimal, the more insidious impact will be downstream: the possible cascade of orders that the Fifth Circuit’s endorsement will inspire in federal and especially state trial courts, where the use of generative AI is desperately needed rather than to be feared.
The Outsized Commotion Over the Mata Case
Concern over the accuracy of content submitted to a court is certainly justified. Lawyers have always had duties of competence and candor as part of the duty of trust owed to the court. Regrettably, there were a couple of early cases, the most notable of which is Mata v. Avianca, in which ChatGPT generated a brief containing counterfeit law and fabricated supporting cases, resulting in sanctions. These early cases spurred overreaction on the part of judges, courts, and bar associations.
The Mata case came early in the public availability of generative AI products, an industry that is still in its infancy and is rapidly developing, evolving, and resolving many of the issues of accuracy and reliability. More importantly, Mata is about lawyers who misused a tool, failed to check what was submitted, and then doubled down on the work product in question rather than admitting the inaccuracies. The case is not an example of widespread misuse and abuse of AI. It is a single case, in a single brief, by a lawyer who tried to cover up his error. Lost in the outsized commotion over Mata is the fact that the system worked. Opposing counsel called out the discrepancies, and the judge correctly handled the matter, including sanctioning the offending counsel to deter recurrence of the problem. The case quickly gained notoriety, and while its widespread exposure may have warned others who were unaware of the danger of generative AI hallucinations, the case has become a “poster child” of generative AI run amok, leading to an outsized and unnecessary regulatory response.
The analysis by Judge Paul Grimm and Maura Grossman of judges’ overreaction to the Mata case is instructive. Grossman, Grimm, and Brown, IS DISCLOSURE AND CERTIFICATION OF THE USE OF GENERATIVE AI REALLY NECESSARY?, Judicature, Vol. 107, Issue 2 (Oct. 2023) (author’s copy of Aug. 11, 2023), found at https://www.jdsupra.com/legalnews/is-disclosure-and-certification-of-the-2071688/. As succinctly stated by the authors:
… a likely unintended consequence of these standing orders and practice directives is to impede legal innovation and access to justice. The legal profession is already sufficiently risk averse and technologically backward. These orders will serve to chill the use of technology that could not only enable unrepresented parties to access the justice system but also reduce the time and cost for those who can afford representation.
Grossman, Grimm, and Brown, IS DISCLOSURE AND CERTIFICATION OF THE USE OF GENERATIVE AI REALLY NECESSARY?, Judicature, Vol. 107, Issue 2 (Oct. 2023) at p. 8.
I agree with them on all counts.
Lawyers Are Understandably Risk Averse
I can vouch for the risk-averse character of lawyers and judges. As a former judge and civil litigator, I have been teaching judges and lawyers about eDiscovery, law, and technology for more than 20 years. The biggest obstacle for my audience of judges and lawyers is accepting change and learning what is necessary to get the most out of the best new tools and technology available. I particularly recall a keynote address by Craig Ball at the annual University of Florida eDiscovery Conference a few years ago in which he called the profession to task for its reluctance to embrace eDiscovery while displaying a photograph of a well-dressed lawyer with his head in the sand. Harsh? Maybe. But true enough.
I understand the inertia. I was busy practicing law when Westlaw introduced computer research to law firms, the personal computer came on the scene, paper evolved to digital, and the internet and cloud computing changed things once again. It is tough for professionals in a deadline business, already working hard, to take the time and effort to learn something new and take the leap to use it when their reputation and livelihood are on the line. In addition to being risk averse, lawyers as professionals develop skills and methods for getting the job done, and they are reluctant to vary from what works for them. But in the case of generative AI workplace tools, do it they must, or they will be left behind and they and their clients will suffer. It is incredibly important not to deter lawyers (and even pro se litigants)[1] from using products that can safely and effectively improve their work product and increase efficiency.
Legal work as a whole requires hard work and brick-by-brick analysis and creation of work product. Tried and true methods are difficult to set aside. The problem with the inertia that comes from defending what has worked in the past is that trying new tools and methods is the best way to improve. Legal work is a sublime milieu for the power of generative AI tools, which can be used to sharpen communication, subtly adjust tone, brainstorm alternative paths, summarize and organize large amounts of data, test the logical strength of work, and provide checklists to ensure thorough coverage. All this can be done in an incredibly short time while preserving tried and true methods. AI tools complement existing skills and methods and can develop fresh ones. It takes an understanding of the products, but this can all be done safely and ethically.
Undue Stigma and Other Dangers of Overbroad Rules
The proposed Fifth Circuit form will add to the growing body of rules and regulations that will have a chilling effect on the use of AI. It needlessly stigmatizes the use of generative AI in an overly general fashion. Generative AI is employed in many products other than publicly available models like ChatGPT. Many concerns over AI products are based on early models, which have been modified and will be improved further. See Hon. Xavier Rodriguez, Artificial Intelligence (AI) and the Practice of Law, 24 The Sedona Conference Journal at p. 794 (Sep. 2023). As the same three learned authors deftly stated:
We do not believe the courts that issued standing orders and practice directives intended to sow chaos, but the result has been a lack of clarity. There are many different GenAI and other AI technologies and some of the orders are not explicit about what technology use needs to be reported… .
Most search engines and word-processing systems will soon embed LLMs that will render GenAI ubiquitous in the daily tools that all litigators use. Rules of civil procedure should be technology neutral and should not have to be revised with the introduction of each new technological development. No one can predict what the legal technology landscape will look like two years from now. [footnotes omitted]
Grossman, Grimm, and Brown, IS DISCLOSURE AND CERTIFICATION OF THE USE OF GENERATIVE AI REALLY NECESSARY?, Judicature, Vol. 107, Issue 2 (Oct. 2023) at pp. 7-8.
I wholeheartedly join Grimm, Grossman, and Brown in urging against lack of clarity and overbreadth in generative AI regulation.
Ironically, under the Fifth Circuit proposal, using a tool that improves the quality of a brief may have the opposite effect of stigmatizing the content in the eyes of judges or opposing counsel. Upon disclosure of entirely proper use of AI, will opposing counsel argue: “Who are you going to believe, judge, a computer or a human like me?” Judges may subconsciously discount submissions that admit AI use, or lawyers may shy from using AI out of fear that a judge will discount the work. Most importantly, will a chilling effect deter use and deprive clients of the most effective, cost-efficient, and persuasive representation available?
The objectives of the proposed rule change could be accomplished by relying on the professionalism of the responsible attorney rather than requiring disclosure of how work was done. A further irony in adopting a court rule on use of generative AI is that human review in a generic sense is always required and may never be failsafe. Responsible lawyers are already obligated to ensure that the humans who generate or review work product submitted to the court are qualified and capable of creating the final product.
An acerbic comment from one of my student judges in an AI program says it all: “You are telling us lawyers are responsible to check the work generated by computers. Every day we get briefs and pleadings created by paralegals that the lawyers never check.” As a judge, I personally trusted and relied upon lawyers who certified accuracy to me. But I agree with the core point of the commenting judge that all work product a lawyer submits under certification, whether generated with the assistance of a computer or another human, must be checked for accuracy by a qualified person.
Trust the System and Existing Safeguards
The judge has an obligation and role in uncovering erroneous authority. When I was on the bench and a decision I made turned on a contested case authority, I read the case rather than relying on what the lawyers told me was in the case. Context and accuracy of a deciding case were my responsibility if I was signing an order.
Early in my career as a trial lawyer, I had a lesson on the challenges that judges face. I had a temporary injunction hearing called on short notice for a new, potentially lucrative client for my firm. As a second-year associate, I studied my tail off for the hearing, reading and re-reading the cases and fashioning my argument. My opponents were two experienced trial lawyers, and both had stellar reputations. The judge was so impressed that those two lawyers were in his courtroom together that he mentioned it when they walked in. I was an unknown. During our final arguments, one of the lawyers quoted a key case for their position, but he read only the West headnote, which made it appear conclusively that my case was dead in the water. When I got up to respond, I brought the Southern Reporter up to the bench and showed the judge the rest of the story from the actual text of the case, which distinguished our facts from the law in the headnote.
In that case the system worked, and justice was served. Whether the opposing lawyer’s misinformation was intentional, the product of laziness, or a failure to carefully review the work of an associate who pointed out the law in the headnote is beside the point. Misleading information can come to the judge with many faces, and I submit that looking differently at briefs that may contain some information generated, corrected, or curated by generative AI is the least of our problems. Judges and lawyers should be no more in fear of a misstatement of law by a generative AI tool than of the words of a human. The system can work it out, and we do not need to invade work product and micromanage the way lawyers do their work to achieve justice. In fact, doing so is likely to have the opposite result of depriving lawyers and their clients of full and fair use of the best tools available, as law firms prohibit the use of generative AI and courts and bar associations discourage its use through stigmatizing rules.
For years as a lawyer I dealt with specious arguments, facts, and law from the other side. As a judge, I expressly demanded a high degree of professionalism in my court, and the lawyers rose to the standard. Lawyers who did not meet that standard were dealt with by me, by opposing counsel, or by the wrath of the jury. In Mata, the judge dealt with the lawyer who was not up to standard. The system works.
What I found to be a much more common problem while on the bench was the wide range of ability among lawyers to effectively communicate the client’s best case to me, the jury, or even the opposing side. Regrettably, through inability, time constraints, or laziness, more often than should have happened, clients’ cases were shortchanged. My passive demeanor on the bench belied the thoughts in my head about the shoddy preparation, tactics, argument, or organization of cases by one, and occasionally both, sides of a case. The irony of the fear and rush to control AI in legal circles is that generative AI could help many a lawyer present a case better, more efficiently, and more persuasively. Clients and their lawyers would be disserved by discounting and discouraging the use of generative AI.
More Unintended Consequences
As more rules and guidelines are proposed and adopted, other courts and judges will be inclined to adopt the same or similar measures. Once one court adopts a rule, others are bound to follow suit. Within days after the Mata case hit the news, Judge Brantley Starr in Texas adopted a mandatory requirement regarding certification of AI. Within weeks, other judges in the U.S. and Canada followed suit, and the requirements evolved in some cases to more restrictive language. See Grossman et al., Judicature, Vol. 107 at 4-7, found at https://www.jdsupra.com/legalnews/is-disclosure-and-certification-of-the-2071688/. With a court as important and influential as the Fifth Circuit, we are bound to see other state and federal courts, judges, and bar associations following suit. It is imperative that the proposal be carefully tailored to meet the true risks of generative AI, because “a likely unintended consequence of these standing orders and practice directives is to impede legal innovation and access to justice.” Grossman et al., Judicature, Vol. 107 at 8. Those who come first have the added responsibility to do no undue harm as they seek reasonable restraints and guardrails. Otherwise the chilling effect broadens.
Beyond the courtroom, the ripple effect of making generative AI a dirty word for legal work will dissuade its use for entirely safe purposes in the law office. Using generative AI for more effective emails, summarizing documents, investigation and eDiscovery, communicating advice to clients, firm promotion, administration, volunteer work, education of lawyers and staff, and preparation of presentations at CLEs and Inns of Court, among much else, will be hindered or delayed as lawyers are bombarded with reasons not to use AI products by judges, bar associations, and even clients who are caught up in the perceived downsides of AI. A day does not go by that I do not learn of another organization or firm that prohibits the use of AI.
Safe use of generative AI is already within our grasp if we make the effort to reach for it. Problem areas are being addressed by advances in AI technology every day, and smart entrepreneurs and organizations are finding entirely safe and effective ways to incorporate and leverage the power of AI while avoiding the problems in the naked public versions. Regrettably, safe and effective products are being tarnished by the reaction to rare examples of misuse of early models by uninformed lawyers.
I do not discount the risks of using generative AI, and lawyers who use it carelessly or lazily do so at their peril. But those who learn its power and the ways to use it safely and effectively will have a better opportunity, right now, to do amazing work. The practice of law thrives on preparation, logic, effective analysis, organization, and clear communication of relevant facts and law. Generative AI, for all its perceived faults, is actually an ideal tool for every one of those aspects of lawyering, in and out of court.
Conclusion
Use of generative AI will be a controversial leap of faith for lawyers who are busy with their day-to-day deadlines and comfortable with their current tools and methodology. Current products are in their infancy and are rapidly developing and changing. There will be a learning curve in the skilled and ethical use of AI. But the safety of the products can be secured, and their value in achieving justice cannot be overlooked.
There are challenges and potential pitfalls when generative AI is not understood and safely used by lawyers. In the effort to provide laudable and necessary guardrails for the use of generative AI models by the legal community, judges, organizations, and regulators must be mindful of the potential for overbreadth as well as the unintended but real consequence of discouraging and impeding the use of valuable tools essential to efficient and effective access to justice.
Leaders and courts should adopt the rules and guidelines that are necessary to safeguard justice, but please do not needlessly choke the golden goose and restrain the inevitable move to more productive technology in law practice.
December 5, 2023 © Ralph Artigliere ALL RIGHTS RESERVED (Published on edrm.net with permission.)
NOTE: Generative AI products were used to help draft and strengthen this article per EDRM GAI and LLM Policy.
[1] Pro se litigants pose greater challenges for judges and court administration. In addition to alerting pro se litigants to the dangers of faulty AI submissions, courts must recognize that generative AI may allow bad actors, vexatious litigants, or uninformed pro se parties to flood the court with papers that lack a valid legal or factual basis, or with repetitive, burdensome, and unwarranted motions in a valid lawsuit. This wastes the time and resources of the court system and court staff, especially in lower courts and small claims. This is a different issue, untethered from the question of ethical and proper conduct of lawyers, and is beyond the scope of this article. Suffice it to say, pro se litigants could benefit from the use of generative AI products, and the access to justice issue is relevant to restrictions imposed on them as well.