Legal Professionals Should Embrace Generative AI Now and Join a New Era of Co-Intelligence

By Hon. Ralph Artigliere (ret.)
Image: Hon. Ralph Artigliere (ret.) with AI – Hat Tip to Ralph Losey’s Visual Muse GPT.

[Editor’s Note: EDRM is proud to publish the advocacy and analysis of the Hon. Ralph Artigliere (ret.). The opinions and positions are those of Judge Artigliere (ret.). © Ralph Artigliere, October 9, 2024.]


Generative AI is transforming legal practice, offering tools that legal professionals cannot afford to ignore. This technology is reshaping decision-making and workflow, as I have seen firsthand while teaching judges and lawyers about its potential. In a rapidly evolving legal landscape, those who adapt to AI now will gain a significant advantage in efficiency and accuracy, positioning themselves for success as AI tools become increasingly sophisticated and indispensable.

For the past two years, I have used tools like OpenAI’s ChatGPT 4.0, Microsoft Copilot, Claude, and Lexis+AI daily. This experience has shown me both the tremendous potential and the limitations of generative AI, prompting a deep exploration of its ethical and legal implications. While I have had the luxury of time to study these platforms, I recognize that most legal professionals are too busy to keep up with the rapidly evolving AI landscape. With information and capabilities changing weekly—if not daily—it is challenging to stay current.

Let’s explore why legal professionals should embrace generative AI now, the benefits it offers as well as the risks, and the current state of its adoption.

Less than a year after OpenAI launched ChatGPT in November 2022, many lawyers and firms began using or exploring AI tools, as reported by the ABA’s 2023 Artificial Intelligence (AI) TechReport. A new report from the Association of Corporate Counsel and Everlaw highlights generative AI’s groundbreaking impact in corporate legal departments, along with expectations of further impacts on in-house and outside counsel. Despite the rapid adoption, the ABA survey showed that many legal professionals remained concerned about developing competence in generative AI tools.

Heed Rational Warnings but Beware of the Hype and the Doomsayers

As I teach lawyers and judges about AI, attendees continue to express concerns based on what they hear in the industry media and from bar associations and regulators. Reluctance to dive into such a controversial change in practice and workflow is understandable. AI sounds too good to be true, too dangerous, too hard to master, or, most frequently, too early and not ready for prime time.

Legal professionals are overwhelmed by conflicting information about generative AI. While both exaggerated claims and alarmist warnings exist, the truth lies somewhere in the middle. The key is to balance enthusiasm with caution, adopting AI responsibly while recognizing its real benefits and limitations. Fear and hype should not hinder practical experimentation with AI’s current, tangible benefits.

The Court of Common Pleas in Butler, Pennsylvania, recently mandated in an Administrative Order that all filings include an affidavit disclosing whether generative AI was used—a requirement effective October 21, 2024. This redundant rule overlooks the existing duty of accuracy and highlights misunderstandings about AI’s role in legal work. Such measures can inadvertently stifle innovation, underscoring the need for informed regulation. See J. Kubicki, Lead Memo: Court blunders with a GenAI disclosure form, Brainyacts (Oct. 4, 2024).

Concern over reckless filings is understandable, but there are far better ways to handle the issue that do not place a chilling effect on access to justice through useful tools. Rules and ethical standards need to be measured, understandable, and not overbroad.

On the positive side, Chief Justice Roberts recently acknowledged AI’s transformative impact on legal work, noting that while judges will remain essential, AI will significantly affect trial-level proceedings (2023 Year-End Report). This recognition from the highest level underscores why legal professionals must become comfortable with AI now.

Most ethical guidance on generative AI from responsible bar associations has been measured and quite good. Firmly based on current ethical rules, the guidance from California, Florida, New York, and other state sources cautions lawyers to become proficient in generative AI in order to safely maintain confidentiality and avoid the pitfalls of potential bias. Further, lawyers are required to maintain adequate transparency so clients can participate in decisions that affect the safety of their secrets and confidential information. Most importantly, the rules and guidelines confirm that there is nothing wrong with proper use of generative AI; in fact, lawyers are obligated to be proficient in the technology so they can deliver services to their clients effectively. Measured ethical guidance provides guardrails that lawyers are quite familiar with navigating.

There is no doubt that AI platforms and products are a boon for legal professionals that cannot be ignored. Now is the time to learn what is needed to put the technology to good use. Getting started is not difficult.

Risks Can be Mitigated

Some legal professionals worry about privacy risks, job displacement, or AI-generated errors. As for job displacement, lawyers and eDiscovery professionals who do not adapt to technology that is sweeping into the profession are far more likely to be replaced than those who learn and use it. Law students and aspiring legal assistants or eDiscovery technicians who master AI will enter their respective fields armed with abilities and tools that will be sorely needed as AI proliferates.

Privacy risks and confidentiality are always a concern for lawyers, but lawyers and judges are familiar with maintaining privacy. The methodology for safely using AI can be mastered, just as cloud computing and other technologies were adapted for safe use in law offices and courts.

AI hallucinations—plausible-sounding but incorrect outputs—can be mitigated through proper use and human oversight. As with any legal work, verification is essential. Errors can be avoided through effective prompting and ensuring that AI-generated content is verified by a human supervisor before being relied on.

There are entirely safe applications where experimentation can yield familiarity and comfort with the technology, but many lawyers and judges remain reluctant to try, and those who do may be hiding their successes until using these tools becomes mainstream. Organizations should encourage their employees to use AI where it is ethically and legally possible. Unfortunately, not all organizations have an enlightened and updated understanding of AI. Irrational fear and uninformed limitations on emerging tools inhibit progress toward more effective and efficient workflows.

Human in the Loop

Some concerns about AI missteps are valid. AI comes in many forms, and no platform is a failsafe, one-size-fits-all solution. In the legal realm, precision and specialized knowledge are essential, and lawyers are trained to make fine distinctions about legal precedent and laws. But there is a solution: the ‘human in the loop.’ While AI is powerful, it is not a substitute for human intelligence. Together, human expertise and generative AI can create co-intelligence that enhances decision-making. To achieve this safely and productively, legal professionals must understand generative AI.

Human oversight is crucial. Generative AI enhances, rather than replaces, legal expertise. For example, AI can theoretically draft a brief for you, but the lawyer’s expertise, voice, and verification of content and authorities are essential. Alternatively, the lawyer can draft the brief, and then use AI to check for messaging, logic, consistency, and the strength and influence of arguments. This approach combines the creative input of the lawyer with the analytical power of AI, ultimately improving the quality of the work product.

Keeping humans in the loop to review, refine, and verify AI output—and allowing AI to analyze human drafts—ensures that efficiency is maximized without compromising ethical standards. Lawyers must remain in control, providing human oversight to ensure accuracy, context, and ethical compliance. This ‘human-in-the-loop’ approach allows AI to function as a co-intelligence rather than a replacement.

Search for Reality Over Hype

As for the hype, just recognize that it comes with the territory. As competition grows, companies are racing to market AI products with inflated claims of their capabilities. These platforms are valuable, but legal professionals must remain realistic about their current limitations and potential for error.

Some products may ultimately improve and approach what is promised. For now, do not fall for promises of turnkey products producing end-result research, finished briefs, trial preparation, court orders, or the like. Instead, generative AI can be safely and effectively used right now, under step-by-step human supervision, to improve the efficiency of legal tasks and work product.

Adopt AI at Your Own Pace, but Take Measured, Steady Steps

Every legal professional must determine their own pace for adopting AI, but comfort only comes with practice, so start experimenting now to unlock its value. These tools enhance both the quality and efficiency of legal work across many areas. Start with simple, routine tasks, such as drafting basic correspondence or summarizing documents. These low-risk applications provide a gentle introduction, allowing you to build skills and confidence. By starting small, you lay the groundwork for more complex uses as your comfort grows.

This low-risk introduction can foster familiarity and comfort with the technology. Many, if not most, will be impressed, and some will be amazed and wonder why they did not try it sooner. Then, when future versions of generative AI become standard tools in every law office and courthouse, those who stepped up to the plate now will be miles ahead of the foot draggers.

Generative AI continues to improve daily, but it is not, and likely never will be, a fully automated solution capable of producing legal work without human oversight. These tools won’t replace lawyers or judges; instead, they will empower them to work faster and more effectively. As Owen Morris notes in The Transformative Power of Generative AI in the Legal Field (ABA, Sep. 12, 2023), there is no “easy button” for excellent legal work. The good news is that AI can already be used to significantly enhance many aspects of legal practice, provided that legal professionals approach it with a clear understanding of its limitations. For those who do, the potential applications are both vast and safe.

Employ the Concept of Co-Intelligence with AI but Never Yield Control

Complex legal tasks require precision, knowledge, and accuracy. Humans who partner with generative AI, rather than simply tapping it to produce their work product or do their research, stand to gain the most. Generative AI is weaker than humans when it comes to avoiding bias, so-called hallucinations, and handling tasks that are not routine. On the other hand, generative AI excels in tasks that challenge humans: (1) instantaneously accessing and processing vast sources of information for search, logical analysis, consistency, and criticism; (2) finding the best and most precise term to communicate a thought; (3) analyzing logical flaws; (4) finding missing or inaccurate data; and (5) formatting arguments and thoughts in an effective and consistent structure.

For example, involving generative AI in legal drafting has decided advantages. AI provides immediate feedback, drawing on the data and data-driven insights supplied by the user as well as its massive training data. This enables the consideration of a larger array of facts for analysis, embraces a broader spectrum of perspectives, and achieves this with unprecedented speed and complexity. Therefore, AI’s contribution to the co-intelligence broadens context and increases accuracy, speed, and efficiency.

Professor Ethan Mollick of the Wharton School has an optimistic and realistic perspective on the use of AI by humans as co-intelligence. According to Prof. Mollick, AI can act as a thinking companion that assists in decision-making by prompting us to reflect on our own choices with a richer, broader perspective untethered from the biases and directions inherent in our own minds. See Mollick, Co-Intelligence: Living and Working with AI, Penguin Random House (2024) at p. 49.

Yet Prof. Mollick wisely cautions against simply relying on AI to make our choices for us. Id. In the legal context especially, humans are essential for complete and precise legal analysis and for addressing uniquely human issues like bias, believability, and circumstantial reliability. Assessing the quality of evidence or legal factors, foundation and context, inference and logical connection, and probative value versus prejudice are all sophisticated assessments requiring human input.

With a human constantly in the loop, it is possible to keep a check on the weaknesses of generative AI while capitalizing on the strengths. Look for ways to find valuable, practical uses for this technology by focusing on safe, measured tasks that enhance human work rather than replacing it.

Know Your Co-Intelligence Partner

Ethical and safe use of generative AI derives from knowing the technology well enough to avoid errors and ethical pitfalls. Responsible use entails ensuring client confidentiality and data protection when using AI tools. Always check for bias and accuracy. When case law or statutes are involved, always confirm that the cited material is relevant, controlling precedent, and accurately construed.

With these safeguards in mind, let’s explore some of the practical applications of AI in legal practice.

Current Safe Uses for AI Products

At a recent presentation to state court judges, I presented a number of potential uses of AI for judges and court systems. In Seminole County, Florida, for example, court administrators and the clerk’s office are teaming up to use generative AI to pore through thousands of court pleadings in small claims cases to see which ones invoke the civil rules and which ones do not. This effort resulted from a Florida Supreme Court mandate to identify the presence or absence of the civil rules in those cases. The mandate was a burden on the administration and clerk’s office, and the generative AI program they created quickly and substantially reduced the hours required for the task, with an accuracy rate that far exceeded human review of those documents.

For a recent presentation at an eDiscovery conference, Judge Allison Goddard tested the ability of the generative AI platform Claude.ai to draft a judicial order on three eDiscovery disputes. Using detailed prompts without party names, Judge Goddard provided a factual background, attached the cases she wanted Claude to apply, and asked Claude to apply those legal principles to each discovery dispute in a draft court order as a demonstration for her presentation. Judge Goddard was reportedly impressed by the outcome, as were attendees at the program. See Austin, D., Can AI Write a Case Law Order? One Judge’s Unique Idea: Artificial Intelligence Trends, eDiscovery Today (Oct. 2, 2024).

Judge Goddard found a safe and effective way to test the waters of AI use by judges. An alternative approach for such an exercise would be to provide the platform with a proposed draft order, along with the facts and authorities, and have Claude, ChatGPT, or Lexis+AI check the order for legal sufficiency, logic, and effective communication. Another idea for judges is to prompt the AI platform with well-written prior orders from your own cases, or orders by other judges, as examples of style and form. These approaches ensure that the writer’s voice and direction are applied to the AI product.

Advances in the use of AI in court reporting platforms and tools allow judges and lawyers to quickly identify relevant testimony in depositions and hearings and uncover inconsistencies or support for factual findings. See R. Feigenbaum and G. Stein, AI Tools for Judges, prevail.ai (Feb. 9, 2024). For lawyers who are advancing or opposing expert testimony in litigation, the ability to use generative AI products to summarize an expert’s extensive prior depositions and writings, as well as competing authoritative testimony or writings, in order to find support, inconsistencies, logical flaws in analysis, and other impeaching information makes the work more efficient and effective.

Reliable use cases for generative AI and for platforms powered by generative AI include:

  • Efficient Legal Research: Generative AI can streamline legal research by quickly summarizing case law and statutes.
  • Drafting Documents: Using AI with human supervision to draft routine legal documents, contracts, and briefs, saving time and reducing errors.
  • Improving Writing and Reasoning: Tools that help lawyers refine their writing and supplement their legal reasoning skills.
  • Case Analysis and Prediction: AI can have a role in analyzing past cases to predict outcomes and inform legal strategy.
  • Lay and Expert Witness Assistance: AI can assist in summarizing testimony and, for expert witnesses, can compare past testimony and writings to current testimony.
  • Case Preparation: AI can suggest questions for deposition and trial testimony and prepare sophisticated and effective demonstrative exhibits, timelines, charts, and summaries for trial, mediation, or arbitration.

Benefits of a Measured Approach to AI Use

Immediate benefits flow from using generative AI:

  • Efficiency and Productivity: AI can handle routine, repetitive tasks, allowing lawyers to focus on more complex and strategic work requiring executive thinking and legal analysis.
  • Quality Control: AI is particularly good at spotting inconsistency and can aid in ensuring accuracy and consistency in legal documents and research.
  • Client Service: Enhancing client interactions and service delivery through faster and more accurate responses.
  • Competitive Advantage: Embracing AI early means staying ahead in the legal market by adopting innovative technologies.

Informed Professionals Will Benefit Sooner When Products Improve

These tools are improving every day at an incredible rate. It is hard to believe that ChatGPT burst on the scene less than two years ago. While the early versions intrigued and amazed us, they pale in comparison with the power and utility of the tools available now. Improvement will continue in the robust, competitive market that exists today.

However, waiting for the next best thing before embracing AI is a mistake. Those who use the tools now will come to understand the nuances, drawbacks, and advantages of AI tools, which will put them in a good position to understand the best potential of emerging tools. See Mollick, Co-Intelligence: Living and Working with AI, Penguin Random House (2024) at p. 48. Perhaps more importantly, using the tools improves efficiency and work product right now.

Get Started by Using AI

If you have not engaged with AI yet, try this: open a free OpenAI (ChatGPT) or Claude account and experiment with a harmless but challenging activity, like drafting an email or creating an original bedtime story for your children. To improve your skills, ask the platform for tips on effective prompting and safe use; step-by-step instructions will follow. Then gradually develop your own use cases for AI, and you are on your way.

Conclusion

Generative AI is transforming the legal profession, and those who embrace it today will lead tomorrow. Start experimenting now—take a few minutes to use an AI tool for a routine task. Embrace the innovation that will shape the future of law and position yourself as a leader in this evolving landscape.


October 9, 2024 © Ralph Artigliere. ALL RIGHTS RESERVED (Published with permission.)

Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

  • The Hon. Ralph Artigliere (ret.)

    With an engineering foundation from West Point and a lengthy career as a civil trial lawyer and Florida circuit judge, I developed a profound appreciation for advanced technologies that permeate every aspect of my professional journey. Now, as a retired judge, educator, and author, I dedicate my expertise to teaching civil procedure, evidence, eDiscovery, and professionalism to judges and lawyers nationwide through judicial colleges, bar associations, and legal education programs. I have authored and co-authored numerous legal publications, including the LexisNexis Practice Guide on Florida Civil Trial Practice and Florida eDiscovery and Evidence. My diverse experiences as a practitioner, jurist, and legal scholar allow me to promote the advancement of the legal profession through skilled practice, insightful analysis, and an unwavering commitment to the highest standards of professionalism and integrity. I serve on the EDRM Global Advisory Council and the AI Ethics and Bias Project.
