[Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work.]
In a world increasingly influenced by artificial intelligence, the legal profession stands at a crossroads. Lawyers must adopt the new AI tools and quickly learn to use them effectively and safely. That means learning the skill technologists call “prompt engineering.” Fortunately, that just means word engineering, the art of knowing how to talk to ChatGPTs. The new generative AIs are designed to be controlled by natural language, not computer code. This is the easiest kind of engineering possible for lawyers. The precise use of language is every lawyer’s stock-in-trade, so prompt engineering is well within every lawyer’s capacity.
Smart Computers Are Finally Here to Help Lawyers Do Their Jobs
Ever since lawyers first started using personal computers in the eighties, we’ve eagerly awaited the day when they would get smart. We were often told that computers would soon progress from mere processing units to intelligent assistants. Dreamy promises of artificial intelligence were made, but never delivered. We were stuck for over forty years with dumb computers that could barely catch spelling errors. Finally, a breakthrough has been made. With the advent of generative AI, the long wait is over, and the dream of smart computers is becoming a reality.
The arrival of the new generative AI, ChatGPT, should be greeted by all legal professionals, indeed all computer users, with relief and enthusiasm, but also with a reasonable measure of care. We are finally trading the horse-and-buggy stage of computing for fast-moving cars, and, until you learn how to drive, they can be dangerous.
My Background and Use of AI in the Law
In 2012, I was lucky to have the opportunity to work on the landmark Da Silva Moore case. See Austin, The Da Silva Moore Case Ten Years Later (EDRM 2/23/22) and Austin, A Case Where TAR Wasn’t Required (EDRM 8/9/22). Da Silva Moore established the legality of using a special type of AI, a/k/a “active machine learning,” which is typically referred to in the law as predictive coding. I then began to specialize in this subfield of e-discovery. Thereafter, at Jackson Lewis I supervised the use of predictive coding in thousands of lawsuits across the country, and also taught and wrote about this type of AI. The emergence of truly smart generative AI in late 2022, with OpenAI’s release of GPT-3.5, rekindled my enthusiasm for legal tech. The long wait for smart computers was over. I put all thoughts of retirement aside.
I have been using the new AI tools in my legal practice since late 2022 on a limited basis, and for non-billable research and self-education on a nearly full-time basis. My studies have centered on prompt engineering, the art of talking to ChatGPTs, and my primary guide has been the instruction and best-practices advice provided by OpenAI. See OpenAI’s prompt engineering instruction guide. OpenAI is, of course, the company that created and first released ChatGPT to the world.
My research includes extensive experimentation with GPTs, including my favorite project of seeing how ChatGPT4 could perform as an appellate judge. See, e.g., Circuits in Session: Analysis of the Quality of ChatGPT4 as an Appellate Court Judge. I also researched AI-related policy and security issues by attending DefCon 31 in August 2023 and participating in the AI hackathon. See, e.g., DefCon Chronicles: Sven Cattell’s AI Village, ‘Hack the Future’ Pentest and His Unique Vision of Deep Learning and Cybersecurity. Of course, you may have noticed that I make time to write extensively on generative AI and the law. Writing to share what I learn with my fellow professionals helps deepen my own understanding.
Trying to teach this is also a big help. I have recently given a few lectures, but my biggest teaching work so far has been behind the scenes. I have been “secretly” working on an online instructional program, Prompt Engineering Course for Legal Professionals, which will be based primarily on OpenAI’s prompt engineering instruction guide. The OpenAI guide, although invaluable and authoritative, is very technical and often difficult to understand. The goal of my work-in-progress course is to explain and build upon the OpenAI insights, to make them more accessible to legal professionals, and to show how the six prompt engineering strategies can be applied in legal work. That is the key to empowering any attorney to transform their legal practice with AI.
Once completed, the Prompt Engineering Course for Legal Professionals will be very detailed. It will probably require over twenty hours for a student to complete, and it will include homework and tests. More on all that later, when (not if!) it is finally finished. For now, this blog post offers a short introduction to OpenAI’s six strategies and how they can be applied as a kind of lawyer’s guide to future practice.
Why Is Prompt Engineering Important?
Prompt Engineering (“PE”) is the art of chatting with generative AI to get the answers and guidance you need. It also serves to minimize the errors that are still inherent to AI. PE involves learning how to craft questions and commands that guide large language model AIs like ChatGPT to generate more accurate, relevant, and useful responses. It is, in essence, a new type of wordsmith activity: the precise crafting of clear instructions, or prompts, to which the AI then responds.
The analytical linguistic skills necessary to control AI by prompts should be learned by everyone who uses it, because the quality of an AI’s output depends, at least in part, on the input it receives. Well-designed prompts improve AI performance and minimize misunderstandings and errors. These include over-creative errors, such as making up answers, even case law, with no basis in reality, called “hallucinations” in AI jargon. They also include lesser known but related errors, such as sycophancy, failure to admit the AI does not know the answer to a question posed, and even failure to admit that its prior responses were wrong.
Good prompts help ensure that the AI’s responses are legally sound and reliable. Of course, all legal work by AI should still be verified and controlled by humans. Legal practice cannot be delegated to AI, but AI can be a powerful assistant.
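To make this more concrete, here is a minimal sketch, in Python, of what a clear, specific legal prompt might look like when sent through OpenAI’s Python SDK. The model name, the instruction wording, and the placeholder clause are illustrative assumptions of my own, not a prescribed formula, and any output would still need to be verified by a lawyer.

```python
# Minimal sketch of a clear, specific legal prompt sent through the OpenAI
# Python SDK. The model name, instruction wording, and placeholder clause
# are illustrative assumptions, not a prescribed formula.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

# A vague prompt like "Tell me about this contract" invites a vague answer.
# A clear prompt states the role, the task, the length, the format, and an
# instruction to admit uncertainty rather than guess.
clear_prompt = (
    "You are assisting a commercial litigation associate. Summarize the "
    "indemnification clause quoted between the triple quotes in no more "
    "than 150 words, list each party's obligations as bullet points, and "
    "say 'I do not know' if the clause is ambiguous rather than guessing.\n"
    '"""\n'
    "[indemnification clause text would be pasted here]\n"
    '"""'
)

response = client.chat.completions.create(
    model="gpt-4o",   # illustrative model choice
    temperature=0,    # favor consistency over creativity
    messages=[{"role": "user", "content": clear_prompt}],
)
print(response.choices[0].message.content)
```

The point is not the code, which any chat interface can replace, but the discipline of the prompt itself: role, task, limits, format, and permission to say “I do not know.”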
The Six Strategies of OpenAI’s Prompt Engineering
- Writing Clear Prompt Instructions: The cornerstone of effective interaction with AI is clarity in communication. Lawyers often deal with complex issues that require precise language. By providing clear, specific instructions, legal professionals can guide the AI to deliver targeted and applicable responses, enhancing the quality of legal research, drafting, and analysis.
- Providing Reference Texts: AI models can produce more accurate answers when supplemented with relevant texts. In legal settings, referencing statutes, case law, or legal articles can direct the AI to base its responses on established legal doctrines, leading to more reliable and contextually appropriate answers. A short illustrative sketch of this approach follows this list.
- Splitting Complex Tasks Into Simpler Subtasks: Legal issues are often multifaceted. Breaking them down into simpler, more manageable components enables AI to handle each aspect thoroughly. This strategy is particularly useful in document review and legal research, ensuring comprehensive coverage of all relevant points.
- Giving the Model Time to ‘Think’: While AI doesn’t ‘think’ in the human sense, structuring prompts to simulate step-by-step reasoning can lead to more thorough and reasoned legal analysis. This tactic is akin to guiding a junior lawyer through a legal problem, ensuring they consider all angles and implications.
- Using External Tools: Integrating AI with external tools, like legal databases or current statutes, can significantly enhance the accuracy and relevance of the AI’s outputs. This synergy is crucial in law, where staying updated with the latest legal developments is vital.
- Testing Changes Systematically: Regular testing and refinement of AI prompts ensure that they remain effective over time. This strategy is akin to continuous legal education, where lawyers constantly update their knowledge and skills to maintain professional competence.
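To show how several of these strategies can work together, here is a minimal sketch, again with my own illustrative wording, that supplies a reference text, uses tags to separate the source material from the question, and asks the model to reason step by step before answering. The excerpt, question, and model name are placeholders, not taken from OpenAI’s guide.

```python
# Sketch combining three of the strategies: clear instructions, a reference
# text, and step-by-step reasoning. The excerpt, tags, question, and model
# name are placeholders of my own.
from openai import OpenAI

client = OpenAI()

reference_text = (
    "[The relevant statute or case excerpt would be pasted here, for "
    "example the limitations provision the question turns on.]"
)

system_message = (
    "You are a legal research assistant. Answer only from the reference "
    "text provided between <reference> tags. If the reference text does "
    "not answer the question, say so plainly instead of speculating. "
    "First set out your reasoning step by step, then give a short answer."
)

user_message = (
    f"<reference>{reference_text}</reference>\n\n"
    "Question: Under the quoted provision, when does the limitations "
    "period begin to run?"
)

response = client.chat.completions.create(
    model="gpt-4o",   # illustrative model choice
    temperature=0,
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message},
    ],
)
print(response.choices[0].message.content)
```

Even a skeleton like this makes the point: the lawyer, not the model, decides which sources the AI may rely on and how it must structure its answer.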
The Impact on Legal Practice
Improving prompt engineering skills can make a significant difference in legal practice. By mastering these strategies, legal professionals can:
- Enhance Legal Research and Drafting: AI can assist in drafting legal documents and researching case law or statutes, but it requires precise prompts to generate useful outputs. Lawyers adept in prompt engineering can leverage AI to produce high-quality drafts and research efficiently.
- Reduce Errors and Misinterpretations: Inaccurate AI responses can lead to legal missteps. Effective prompt engineering minimizes such risks, ensuring that the AI’s outputs are dependable.
- Stay Current with Legal Developments: The legal landscape is constantly evolving. Prompt engineering skills, especially using external tools and systematic testing, help lawyers keep up-to-date with the latest laws and judicial decisions.
- Improve Client Services: With AI’s assistance in routine tasks, lawyers can focus more on complex aspects of legal practice, improving overall client service.
- Ethical Compliance and Risk Management: Understanding AI’s capabilities and limitations through prompt engineering is crucial for ethical legal practice and for managing the risks associated with AI use. Never cite a case that only ChatGPT can find!
Enhancing Legal Expertise with AI
In the legal profession, where the precision of language is paramount, the ability to effectively prompt AI can transform how legal analysis, research, and documentation are conducted. By refining prompts, legal professionals can extract nuanced and specific information from legal databases, case law, and statutes far more efficiently than with traditional research methods alone. This not only saves time but also increases the breadth of resources that can be consulted within tight deadlines. Still, you must always check citations, review cases yourself, and verify all work.
Ethical Considerations and AI in Law
The ethical implications of using AI in legal practice cannot be overstated. Misguided or poorly constructed prompts can lead to incorrect legal advice or analysis, raising ethical concerns. We have already seen this in the well-known case of Mata v. Avianca (S.D.N.Y. June 22, 2023), where lawyers were sanctioned for citing fake cases. Also see Artigliere, Are We Choking the Golden Goose (EDRM 12/05/23), in which retired Judge Ralph Artigliere cautions against overregulation that might discourage lawyers from using AI. The answer is lawyer training, not stringent regulations. It is imperative that lawyers become proficient in prompt engineering, to better control the errors and better align AI outputs with legal standards and ethical guidelines. This is vital to maintain the integrity of legal advice and to uphold the profession’s ethical standards.
The Importance of Prompt Engineering to Future Legal Practice
The new GPT AI tools are incredibly powerful, but the quality of their output depends on:
- the clarity of the instructions they receive;
- the context supplied by any reference texts provided;
- the ability to decompose complex tasks into simpler ones;
- having the time to “think”;
- the use of external tools when necessary; and,
- the systematic testing of changes (a simple testing sketch follows this list).
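The last item, systematic testing, lends itself to a simple illustration. The toy sketch below, with hypothetical case names and questions of my own, runs one prompt template against a small set of test questions and flags any answer that appears to cite authority outside an approved list. It is a rough harness for spotting problems early, not a substitute for a lawyer reading the cases.

```python
# Toy sketch of testing a prompt template systematically: run it against a
# small evaluation set and flag any answer that appears to cite authority
# outside an approved list. All names and questions are illustrative only;
# a real harness would parse citations properly, and a lawyer would still
# read every cited case.
from openai import OpenAI

client = OpenAI()

APPROVED_AUTHORITIES = ["Smith v. Jones", "Doe v. Roe"]  # hypothetical cases

PROMPT_TEMPLATE = (
    "Answer the question below citing only these authorities: {authorities}. "
    "If they do not resolve the question, say 'not resolved by the provided "
    "authorities.'\n\nQuestion: {question}"
)

test_questions = [
    "What standard of review applies on appeal?",       # illustrative
    "Is the forum selection clause enforceable here?",   # illustrative
]

for question in test_questions:
    prompt = PROMPT_TEMPLATE.format(
        authorities="; ".join(APPROVED_AUTHORITIES), question=question
    )
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative model choice
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    # Crude heuristic: if the answer contains a case-style citation but none
    # of the approved authorities, flag it for human review.
    has_citation = " v. " in answer
    uses_approved = any(name in answer for name in APPROVED_AUTHORITIES)
    flagged = has_citation and not uses_approved
    print(question, "->", "needs human review" if flagged else "looks ok")
```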
If you learn to use these strategies, you can significantly enhance the effectiveness of your interactions with all GPT models. When strategically crafted prompts go into the AI, gold-standard responses can come out. Conversely, the old computer saying applies: “Garbage In, Garbage Out.” For lawyers, negligent use of AI can lead to a garbage heap of problems, including expensive remedial efforts, lost clients, angry judges (worse than angry birds), and maybe even sanctions.
In these early days of AI, legal ethics and common sense require that you verify all GPT output very carefully. Your trust level should be low and your skepticism high. It may seem like you are chatting with a great savant, but never forget that ChatGPT can be an idiot savant if not fed properly engineered prompts. GPTs are prone to forgetfulness, memory limitations, hallucinations, and outright errors. It may seem like a genius in a box, but it is not. That is one reason prompt engineering is so important: to keep the flattering bots under control. This will no doubt get better in the future, but the attitude should always remain: trust but verify.
Prompt engineering is a critical skill that everyone needs to learn, including legal professionals. Educational programs should make it easier for the profession to move smoothly into an AI future, a future where AI is an integral part of everything we do. We can all be more productive and more intelligent than we are now, and still be safe. Fear not, friends: a little prompt engineering knowledge will go a long way toward ensuring your effective use of AI-enhanced computers. They will finally be smart, super-smart, and for that reason much more fun and enjoyable to have around the office.
You Need to Learn These Prompt Engineering Skills and Not Be Tempted to Just Turn Everything Over to Vendors
In the midst of AI’s rapid evolution, some AI companies are already suggesting that prompt engineering skills will soon be unnecessary. They claim that future software advancements will embed all necessary prompts anyone may need, reducing the user’s role to simply pressing buttons. As tempting as this siren call may be, the promise of future software, often referred to as ‘vaporware,’ is misleading. No software currently exists that can fully automate the nuanced and complex task of prompting effective legal analysis. Lawyers need to embrace the future in a self-reliant manner.
Plus, any approach that attempts to delegate the responsible use of AI to vendors raises critical ethical and practical questions. Can lawyers, bound by a duty of competence, ethically delegate their responsibility for AI use to outside businesses? Does relying on a vendor to filter or conduct AI interactions compromise a lawyer’s duty to their clients?
The reality is that, while AI tools can significantly augment a lawyer’s capabilities, they cannot replace the nuanced understanding and ethical judgment that come with legal training. Delegating the power of tools like ChatGPT to for-profit service companies raises significant ethical issues and risks.
Furthermore, since law is by nature very complex and constantly evolving, AI needs the direct guidance and control of skilled legal practitioners to stay on track. Without a deep understanding of prompt engineering, lawyers may be ill-equipped to do so; they may be unable to guide AI effectively or to critically evaluate its outputs.
Conclusion: Embracing a Responsible AI Future for the Legal Profession
As we embrace an AI-augmented legal landscape, mastery of prompt engineering is not just about efficiency but also responsibility. AI offers immense potential to transform legal practice, but its effectiveness hinges on the quality of our prompts. New educational initiatives are needed to equip the legal community to navigate this new era with confidence, ensuring AI is a boon, not a bane, to the legal profession.
Legal education should help legal professionals to operate the new AI tools themselves. They are too important and powerful to delegate to third parties. With the help of AI, and dedicated educators, everyone, especially wordsmiths like lawyers, can learn prompt engineering. Everyone can learn to use the new smart computers. AI is a great new tool, a powerful thinking tool, that can augment your own intelligence and legal work. It should be embraced not feared. Education about AI is the way forward.
Ralph Losey Copyright 2024 — All Rights Reserved. See applicable Disclaimer to the course and all other contents of this blog and related websites. Watch the full avatar disclaimer and privacy warning here.
Published on edrm.net with permission.
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.