[Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work.]
Imagine standing on a razor-thin line—one step forward, and you unlock unprecedented legal capabilities; one misstep, and you fall into a chasm of costly errors. This is the reality legal professionals face as they begin to rely on AI. While AI is already revolutionizing the profession, its performance can be dangerously unpredictable—a genius at some tasks, a fool at others. So how can we protect ourselves? The answer isn’t just a matter of convenience; it’s a matter of survival in a world where AI mistakes are costly, and refusing to adapt leads to obsolescence.
Artificial intelligence (AI) is transforming the legal industry, offering unprecedented efficiencies while posing new challenges. Two recent scientific studies, Larger and More Instructable Language Models Become Less Reliable and Navigating the Jagged Technological Frontier, highlight a critical issue: as AI systems grow more powerful, they remain prone to errors, especially in tasks just beyond their capabilities. One article calls this the jagged frontier, and both note that the zig-zag edge of AI’s capabilities is counterintuitive. AI and its jagged frontier present both opportunities and risks for legal professionals. It is imperative that we navigate AI’s unpredictable nature, ensuring the technology is used to augment, not compromise, legal practice.
The Jagged Frontier: Understanding AI’s Capabilities and Limitations
The term “jagged frontier” refers to the uneven capabilities of AI, where some tasks fall within AI’s ability to excel, while others lie beyond its reliable performance range. This concept was prominently discussed in both Zhou, L., Schellaert, W., Martínez-Plumed, F. et al. Larger and more instructable language models become less reliable. Nature (2024), and Fabrizio Dell’Acqua, Edward McFowland III, Ethan Mollick, et al. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. (Harvard Business School Working Paper, No. 24-013, September 2023.)
The Larger and more instructable paper observed that as LLMs grow in size and receive more human alignment, such as reinforcement learning from human feedback (RLHF), they become more capable in most areas, but not all. In some unpredictable areas they may become more prone to errors, especially on questions that seem straightforward to humans. An example given in the paper is simple addition. Another example, not mentioned in the paper but well known in AI circles, is ChatGPT’s inability to say how many “r”s are in the word strawberry. For technical reasons having to do with how multi-character tokens, rather than individual letters, are used to build words, it cannot simply count the “r”s. As of this writing, it still answers that question with two.
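The contrast is easy to see in ordinary code. A program works on individual characters, so counting letters is trivial; an LLM works on multi-character tokens, which is why a model can stumble on a question any one-line loop answers instantly. A minimal Python sketch:

```python
# Ordinary code sees individual characters, so counting letters is
# trivial. LLMs instead process multi-character tokens (roughly,
# chunks like "straw" + "berry"), which is one reason a model can
# miss a question this simple.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # prints 3
```

The correct answer, of course, is three, which is exactly the kind of simple, human-intuitive task the researchers found sitting on the wrong side of the jagged frontier.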
Yes, today’s generative AIs, even ChatGPT, can unexpectedly be a fool, not a genius. So walk before you run along the jagged edge, and always be careful: AI can hallucinate too. Creativity and How Anyone Can Adjust ChatGPT’s Creativity Settings To Limit Its Mistakes and Hallucinations (e-Discovery Team, 7/12/23); Experiment with a ChatGPT4 Panel of Experts and Insights into AI Hallucination (e-Discovery Team, May 21, 2024).
The paper Larger and More Instructable Language Models Become Less Reliable shows what many users had already discovered on their own: as models grow more advanced, they paradoxically become more prone to errors in some simple, human-intuitive tasks that are hard to predict. These models might generate highly competent results in complex, creative tasks but fail at simple arithmetic or basic logic. This unpredictability becomes dangerous in legal work, where any error can have significant consequences. It is unknown whether this reliability problem will continue with even larger scaling, better internal designs, and improved RLHF. In the meantime, be careful where you walk; the road ahead may take a sharp turn.
The other paper, Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality, is by researchers at Boston Consulting Group, Harvard, and Wharton. They demonstrated that when AI is asked to perform tasks within its area of competence, within its frontier as they called it, productivity and quality can improve significantly. However, when tasked with projects beyond its scope, beyond its frontier, performance suffers, resulting in errors that degrade work quality.
One of the lead authors of the Navigating the Jagged Technological Frontier paper, Wharton Professor Ethan Mollick, went a little deeper and noted more subtle errors along the jagged edge of unexpected incompetence. He wrote about it in his follow-up paper Centaurs and Cyborgs on the Jagged Frontier. (One Useful Thing, 9/16/23). To quote the second to last paragraph of his article (emphasis added):
People really can go on autopilot when using AI, falling asleep at the wheel and failing to notice AI mistakes. And, like other research, we also found that AI outputs, while of higher quality than that of humans, were also a bit homogenous and same-y in aggregate. Which is why Cyborgs and Centaurs are important – they allow humans to work with AI to produce more varied, more correct, and better results than either humans or AI can do alone. And becoming one is not hard. Just use AI enough for work tasks and you will start to see the shape of the jagged frontier, and start to understand where AI is scarily good… and where it falls short.
Ethan Mollick, Centaurs and Cyborgs on the Jagged Frontier.
Both studies highlight the importance of recognizing the limits of AI in legal practice. Legal professionals cannot afford to fall asleep at the wheel, relying too heavily on AI without proper oversight. The infamous Mata v. Avianca case, where an attorney submitted AI-generated but non-existent citations, is a prime example of what happens when AI’s capabilities are misunderstood or unchecked. For legal professionals, navigating this jagged frontier means knowing when to rely on AI and when to exercise human judgment. It also means adopting “trust but verify,” an expression that has become a truism in the law. See e.g. Panel of AI Experts for Lawyers: Custom GPT Software Is Now Available (6/21/24); Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision (10/04/24).
The Centaur and Cyborg Models of AI Integration
My earlier article, From Centaurs to Cyborgs (4/24/24), expanded on the idea of human-AI collaboration, following two models: Centaur and Cyborg. These two models were studied by the researchers in the Navigating the Jagged Technological Frontier study. They represent different ways of integrating AI into all types of work, including legal work. As I put it in my blog:
Centaurs refers to a type of hybrid usage of generative AI that combines human and AI capabilities. It does so by maintaining a clear division of labor between the two, like a centaur’s divided body. The Cyborgs by contrast have no such clear division and the human and AI tasks are closely intertwined.
Ralph Losey, From Centaurs To Cyborgs: Our evolving relationship with generative AI.
The Centaur Model: In this approach, human and AI tasks are clearly divided. For example, a lawyer might develop a legal strategy, while AI assists in drafting supporting documents. This model is effective for tasks that require human intuition and strategy while leveraging AI’s ability to automate routine tasks.
The Cyborg Model: Here, the integration between human and AI is much deeper. AI augments the human workflow in real-time, contributing to decision-making and problem-solving dynamically. This model is useful in fast-paced environments where continuous collaboration between human judgment and AI processing is necessary. It also requires greater skills and experience on the part of the human operator.
Both models offer valuable insights for legal professionals, as they demonstrate how AI can be used to enhance productivity without compromising quality. However, both models also caution against over-delegation to AI, a mistake that can lead to errors in legal documents, strategic missteps, and ethical breaches.
Implications for Legal Practice
Legal professionals must grapple with the question: How can AI be trusted in high-stakes situations where accuracy is critical? Both the Larger and more instructable study and the Navigating the Jagged Technological Frontier study stress that AI is not yet a replacement for human expertise—it is a tool that must be wielded carefully.
Increased Productivity with AI: AI has the potential to streamline repetitive tasks, such as document drafting, discovery, and legal research, which can save lawyers significant time. When used correctly, AI can increase the speed of legal work, allowing lawyers to focus on higher-level strategy and client interaction.
The Risk of AI Errors: At the same time, AI introduces new risks. LLMs are prone to “hallucinations”—generating plausible but incorrect or non-existent information. In the legal field, such errors can lead to misinformed strategies, faulty arguments, or even ethical violations. Lawyers must remain vigilant and ensure that AI outputs are rigorously verified.
Ethical Considerations: Legal professionals are bound by ethical duties, including competence (Model Rule 1.1), which requires them to understand the technology they use. This includes knowing the limitations of AI. The risk of over-reliance on AI could lead to negligence, as seen in recent cases where lawyers used AI without proper oversight, resulting in sanctions or reputational damage.
Training and Expertise: Training is critical. Lawyers need to understand prompt engineering—the art of crafting precise prompts to minimize AI errors—and the boundaries of AI’s knowledge. This allows them to use AI effectively without falling prey to its limitations. As legal professionals become more adept at using AI, they can move from a Centaur approach (clear task division) to a Cyborg approach (seamless integration), gradually increasing their reliance on AI while maintaining control over the legal process. The path to superintelligent performance will, I predict, come right after the Cyborg stage and use wearable machine interfaces such as watches and glasses, not brain implants.
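Prompt engineering can be as simple as building guardrails directly into the prompt itself, telling the AI what sources it may use and forbidding invention. The sketch below is illustrative only; the template wording and structure are my assumptions for demonstration, not a vetted industry standard:

```python
# A minimal, illustrative prompt template with built-in guardrails.
# The specific wording is an assumption for demonstration purposes,
# not a vetted legal-industry standard.
def build_prompt(question: str, documents: str) -> str:
    return (
        "You are assisting a licensed attorney.\n"
        "Answer ONLY from the documents provided below.\n"
        "If the answer is not in the documents, reply 'Not found'.\n"
        "Do not invent case names, citations, or quotations.\n\n"
        f"Documents:\n{documents}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What is the notice period for termination?",
    "Section 9.2: Either party may terminate on 30 days' written notice.",
)
print(prompt)
```

Constraining the model to supplied documents and explicitly forbidding invented citations are two of the simplest ways to narrow the task back inside the AI’s reliable frontier.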
The Role of Legal Tech Experts
Legal tech experts play a crucial role in ensuring that law firms and legal departments implement AI safely and effectively. They can:
Identify Appropriate AI Tools: Not all AI tools are suited for legal work. Legal tech experts must vet software to ensure it meets the rigorous demands of legal practice, balancing productivity gains with reliability and accuracy.
Develop AI Usage Policies: Legal tech experts can help firms develop policies that govern AI usage, ensuring that lawyers know when and how to use AI in a way that enhances their practice while avoiding risks. For example, policies might include mandatory review processes for AI-generated work, preventing errors from slipping through the cracks.
Educate Lawyers: Ongoing education is essential. Legal tech experts should train lawyers on how to interact with AI, focusing on prompt engineering, error-checking, and understanding the limitations of AI systems. This helps lawyers and their staff become competent supervisors of AI rather than passive users. This is an ethical requirement for all legal professionals.
A Call to Action: Embrace AI Cautiously but Confidently
The scientific research is clear: AI holds tremendous potential to improve legal work, but its unpredictability requires caution. Legal professionals must begin using AI, but they should do so incrementally. By adopting the Centaur model first and gradually progressing toward a Cyborg approach, lawyers can develop the skills needed to navigate AI’s jagged frontier while maintaining the high ethical and professional standards that the legal profession demands.
Start with Low-Risk, High-Reward Tasks: Lawyers should begin by using AI for low-risk tasks, such as contract drafting, discovery, and basic legal research. These tasks are typically repetitive and rule-based, where AI excels. By automating routine tasks, legal professionals can increase their productivity without exposing themselves to significant risk.
Double-Check AI Outputs: Lawyers must treat AI-generated work as a first draft rather than a final product. Every document produced by AI should be reviewed carefully, not only for factual accuracy but also for tone, legal soundness, and strategic fit. The risk of AI “hallucinations” or homogenous, generic output remains high, and only human judgment can ensure quality control. Also beware of the tendency of current models toward sycophancy, where the AI may twist its answers just to agree with you. Worrying About Sycophantism: Why I again tweaked the custom GPT ‘Panel of AI Experts for Lawyers’ to add more barriers against sycophantism and bias (e-Discovery Team, 7/9/24).
Develop AI Literacy: Legal professionals must commit to ongoing training in AI and legal tech. Familiarity with prompt engineering, knowing how to detect and correct errors, and staying current on AI advancements will ensure that lawyers can work efficiently and safely with AI. Competence in this area is now a requirement under Model Rule 1.1 of Professional Responsibility, and failing to develop these skills could be considered negligence in the future.
Collaborate with Legal Tech Experts: Law firms should leverage the expertise of legal technologists who understand both AI’s capabilities and its limitations. These experts can help lawyers develop tailored AI strategies that maximize benefits while mitigating risks. Legal tech experts can also guide firms in selecting the right AI tools and setting up the necessary infrastructure to integrate AI into daily workflows.
Create a Framework for Ethical AI Use: Law firms need clear ethical guidelines for AI use. These policies should outline when and how AI can be used, specify review processes for AI-generated work, and establish protocols for dealing with AI errors. This framework ensures that AI use aligns with legal ethics and professional responsibility, protecting both clients and lawyers.
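One way to make “treat AI output as a first draft” concrete in a review process is to mechanically flag everything that needs human verification before filing. The sketch below pulls citation-like strings out of a draft for manual checking; the regular expression is a rough assumption covering a few common U.S. reporter formats, not a complete citation grammar:

```python
import re

# Rough pattern for a few common U.S. reporter citations, such as
# "410 U.S. 113" or "925 F.3d 1291". This is an illustrative
# assumption for demonstration, not a complete citation parser.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|S\. Ct\.)\s+\d{1,4}\b")

def flag_citations(draft: str) -> list[str]:
    """Return every citation-like string in an AI draft for manual verification."""
    return CITATION_RE.findall(draft)

draft = "As held in Roe v. Wade, 410 U.S. 113 (1973), and later in 925 F.3d 1291."
print(flag_citations(draft))  # prints ['410 U.S. 113', '925 F.3d 1291']
```

A script like this does not verify anything by itself; it simply produces the checklist that a human reviewer must then run against the actual reporters or databases, which is the step the Mata v. Avianca attorneys skipped.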
The Future of Legal Practice: From Centaurs to Cyborgs
The journey from Centaur to Cyborg, and then perhaps to AGI driven super-competence, is not just a metaphor for AI adoption—it represents the future of the legal profession. As lawyers grow more comfortable with AI, the division between human and machine tasks will blur. AI will no longer be just a tool for repetitive tasks; it will become an integral part of legal strategy, research, and decision-making. However, this transition requires careful, thoughtful integration.
The Centaur model offers a safe starting point, where humans and AI collaborate but with clear task separation. This model allows lawyers to retain control while benefiting from AI’s strengths. As legal professionals become more adept, they can move toward the Cyborg model, where AI and human input are deeply intertwined, and AI enhances every aspect of legal work. What comes next is speculation, but it is safe to assume our competence will continue to increase as AI intelligence continues to grow at an exponential rate.
Even in a Cyborg environment, human oversight remains essential. The risks of over-delegation, AI errors, and ethical breaches are always present. The key to success is learning where AI excels and where it struggles and making decisions based on that knowledge. The hybrid human-machine relationship must keep the human legal professionals in charge.
Conclusion: Moving Forward with Confidence and Caution
Legal professionals stand at a pivotal moment in history, where AI is transforming the legal landscape. The studies discussed—Larger and More Instructable Language Models Become Less Reliable and Navigating the Jagged Technological Frontier—offer valuable insights into both the promise and the peril of AI in law.
For the legal profession, the path forward involves embracing AI cautiously but confidently. Lawyers should:
- Begin by using AI in low-risk tasks and gradually expand its role as they become more skilled in managing its outputs.
- Always double-check AI work to maintain quality and avoid errors.
- Invest in ongoing training to remain competent and ethical in the use of AI.
- Collaborate with legal tech experts to develop AI strategies that are tailored to their specific practice areas.
- Establish clear ethical guidelines for AI use to protect both clients and the legal profession.
Call to Action: It is time for the legal profession to take the next steps in AI adoption. Start by integrating AI into your practice with a Centaur approach, dividing tasks strategically and cautiously. As you gain more confidence and competence, move toward the Cyborg model, where AI can become an indispensable partner in your legal work. For those who choose to specialize and develop deep expertise in AI use, there is no limit to what you can do beyond the Cyborg stage and into the next generation of superintelligent AI. But remember: AI is a powerful tool, not a replacement for human judgment. Merge with AI, but retain your identity and control. By staying vigilant, learning continuously, and prioritizing ethical use, legal professionals can harness the full potential of AI while safeguarding the integrity of the legal system.
This transformation may be inevitable, but it requires a thoughtful approach to avoid the many risks that lie ahead on the jagged edge. The future of legal practice will not be about human vs. AI but rather human + AI, a powerful partnership that, when managed correctly, can elevate the profession to new heights.
The time to act is now. Begin your journey from Centaur to Cyborg and beyond!
Want to dive deeper into the themes of this article?
Listen to Ralph Losey’s new podcast, Echoes of AI: Episode 3 | Navigating the AI Frontier: Balancing Breakthroughs and Blind Spots, for an engaging exploration of AI’s advancements and its lingering challenges, including a humorous take on AI mispronunciations.
Ralph Losey Copyright 2024 – All Rights Reserved
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.