Avoiding the Third Rail of Legal AI: Don’t Let the Machine Think for You

Why professional responsibility demands we use AI as a tool—not a substitute—for legal thinking and decision-making.

By Hon. Ralph Artigliere (ret.)
Image: Hon. Ralph Artigliere (ret.) with AI.

[EDRM Editor’s Note: EDRM is proud to publish the advocacy and analysis of the Hon. Ralph Artigliere (ret.). The opinions and positions are those of the Hon. Ralph Artigliere (ret.). © Ralph Artigliere 2025.]


AI is transforming legal practice at breakneck speed. Generative AI tools promise unprecedented efficiency using machines that can digest complex information and deliver polished, persuasive output in minutes rather than hours. But for all their power, these tools harbor a hidden danger that threatens the very foundation of legal practice.

The danger isn’t hallucinations or factual errors. Those risks, while real, can be managed through careful review. The real threat is far more subtle: the temptation to let AI do our thinking for us.

This is the third rail of legal AI adoption. Like the electrified rail that powers a train but kills those who touch it, the impulse to hand over substantive legal work to machines risks short-circuiting the human judgment that must guide justice. When we let AI draft our pleadings, write our orders, or generate our legal conclusions, we don’t just risk professional embarrassment; we risk losing the very skills that make us effective advocates and fair arbiters.

This is the third rail of legal AI adoption. Like the electrified rail that powers a train but kills those who touch it, the impulse to hand over substantive legal work to machines risks short-circuiting the human judgment that must guide justice.

Hon. Ralph Artigliere (ret.)

The Hidden Risk: When AI Becomes the Decision-Maker

One of the most essential building blocks for responsible AI use in law is resisting the urge to let the machine do the thinking. Lawyering and judging are deeply human professions. Precision, nuance, empathy, creativity, and judgment born of experience are not just helpful—they’re indispensable.

But therein lies the tension. Judges and lawyers operate under immense pressure: tight deadlines, high caseloads, and the constant demand to do more with less. Add to that the all-too-human temptations of profit and convenience, and the appeal of a fast, automated “solution” becomes hard to resist. Yet in law, shortcuts come at a cost—and professional ethics must always override convenience and efficiency.

When generative AI appeared on the scene with promises of speed and simplicity, many legal professionals rushed in—often without the diligence or training required to use these tools safely. The hard truth is this: using AI well requires time, effort, and discernment. The real danger lies in pressing the easy button—letting AI deliver the answer instead of guiding us toward one. We saw that in the Mata v. Avianca debacle, where hallucinated cases made their way into court filings. Regrettably, too many lawyers have followed the same path.

That’s the third rail on this runaway train of AI adoption: the temptation to let AI generate final work product—pleadings, orders, sentencing decisions—without critical human authorship. Like the electrified rail that powers the train but kills those who touch it, this temptation risks short-circuiting the human insight that must guide justice.

For years, I’ve warned judges and lawyers against handing over the pen. The risks go far beyond hallucinations (which, while real, can be managed through review and source verification). The deeper threat is the erosion of thoughtful legal reasoning—the kind that draws on lived experience, contextual judgment, and moral discernment. These are human responsibilities. Machines can’t carry them.

The deeper threat is the erosion of thoughtful legal reasoning—the kind that draws on lived experience, contextual judgment, and moral discernment. These are human responsibilities. Machines can’t carry them.

Hon. Ralph Artigliere (ret.)

And let’s be honest: AI tools are fast. That’s part of their appeal. But speed without judgment breeds mistakes—and when those mistakes happen, the machine isn’t accountable. The lawyer or judge is.

The Real Costs of Delegating to AI: Loss of Learning, Judgment, and Ownership

Another risk of over-relying on AI in professional work is more subtle—but no less consequential: the erosion of learning and experience that shape good judgment. When machines do too much of our thinking and writing, we risk losing the trial-and-error process that builds current awareness and future competence.

A recent study from the MIT Media Lab, Your Brain on ChatGPT, warns of the potential neural and behavioral consequences of using AI in academic writing. See Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, arXiv, June 10, 2025, found at https://arxiv.org/abs/2506.08872. While media headlines exaggerated its findings with claims of “brain damage,” Professor Ethan Mollick offered a more grounded perspective:

Our fear of AI ‘damaging our brains’ is actually a fear of our own laziness. The technology offers an easy out from the hard work of thinking, and we worry we’ll take it. We should worry. But we should also remember that we have a choice. Your brain is safe. Your thinking, however, is up to you.

Mollick, E., Against “Brain Damage”: AI can help, or hurt, our thinking, One Useful Thing, July 7, 2025, found at https://www.oneusefulthing.org/p/against-brain-damage.

Mollick is right. If we treat AI as a co-intelligence—used deliberately and reflectively—it can elevate our work. But if we surrender to the easy-button mentality, we short-circuit our own growth.

This is not just an issue in legal practice. It shows up clearly in education, where we’ve long seen the difference between teachers who push students to think versus those who simply give them the answers. The best legal educators resist spoon-feeding. They challenge students through the Socratic method, ambiguous hypotheticals, and real-world exercises that force them to engage, reflect, and own their conclusions.

As an instructor in judicial education, I face this same temptation. With so little time and so much ground to cover—on topics like eDiscovery, AI, and emerging tech—it’s tempting to just hand out answers. But real learning comes when we push judges to engage directly with unfamiliar material. I’ve seen it work best through practical exercises and demonstrations like those by my frequent co-teacher, Professor Bill Hamilton, whose use of colored boxes to teach precision and recall is both clever and unforgettable. It takes courage to put judges in the hot seat with tough questions, but that’s where meaningful learning happens.

With so little time and so much ground to cover—on topics like eDiscovery, AI, and emerging tech—it’s tempting to just hand out answers. But real learning comes when we push judges to engage directly with unfamiliar material.

Hon. Ralph Artigliere (ret.)
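For readers new to these eDiscovery measures, a brief hypothetical may help (the numbers are illustrative, not drawn from Professor Hamilton’s exercise). Precision asks how much of what a search retrieves is actually responsive; recall asks how much of what is responsive the search actually retrieves:

precision = responsive documents retrieved ÷ total documents retrieved
recall = responsive documents retrieved ÷ responsive documents in the collection

If a search returns 100 documents and 80 of them are responsive, precision is 80/100 = 80%. If the collection contains 200 responsive documents in all, recall is only 80/200 = 40%: a precise search can still miss most of what matters.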

Fly Fishing and the Art of Legal Judgment: This principle of doing the hard work yourself applies beyond the classroom. It reminds me of teaching my granddaughter to fly fish. Once she mastered the basics, she insisted on doing it herself—casting, mending, and presenting the fly her way. I wanted to intervene, to show her how I thought it should be done. But she learned far more by trying, failing, and figuring it out. She became not just a better angler, but a more confident person. That spirit of ownership—of learning by doing—is exactly what lawyers and judges risk losing if they let machines take over too much of the process.

If lawyers and judges delegate the work of writing and decision-making to AI—no matter how powerful the tool—the result is no longer their own. Judges are entrusted to make difficult calls and stand behind them because those decisions reflect their insight, experience, and accountability. The same is true for lawyers, who owe their clients professional judgment, not mechanical output.

Letting AI take the wheel—drafting opinions, crafting sentencing decisions, writing motions—may sound efficient, but it puts technology in a role that only humans should hold. As I often say, self-driving cars are one thing. Self-driving legal professionals are quite another.

Judge Scott Schlegel puts this risk into clear perspective. He acknowledges AI’s value for organizing data and identifying patterns, but draws a firm line at decision-making:

AI can certainly streamline routine tasks, help identify patterns in case law, and organize information. However, we must be absolutely clear about where its role ends. Decision making and accountability must rest with a person who can explain, defend, and bear responsibility for the decision.

Schlegel, S., Even If Technology Gets Every Call Right—Something Still Feels Wrong, July 7, 2025, found at https://judgeschlegel.com/blog/when-ai-gets-an-email-what-digital-workers-in-banking-mean-for-courts-and-law-firms-1.

He’s right. Once we allow AI to determine outcomes—or even recommend them—we begin to erode the empathy, judgment, and personal responsibility that give the legal system its legitimacy. When a machine drafts the outcome, whom do we hold accountable? Trust in our institutions suffers when there’s no clear human hand guiding the process.

I’ll admit: I’m biased in favor of lawyers and judges doing their own work. Throughout my career, whether in a big firm, a small firm, or on the bench, I handled my own depositions, research, and discovery. I had help from excellent staff, but I took ownership of the substance. Technology helped me compete, but it never replaced the core work of thinking, writing, and deciding.

Even today, with far more capable tools at my disposal, I resist the impulse in my work as an educator to hand it off. Restraint matters—especially as the tools grow more powerful. The minute we outsource the hard work of legal reasoning to machines, we risk more than bad outcomes. We risk forgetting what the work is really for.

Embracing AI—With Eyes Wide Open

Let me be clear: I am not anti-AI. Quite the opposite. I use generative AI tools daily—and I find them immensely valuable. But I use them intentionally, on the right tasks, for the right reasons, and in ways that minimize the risk of error or professional embarrassment.

AI can amplify our abilities as lawyers and judges. When deployed with knowledge, discipline, and ethical awareness, it empowers us to be more effective, more insightful, and more efficient. But that promise is only realized when we maintain control—when we treat AI as a co-intelligence, not as a substitute for our professional judgment.

But that promise is only realized when we maintain control—when we treat AI as a co-intelligence, not as a substitute for our professional judgment.

Hon. Ralph Artigliere (ret.)

As Ralph Losey and I argued in The Future Is Now, embracing AI doesn’t mean replacing ourselves. It means working alongside these tools to raise the standard of our work—not to shortcut it. See Artigliere, R. & Losey, R., The Future Is Now: Why Trial Lawyers and Judges Should Embrace Generative AI Now and How to Do it Safely and Productively, 48 Am. J. Trial Advocacy 323 (Spring 2025), found at https://cumberlandtrialjournal.com/the-future-is-now-why-trial-lawyers-and-judges-should-embrace-generative-ai-now-and-how-to-do-it-safely-and-productively/.

Conclusion: Keep Your Hands on the Controls

AI is a powerful tool that can elevate legal practice when used with discipline and wisdom. It can help us research faster, organize complex information, and identify patterns we might miss. But it cannot—and must not—replace the human judgment that gives legal decisions their legitimacy and weight.

The line is clear: AI should enhance our thinking, not substitute for it. When we delegate the core work of legal reasoning to machines, we don’t just risk bad outcomes—we abandon the professional responsibility that defines our calling.

The legal profession stands at a crossroads. We can embrace AI as a co-intelligence that amplifies our abilities, or we can surrender to the easy-button mentality that threatens to erode the very skills that make us effective. The choice is ours, but the stakes couldn’t be higher.

Use AI to think better, work smarter, and see further—but never let it think for you. In a profession built on human judgment, that’s not just good advice. It’s an ethical imperative.


July 11, 2025 © Ralph Artigliere 2025 ALL RIGHTS RESERVED (Published with permission.)
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

  • The Hon. Ralph Artigliere (ret.)

    With an engineering foundation from West Point and a lengthy career as a civil trial lawyer and Florida circuit judge, I developed a profound appreciation for advanced technologies that permeate every aspect of my professional journey. Now, as a retired judge, educator, and author, I dedicate my expertise to teaching civil procedure, evidence, eDiscovery, and professionalism to judges and lawyers nationwide through judicial colleges, bar associations, and legal education programs. I have authored and co-authored numerous legal publications, including the LexisNexis Practice Guide on Florida Civil Trial Practice and Florida eDiscovery and Evidence. My diverse experiences as a practitioner, jurist, and legal scholar allow me to promote the advancement of the legal profession through skilled practice, insightful analysis, and an unwavering commitment to the highest standards of professionalism and integrity. I serve on the EDRM Global Advisory Council and the AI Ethics and Bias Project.
