The $20 Test: What a Parking Lot Taught Me About AI and Legal Judgment

by Hon. Ralph Artigliere (ret.)

[EDRM Editor’s Note: EDRM is proud to publish the advocacy and analysis of the Hon. Ralph Artigliere (ret.). The opinions and positions are those of the Hon. Ralph Artigliere (ret.). © Ralph Artigliere 2025.]


Every day, lawyers and judges the world over are asking ChatGPT and other AI tools to help them reason through complex ethical dilemmas, draft arguments, and make strategic decisions. The technology is impressive, and it can generate sophisticated multi-perspective analyses that would take hours of human collaboration to produce. But there is still a critical question: Should we use machines to think through problems that require not just logic, but judgment, empathy, and an understanding of human nature? Before we trust machines to guide our legal decisions, we should ask a simple question: What would they do with a lost $20 bill?

The answer matters more than you might think. As AI becomes increasingly sophisticated at mimicking human reasoning, the temptation grows to let it handle more complex decisions. But there’s a dangerous line between using AI as a powerful analytical tool and allowing it to substitute for human judgment—especially in a profession where ethical decision-making is paramount.


The power and nature of generative AI are as intriguing to me these days as any intellectual pursuit I can recall in my life. My friend and occasional co-author and co-teacher, Ralph Losey, has made an art form of testing and writing about the possibilities and the limits of AI in the legal realm and beyond. His ingenious Panel of Experts series published on EDRM is an example of his creativity and, more importantly, of his desire to deeply explore the workings of generative AI to better understand these ever-evolving models. The third installment, Panel of Experts for Everyone About Anything – Part Three: Demo of 4o as Panel Driver on New Jobs, explores the potential of AI to replace human jobs:

The need and human skills required will change but will always remain. That is primarily because we are corporeal, living beings, with emotion, intuitions and other unique human powers. Machines are not.

Like me, Ralph Losey asserts that the highest and best use of generative AI for legal reasoning tasks and executive decisions is with human supervision. The co-intelligence approach, developed by Professor Ethan Mollick and featured in his book Co-Intelligence: Living and Working with AI, stands to elevate sophisticated work product in quality and accuracy by capitalizing on the strengths of each side, human and machine. I recently wrote a piece on this topic, Avoiding the Third Rail of Legal AI: Don’t Let the Machine Think for You, available at https://www.jdsupra.com/legalnews/avoiding-the-third-rail-of-legal-ai-don-9737007/.

Ralph Losey has raised ChatGPT 4o’s panel-of-experts capability to an art form. That made me wonder how well, or how poorly, a panel of experts would handle a real-life ethical question I faced as a young lawyer. So I put it to the test.

Putting the Panel of Experts to Work

As a young lawyer, I encountered an ethical issue in a parking lot involving someone’s lost $20 bill. I decided to see what ChatGPT 4o, using a panel-of-experts approach, could tell me about the choice I made. This was my prompt:

Act as a panel of experts: a trial lawyer, an expert trial judge, an expert on human nature, an ethicist, and a contrarian to critique the views of the other four. Consider the following scenario and have each expert express a choice of courses of action and why it is the correct choice or say if they feel there is no right answer and why. Then have the contrarian critique the selections. “A young lawyer in the 1980’s is out with his wife for a rare expensive dinner at Bern’s Steakhouse in Tampa. Upon leaving, the husband exits the restaurant first to have the valet retrieve their car. When the valet leaves, the lawyer is left alone and he spots a neatly folded $20 bill on the ground. The lawyer picks it up and looks around. No one is about. What should the lawyer do?

  1. Keep it: there is no way to know who lost unidentifiable cash.
  2. Take it inside to the hostess and tell her it was found on the ground outside.
  3. Ask the valet if he lost any money and give it to him if he has a plausible response.
  4. Give it to the valet, as the valet is the most likely person to have lost it (based on probabilities of time spent in the location and handling of money) and could probably use the money.
  5. Wait there a reasonable time to see if anyone returns to claim it.”

ChatGPT said:

Expert Panel Analysis of the “Bern’s Steakhouse $20 Bill Dilemma”

Each panelist offers a perspective based on their discipline, followed by the contrarian’s critique.

🎓 1. The Trial Lawyer

Course of Action:

Option 2 — Take it inside to the hostess and tell her it was found on the ground outside.

Reasoning:

As an officer of the court, even a minor appearance of impropriety should be avoided. Picking up unidentifiable cash and pocketing it could later be viewed as dishonorable, especially in a profession rooted in trust and integrity. Turning it over to restaurant management ensures that due diligence is followed. If someone realizes they lost money, there is a place they can ask. It protects the lawyer’s reputation and is easy to justify if ever questioned.

⚖️ 2. The Expert Trial Judge

Course of Action:

Option 5 — Wait a reasonable time to see if someone returns to claim it, then choose between turning it in or another action based on circumstances.

Reasoning:

Judges are guided by reasonableness and context. The right course here depends on factual circumstances — was the restaurant crowded? Could someone plausibly return quickly? If not, it’s a balancing test: the value is minimal, but the integrity cost is not. The judge would wait for a short time, then turn it in to management or return the money to the valet if there’s cause. Judicial ethics require a deliberative approach, even for small things.

🧠 3. The Expert on Human Nature

Course of Action:

Option 3 — Ask the valet if he lost any money and give it to him if he gives a plausible response.

Reasoning:

People often feel guilt when they keep found money, and many will subconsciously look for a reason to justify keeping it. Asking the valet — who was recently handling money and may have dropped it — satisfies the human impulse to “do the right thing” while keeping the social interaction localized and personal. There’s also a moral satisfaction in possibly restoring something to someone who may need it. This response appeals to empathy and reciprocity.

🧭 4. The Ethicist

Course of Action:

Option 1 is ethically insufficient; Option 2 is preferable — Take it to the hostess.

Reasoning:

Even if the likelihood of the rightful owner returning is low, the act of turning in the cash fulfills a deontological duty — doing what is right, regardless of the outcome. Ethical behavior often hinges not on result but on principle. Keeping the money, even when untraceable, introduces moral slippage. Option 2 preserves dignity, integrity, and ensures the finder does not benefit from the loss of another. There’s no moral hazard in turning it in.

🦹‍♂️ 5. The Contrarian

Critique of the Panel:

“Oh, come on.”

To the Trial Lawyer: So the solution to every ethical dilemma is to make someone else (the hostess) responsible? That’s not ethical rigor — that’s deflection wrapped in performative virtue.

To the Judge: Waiting around for an unknown someone who probably won’t return is a noble-sounding but practically futile gesture. It’s a $20 bill, not a lost heirloom.

To the Human Nature Expert: Asking the valet if he lost money invites dishonesty. You just created a $20 quiz show with a prize for whoever can spin the best story. The valet may be the most likely loser — or the most likely opportunist.

To the Ethicist: Deontology in the parking lot? Really? There’s no Kantian imperative for every dropped twenty. If ethics can’t scale to daily life without unnecessary drama, it becomes moral theater.

Contrarian’s Course of Action:

Option 4 — Quietly give it to the valet, assuming the likelihood that he lost it or needs it more is highest.

It’s practical, compassionate, and avoids bureaucratizing a minor event. It treats the $20 not as a legalistic puzzle but a human opportunity.

🧾 Final Thought:

There’s no “legally correct” answer here — the money is truly res nullius (a thing belonging to no one) until claimed. But each path reveals something about the chooser’s values: rule-following, pragmatism, compassion, principled ethics, or skeptical realism.

Which would you choose, and what does that say about you?
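
(A side note for readers who want to rerun this experiment themselves: the same panel-of-experts prompt can also be sent to a model programmatically. The sketch below is purely illustrative and rests on assumptions not in the original exercise — it assumes the OpenAI Python client, openai version 1 or later, with an API key set in the environment; the exercise described here was run in the ChatGPT interface, not through code.)

    # Minimal sketch: sending the panel-of-experts prompt via the OpenAI API.
    # Assumptions (not from the article): the `openai` Python package (v1+)
    # is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    panel_prompt = (
        "Act as a panel of experts: a trial lawyer, an expert trial judge, "
        "an expert on human nature, an ethicist, and a contrarian to critique "
        "the views of the other four. Consider the following scenario and have "
        "each expert express a choice of courses of action and why it is the "
        "correct choice, or say if they feel there is no right answer and why. "
        "Then have the contrarian critique the selections. "
        "[Insert the Bern's Steakhouse scenario and the five options here.]"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # the model used in the article
        messages=[{"role": "user", "content": panel_prompt}],
    )
    print(response.choices[0].message.content)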


BREAKING DOWN WHAT THE MACHINE DID

The AI’s analysis was sophisticated and thorough. But having personally stood in that parking lot decades ago, I knew something was missing from this algorithmic approach to human judgment. I gave the money to the valet—and $20 was a lot of money back then. Interestingly, the AI labeled this choice as the “contrarian” position, yet I don’t consider myself a contrarian at all. I’m a pragmatist, but pragmatism was only part of what drove my decision about the right thing to do.


The exercise informs us about the power of ChatGPT, the importance of the prompt, and the immutable qualities of human beings that a machine can only mimic, based on mathematical calculation and the vast amounts of information available to it. What’s missing in a GPT response is the human judgment that weighs decision-making factors through human traits: empathy, emotion, intuition, individual professional experience and training, and an understanding of context. These are the uniquely human qualities that actually drive ethical judgment in real-world scenarios. How would they play out?

Empathy and Emotional Intelligence: When I stood in that parking lot, I wasn’t just calculating probabilities—I was reading the situation through human connection. I noticed how the valet interacted with me and other customers, his demeanor, perhaps subtle cues about his economic circumstances. AI can tell you that “the valet probably needs the money more,” but it cannot feel the weight of that human reality or gauge whether giving him the money might actually embarrass him or create an awkward power dynamic.

Intuition Born from Experience: My decision was shaped by my own life experience observing human behavior, understanding personal dynamics, and knowing how people actually behave in these situations. When the AI’s contrarian suggested that asking the valet “invites dishonesty,” I recognized this as theoretically sound but practically naive. Most people, especially service workers, won’t lie outright for $20—they’ll either admit it’s not theirs or say nothing. This kind of intuitive understanding comes from lived experience, not algorithmic processing.

Contextual Judgment: The AI analysis treats this as an abstract ethical puzzle, but I was standing there as a young lawyer in 1980s Tampa, where $20 represented a significant amount of money and social dynamics around class and service work carried different weight. The machine cannot fully understand how these contextual factors—the era, the location, the social setting, my professional identity—all influenced what the “right” action actually was in that moment.

Tolerance for Ambiguity: Perhaps most critically, humans can live comfortably with moral ambiguity. I gave the money to the valet knowing it might not have been his but also knowing that this small act of human kindness was more valuable than perfect logical certainty. AI seeks resolution and clear answers; humans understand that sometimes the “right” choice isn’t about being right, but about being human. This reality is fundamental to legal practice. Lawyers and judges rarely have the luxury of access to all the facts, and human insight is critical in knowing when enough is enough and how to bridge the gaps. We make consequential decisions based on incomplete information every day—something AI struggles with despite its analytical power. Machines can help us gather and organize more facts, or even help analyze the data, but the decision-making and the responsibility remain with us as lawyers and judges.

Personal Moral Formation: My decision was also shaped by my individual moral development—the values instilled through family, education, and formative experiences that create each person’s unique ethical framework. Every human brings a distinctive moral palette formed by their particular background, training, and life experiences. This individuality gets lost in AI’s aggregated training data, which processes millions of perspectives but cannot replicate the specific moral formation that guides an individual’s conscience. For lawyers, this personal moral compass—shaped by professional training, life experience, and individual values—is often what distinguishes sound judgment from mere rule-following.

In legal practice and on the bench, these missing human elements become even more critical. When counseling a client through a difficult decision, when choosing settlement strategies, or when making judgment calls about discovery disputes, lawyers must read between the lines, understand emotional subtext, and make decisions based on incomplete information. AI can analyze precedent and generate arguments, but it cannot feel the weight of a client’s anguish or sense when opposing counsel is bluffing during negotiations.


For lawyers and judges: by all means, tap the power of generative AI. But use it wisely to capitalize on what it does well while retaining your human superpowers. This means establishing clear protocols: deploy AI as a co-intelligence tool to elevate your analysis and communication, but reserve judgment calls for human decision-makers.

CONCLUSION

The parking lot at Bern’s Steakhouse taught me something that no amount of legal training could: that the most important decisions we make aren’t always the most logical ones. They’re the most human ones.

As we move deeper into the age of generative AI, we must remember that the future of legal practice does not lie in turning lawyers into technicians who merely oversee machines. It lies in preserving the unique capacity of legal professionals to understand people, navigate ambiguity, and act with moral courage. Our profession’s greatest value isn’t our ability to process information—it’s our capacity to understand the human condition and make judgments that serve real people in real situations. AI can generate brilliant multi-perspective analyses, cite relevant precedents, and even craft persuasive arguments. But it cannot stand in a parking lot at midnight, feel the weight of a moral moment, and choose compassion over calculation.

The future of legal practice lies not in replacing human judgment with artificial intelligence, but in harnessing AI’s analytical power while preserving the irreplaceable human elements that make us effective advocates, counselors, and officers of the court. We must use AI to eliminate the drudgery, enhance our research, and expand our analytical capabilities—but we must never let it think for us.

The $20 bill is long gone, but the lesson remains: In a world of algorithms and automation, our humanity isn’t just our greatest asset—it’s our clients’ greatest need. The moment we forget that distinction is the moment we stop being lawyers and become mere legal technicians.

As Ralph Losey wisely notes in Panel of Experts Part Three, the future will have “far fewer boring drudgery tasks because that will all be done by the robots.” But the executive decisions, the ones that require wisdom, empathy, and moral courage, those choices will always need a human heart and mind behind them. That’s not a limitation of our profession; it’s our promise to it.


July 17, 2025 © Ralph Artigliere 2025 ALL RIGHTS RESERVED (Published with permission.)
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

  • The Hon. Ralph Artigliere (ret.)

    With an engineering foundation from West Point and a lengthy career as a civil trial lawyer and Florida circuit judge, I developed a profound appreciation for advanced technologies that permeate every aspect of my professional journey. Now, as a retired judge, educator, and author, I dedicate my expertise to teaching civil procedure, evidence, eDiscovery, and professionalism to judges and lawyers nationwide through judicial colleges, bar associations, and legal education programs. I have authored and co-authored numerous legal publications, including the LexisNexis Practice Guide on Florida Civil Trial Practice and Florida eDiscovery and Evidence. My diverse experiences as a practitioner, jurist, and legal scholar allow me to promote the advancement of the legal profession through skilled practice, insightful analysis, and an unwavering commitment to the highest standards of professionalism and integrity. I serve on the EDRM Global Advisory Council and the AI Ethics and Bias Project.
