
[EDRM Editor’s Note: EDRM is proud to publish the advocacy and analysis of the Hon. Ralph Artigliere (ret.). The opinions and positions are those of the Hon. Ralph Artigliere (ret.). © Ralph Artigliere 2025.]
This month I celebrated another birthday, an occasion that naturally prompts reflection on time, legacy, and the world future generations will inherit. As a legal educator, I keep returning to a pivotal question: how will the legal system, and the rest of humanity for that matter, manage artificial intelligence, particularly generative AI, as it becomes embedded in everything from everyday tasks to complex problem-solving? Years of teaching judges and lawyers about the intersection of law and technology have given me a clear view of both the promise and the pitfalls presented by generative AI.
I remain concerned that lawyers keep making the same mistakes with generative AI, treating it like an easy button and submitting work product in court that contains unacceptable, uncorrected defects such as cases or holdings that do not exist. I once thought, perhaps naively, that lawyers would quickly learn from cases like Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), in which lawyers were sanctioned for submitting, and then doubling down on, hallucinated citations in a brief created by ChatGPT. That bucket of cold water to the face, and the notoriety and spate of ethical and court guidelines that followed, should have been fair warning that machine-created content must be reviewed and verified by the submitting lawyer before relying upon it.
Lawyers Are Still Missing the Mark
In my experience, the vast majority of legal professionals, when given guardrails and clear expectations, rise to meet or exceed them. That has not occurred here. More and more lawyers have displayed an inability or unwillingness to learn, understand, or heed the pitfalls of generative AI. Nearly two years after Avianca, lawyers, and some of the expert witnesses they retain, continue to make the same bonehead mistake.
Whether this spate of errant behavior in our profession is born of ignorance of the technology or of laziness is irrelevant; both are ethically and professionally unacceptable. The result is not only costs and delays in individual cases. Fear of hallucinated content leads some courts and jurisdictions to overreact, and that overreaction may include overly broad AI bans, which endanger access to courts.
Having done what I can with my limited influence as a legal educator to inform lawyers and judges about the tremendous upside and the dangers of rapidly developing AI technology, I find these continued missteps weighing on my mind of late. By contrast, this month I received some promising news about how younger generations are adjusting to the availability of AI. Without the same legal training or years of professional experience, the next generation appears to be more attuned to ethical AI use than many attorneys.
Refreshing Perspectives from Students
Two experiences sharpened my thinking about AI and solidified my optimism about the way students and young professionals are wrestling with its potential. First, I served as a guest speaker in a university course on computer ethics, led by a neighbor of mine who teaches hundreds of undergraduates each term. Each time I speak, I am struck by the professor's practical and enlightened perspective and by the students' curiosity, insight, and respect for responsible AI usage. The class is impressively structured to help students think deeply about ethical technology use, and my role as a guest speaker is to share real-world examples of how AI is both leveraged and misapplied in legal contexts. We start with the Avianca case and the stumbling lawyers who continue to misuse the technology, and we work toward the good things occurring with AI in the legal profession. Based on their interest and the questions they ask, it is clear that these students realize that AI is here to stay and that professionals and students alike must learn how to use it wisely and ethically.
The second informative experience came from my volunteer work reviewing scholarship applications for a respected organization. The applications are voluminous and the competition is stiff, which increases the importance of the student essays. This year's essay assignment centered on the ethical implications of ChatGPT and other AI text generators in academia and the professions, a timely topic that generated remarkably thoughtful responses. The more than eighty essays I reviewed, written by students ranging from high school seniors to graduate students, revealed a recognition, shared by almost all of them, that AI can boost productivity, deepen creativity, and serve as a valuable study partner when used properly.
What impressed me most was the clarity with which students distinguished between helpful and harmful uses of AI. Many highlighted thoughtful applications, such as creating personalized study flashcards, summarizing intricate material, generating practice questions, and brainstorming project ideas. Others explained how AI helps break down voluminous material and complex concepts when professors and teachers aren't available. Some noted AI's potential to make educational resources more accessible and to help overcome language barriers.
Yet these same students demonstrated remarkable ethical awareness about where to draw the line. They articulated a fundamental principle: AI should enhance human thinking, not replace it. Most recognized that using AI to complete assignments undermines learning objectives and is a form of academic dishonesty. Many expressed concern that overreliance could stunt critical thinking skills and personal growth. Several even pointed out the societal risks of AI-generated misinformation and bias, showing awareness beyond their immediate academic concerns.
What stood out was the nuanced framework students had developed around AI use. They weren’t making simplistic judgments based on the technology itself, but rather evaluating applications based on intent and outcome and backing their analysis with real life examples. One student made the insightful comparison between AI and autopilot systems in aviation—powerful tools that require human understanding and oversight to be used safely and effectively.
These reflections reinforce a principle I stress in my own teaching: AI works best as a collaborative partner rather than a substitute for genuine human thought. The students recognized the value of AI in speeding up research or providing new angles on a difficult topic, but they also drew a line at using AI to complete entire assignments without personal engagement. These are exactly the kinds of distinctions that demonstrate a working ethical compass around emerging technology.
And in my study of the ethical use of AI by lawyers, it is easy to see that these students' understanding far exceeds that of the numerous lawyers who have bungled the use of AI by submitting briefs in court without verifying their content. That encourages me about the future but disappoints me about the state of our profession. Why do sophisticated lawyers fail to grasp ethical limits that students as young as high school seniors are mastering with apparent ease?
The Way Forward in Academia and the Legal Profession
Some student essays suggested innovative educational approaches: integrating AI literacy into curricula, establishing clear guidelines for appropriate AI use in assignments, and designing assessments that evaluate critical thinking in ways that can't be replicated by AI tools. These suggestions highlight that students aren't just passive consumers of this technology; they're actively thinking about how to shape its role in society. Law firms and continuing legal education programs can do the same by actively training lawyers on responsible AI usage instead of imposing blanket bans or ignoring the issue.
These thoughtful suggestions from students signal that, moving forward, we all share a responsibility for guiding every generation in navigating AI's possibilities while avoiding its pitfalls. Educators can foster discussions about where AI assistance ends and original thought begins, providing examples of how to incorporate AI tools responsibly. Instead of banning AI use, teachers should permit responsible and effective use of the technology for practical educational experiences.
Those of us who teach professionals must demonstrate ethical use of AI in real-world situations, for instance, using AI to draft preliminary content but making sure to cite, check, and refine everything that goes public. All human users must be reminded of their ultimate responsibility for oversight and verification of output. Those who create generative AI platforms for legal drafting should focus on partnership in creativity rather than offering applications that produce finished end content like briefs and court orders. Similarly, students should be taught to approach AI as a supportive learning companion rather than a shortcut.
Conclusion: Hope for the Future
While the opinions I encountered through classroom discussions and scholarship essays came from academically motivated students, the depth of their grasp of AI's pros and cons boosted my optimism about the future. AI and machine learning will continue reshaping academic, professional, and social life. The question is not whether these technologies will dominate our world, but whether we will use them wisely. Thankfully, younger generations are already receiving practical guidance in computer ethics from enlightened teachers and professors.
Ultimately, these experiences showed me that many young people are approaching AI with open minds and reasoned caution—a perspective that, if nurtured, will help them harness technological innovation for the greater good. While challenges remain, the thoughtful engagement I’ve witnessed is a strong indicator that future generations won’t just adapt to our AI-driven world; they’ll help shape it in ways that preserve human creativity, integrity, and collaboration.
Both educators and professionals must harness that optimism to enact tangible changes, ensuring that the lessons of thoughtful experimentation, ethical vigilance, and personal accountability that younger generations are learning about AI actually guide the rest of us forward. I encourage my peers in the legal profession, and in every field, to follow the lead of these students: leverage AI wisely, but always confirm, verify, and think critically.
Indeed, that is how we ensure the future looks bright.
March 19, 2025 © Ralph Artigliere. ALL RIGHTS RESERVED (Published with permission.)
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.