
[EDRM Editor’s Note: This article first appeared in Science and Technology Law Review, Vol. 26 No. 2 (2025), and is published here with the permission of the authors: Maura R. Grossman & Hon. Paul W. Grimm (ret.). The opinions and positions are those of the authors.]
Between 2014 and 2024, rapid advancements in computer science ushered in a dramatic new form of technology: Generative AI ("GenAI"). It offered seemingly limitless possibilities for creative applications never before imagined. But it also brought with it a darker side: the ability to create synthetic or "fake" text, images, audio, and audiovisual depictions so realistic that it has become nearly impossible, even for computer scientists, to distinguish authentic from fake content. Along with this new technology, new terms have been introduced, including "hallucinations" and "deepfakes." The use of GenAI technology has not been limited to computer scientists and IT professionals. It is readily available on the Internet at little or no cost to anyone with a computer and Internet access. It is no exaggeration to say that GenAI has democratized fraud, and that an ever-increasing amount of content on the Internet is now synthetic or AI-generated. Deepfakes have been used for satire and amusement, but also to humiliate and destroy the reputations and careers of the persons depicted in them, to spread disinformation, to manipulate elections, and to mislead the public. They will most certainly find their way into the resolution of court cases, where judges and juries will face real challenges in understanding the operations and output of complex AI systems and in distinguishing between what is real and what is not.
Grossman, M., & Grimm, P. (2025). Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence. Science and Technology Law Review, 26(2). https://doi.org/10.52214/stlr.v26i2.13890
In this Article, we explore the development of GenAI and the deepfake phenomenon and examine their impact on the resolution of cases in courts. We address the ways in which both known-to-be-AI-generated evidence and suspected deepfake evidence may be offered during trials. We review the research literature regarding the ability of deepfakes to mislead and influence juries, and the challenges with detecting deepfakes that judges, lawyers, and juries composed of laypersons will face. We draw an important distinction between two kinds of AI evidence. The first is “acknowledged AI-generated evidence,” about which there is no dispute that the evidence was created by, or is the product of, an AI system. The second is “unacknowledged AI-generated evidence,” or potential deepfake evidence, where one party claims the evidence is an authentic representation of what actually happened, and the opposing party claims the evidence is a GenAI-fabricated deepfake. We discuss the application of existing rules of evidence that govern admissibility of evidence and how they might be flexibly applied—or slightly modified—to better address what is at issue with known AI-generated evidence. With respect to unacknowledged AI-generated evidence, we explain the challenges associated with using the existing rules of evidence to resolve the question of whether such evidence should be admitted, and the possible prejudice if it is allowed to be seen by the jury. We describe two proposed new rules of evidence that we have urged the Advisory Committee on Evidence Rules to consider regarding the evidentiary challenges presented by acknowledged and unacknowledged AI-generated evidence, and the actions proposed by the Committee to date. We finish with practical steps that judges and lawyers can take to be better prepared to face the challenges presented by this unique form of evidence.
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.