
[EDRM Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work. All images in the article are by Ralph Losey using AI. This article is published here with permission.]
The law has long adapted to include new participants. First, ships could be sued as if they were people. Later, corporations became legal entities, and more recently even rivers have been declared “persons” with rights. Now we move from ships to silicon: artificial intelligence. A new era of generative AI models can produce words, images, and decisions that resemble the marks of inner awareness. Whether that resemblance is illusion or something more, judges and lawyers will soon confront it not only in legal philosophy and AI seminars, but in motions practice and evidentiary hearings.

The right question is not whether AI is truly conscious, but whether its testimony can be tested with the same evidentiary rigor we apply to human witnesses and corporate entities. Can its words be authenticated, cross-examined, and fairly weighed in the balance of justice?
Courts today are only beginning to brush against AI — sanctioning lawyers for fake citations, issuing standing orders on disclosure of AI use, and bracing for the wave of deepfake video and image evidence. The next frontier will be AI outputs that resemble testimony, raising questions of authentication and admissibility. If those outputs enter the record, courts may need to consider supporting materials such as system logs or diagnostics — not yet common in litigation but already discussed in the scholarship as possible foundations for reliability.
This article follows that path. It begins with the history of legal personhood, then turns to the rules of evidence, and finally examines the personhood and consciousness debate. Along the way, it offers a few practical tools that judges and legal technologists can start using to handle AI in the courtroom. The aim is modest but urgent: to help the law take its first steady steps from ships to silicon, from abstract algorithms to evidence that demands to be weighed.

When the AI is Allowed to Speak
Picture a deposition in complex commercial litigation. Counsel asks the sworn AI witness the most routine of questions: “Can you identify this document which has been marked for the record as Exhibit A?” Without hesitation, the system responds: “Yes, I can. It is part of my cognitive loop.” On its face the response sounds absurd. Machines are not conscious beings, are they? Yet the behaviors behind such a technical statement — goal-directed reasoning, persistent memory, and self-referential diagnostics — are already present in advanced AI systems.
The central risk is not that machines suddenly wake up with human-like awareness. It is that courts, lawyers, judges, arbitrators, and juries will be confronted with outputs that look like intentional human statements. When a human witness identifies an exhibit, counsel ask how and probe the witness’s memory, perception, and possible bias. When an AI says “this document is part of my cognitive loop,” a new type of cross-examination is needed: What loop are you referring to? How is that a part of you? Who are you? Are you not just a tool of a human? Shouldn’t the human you work with be testifying instead of you?
Those questions go to the heart of the credibility problem. Cross-examination works because a human witness can be pressed on perception, memory, or bias. When the witness is an AI, there is no memory in the human sense, no sensory perception of the world, and no personal motive to expose. The answers to “What loop? How is it part of you? Who are you?” may have to come not from the witness itself, but from logs, audit trails, and technical experts who can lay a proper foundation for AI testimony. Counsel on both sides will need to be creative, asking new kinds of questions. How does one prepare an AI witness for cross-examination like this? What objections should be raised? How should a judge respond? At first, there will inevitably be trial and error, appeals, and rehearings. The old boxes just don’t fit anymore.

Legal Personhood: From Ships to Rivers to Citizens United
Law has long been pragmatic in its treatment of nonhuman actors as legal persons. See e.g. Wikipedia:
In law, a legal person is any person or legal entity that can do the things a human person is usually able to do in law – such as enter into contracts, sue and be sued, own property, and so on.
Under Roman law, collegia (guilds or associations) functioned as legal entities capable of owning property, contracting, and suing or being sued. During the medieval period, the common law of admiralty began treating ships as juridical res, subject to in rem suits, even though no one believed the ships were alive. See The Siren, 74 U.S. 152 (1868). The Siren concerned a famous iron-hulled side-wheel steamship named Siren, which the US Navy finally captured in Charleston Harbor in 1865. It was a private trading ship that had run past the Union blockade 33 times, more than any other in history. During the capture, the Siren’s crew abandoned ship and Union sailors claimed it as a prize of war. While under the control of a Union prize crew, the Siren later ran into and sank another vessel in New York harbor, which led to the ship itself being sued in rem for the damage caused by its tort.

In the United States, the expansion of corporate personhood began in the late 19th century with Santa Clara County v. Southern Pacific Railroad, 118 U.S. 394 (1886), where, via a mere reporter’s headnote, corporations were cast as “persons” under the Fourteenth Amendment.
More recently, juridical recognition has extended beyond human institutions to natural entities: New Zealand’s Whanganui River was declared a legal person under the Te Awa Tupua Act 2017; Spain’s Law 19/2022 conferred legal status upon the Mar Menor lagoon, an approach upheld by Spain’s Constitutional Court in 2024; and Ecuador’s 2008 constitutional reforms enshrined rights of nature, giving ecosystems standing in constitutional litigation.
In American constitutional doctrine, the controversial Citizens United v. FEC decision (558 U.S. 310 (2010)) further illustrates the elevated legal status of corporations. It held that corporate expenditures in elections are protected speech under the First Amendment. See e.g., The Brennan Center’s Citizens United Explained (provides a detailed critical account of both the decision’s legal reasoning and its broader democratic consequences). Also see: Asaf Raz, Taking Personhood Seriously (Columbia Business Law Review, Vol. 2023 No. 2, March 6, 2024).
These examples show that legal personhood has never been limited to human beings. No one thought ships could think, or rivers could speak, or corporations had beating hearts. Yet all have been treated as persons when it served broader purposes of justice, commerce, or environmental protection. Legal personhood is, at bottom, a policy tool — a fiction the law deploys when the benefits outweigh the costs. If the law has extended personhood in these ways, it is not too much of a stretch to ask whether AI could be next. That debate is already underway.

The Debate Over AI Personhood
Legal scholars, ethicists, and policymakers are deeply divided on this issue, and the arguments on both sides are instructive for anyone imagining what might happen when an AI “takes the witness chair.”
Arguments for AI personhood. Proponents point to precedent. Legal personhood has never been limited to natural persons. Corporations, associations, municipalities, and even natural entities like rivers have been granted legal standing. If a corporation — a legal fiction with no body or mind — can be a person, then it is not unthinkable that a sufficiently advanced AI might one day be treated similarly. Advocates argue that doing so could help fill accountability gaps when AI systems act autonomously in ways not directly traceable to programmers, operators, or owners. Others look ahead to the possibility of artificial general intelligence (AGI) with traits akin to self-awareness. If AI were to achieve something approaching subjective awareness or moral reasoning, then denying rights could be seen as ethically exploitative.
The judicial perspective. An especially thoughtful treatment comes from former SDNY District Judge Katherine B. Forrest in The Ethics and Challenges of Legal Personhood for AI, Yale Law Journal Forum (April 2024). Forrest examines AI’s increasing cognitive abilities and the challenges they will pose for courts, raising concerns about model drift, emergent capabilities, and ultra vires defenses. Her analysis grounds the personhood debate not in philosophy but in the daily realities of judging.
She predicts that while early AI cases will involve “relatively straightforward” questions of tort liability and intellectual property, the deeper ethical dilemmas will not be far behind. As she puts it:
Courts will be dealing with a number of complicated AI questions within the next several years. The first ones will, I predict, be interesting but relatively straightforward: tort issues dealing with accountability and intellectual property issues relating to who made the tool, with what, and whether they have obligations to compensate others for the generated value. If an AI tool associated with a company commits a crime (for instance, engaging in unlawful market manipulation), we have dealt with that before by holding a corporation responsible. But if the AI tool has strayed far from its origins and taken steps that no one wanted, predicted, or condoned, can the same accountability rules apply? These are hard questions with which we will have to grapple.
Forrest then pushes further, highlighting the inevitable collision between doctrine and ethics:
The ethical questions will be by far the hardest for judges. Unlike legislators to whom abstract issues will be posed, judges will be faced with factual records in which actual harm is alleged to be occurring at that moment, or imminently. There will be a day when a judge is asked to declare that some form of AI has rights. The petitioners will argue that the AI exhibits awareness and sentience at or beyond the level of many or all humans, that the AI can experience harm and have an awareness of cruelty. Respondents will argue that personhood is reserved for persons, and AI is not a person. Petitioners will point to corporations as paper fictions that today have more rights than any AI, and point out the changing, mutable notion of personhood. Respondents will point to efficiencies and economics as the basis for corporate laws that enable fictive personhood and point to similarities in humankind and a line of evolution in thought that while at times entirely in the wrong, are at least applied to humans. Petitioners will then point to animals that receive certain basic rights to be free from types of cruelty. The judge will have to decide.
Forrest’s conclusion underscores the urgency of the debate: these issues will not remain theoretical for long. Courts will face them in live cases, on real records, with harms alleged in the here and now.
Her article also offers a striking observation about Dobbs v. Jackson Women’s Health Org., 597 U.S. 215, 276 (2022), noting that it left to the states the decision of when personhood attaches. By doing so, it opened the door to highly variable juridical interpretations of personhood. As Forrest notes, the decision eliminated any requirement of human developmental, cognitive, or situational awareness as a prerequisite for bestowing significant rights, while at the same time diminishing the self-determination — and therefore liberty — of women. That framework, she suggests, could ironically be repurposed as a basis for extending rights to a human creation: AI. If the law does not demand awareness as a condition of personhood, why exclude machines?

Arguments against AI personhood. Forrest discusses both sides of the AI personhood debate. Critics of AI personhood argue that it lacks the qualities that justify recognition as a legal person. Unlike humans, AI systems have no consciousness, no perception, and no subjective experiences. They process data but do not feel. Treating a machine as a legal person, they warn, could blur the line between humans and tools in ways that erode human dignity. Others worry about liability arbitrage, with corporations offloading blame onto AI “shells” that have no assets and no capacity to make victims whole. That divide is already echoed in the academic literature. See Abeba Birhane, et al., “Debunking Robot Rights Metaphysically, Ethically, and Legally” (2024).
Alternative approaches. Because both extremes raise serious problems, lawmakers and scholars have considered middle-ground options. The European Parliament once floated the idea of “electronic personhood” for robots but ultimately rejected it. The EU AI Act, adopted in 2024, takes a different path: treating certain AI systems as regulated entities subject to logging, oversight, and human accountability, while stopping short of personhood. Other proposals focus on enhancing corporate liability for harms caused by AI or creating a new, limited legal category that acknowledges AI’s unique features without elevating it to full personhood. As Asaf Raz has observed in Taking Personhood Seriously (Columbia Business Law Review, March 2024), legal personhood has always been instrumental, “a policy tool rather than a metaphysical judgment,” and the question is how best to deploy that tool in light of modern challenges.
The Citizens United shadow. In the United States, debates over AI personhood unfold in the long shadow of Citizens United v. FEC, 558 U.S. 310 (2010). By extending First Amendment protections to corporate political spending, the Supreme Court illustrated how powerful the fiction of corporate personhood can become once entrenched. The Brennan Center’s “Citizens United Explained” (2019) offers a detailed critique of that ruling and its consequences for democracy. For many, it stands as a cautionary tale: once nonhuman entities gain even limited rights, those rights may expand in ways courts never intended.
Where courts stand today. For now, these debates remain in the academic and policy realm. No judge has yet been asked to declare an AI system a legal person. What courts do face, however, are more immediate evidentiary challenges: AI-generated outputs, filings drafted with the help of large language models, and the specter of deepfakes masquerading as authentic evidence. Whether or not AI is ever granted personhood, judges must already decide how to handle these new kinds of artifacts under the familiar rules of evidence.

From Philosophy to Procedure: Evidence First
We have traced the history of legal personhood and surveyed the personhood debate. But speculation only goes so far. Courts today are beginning to face a more immediate question: when AI outputs appear in discovery or trial, can they be admitted as evidence? From the fake citations in Mata v. Avianca to standing orders warning lawyers not to submit unverified AI text, judges are already being forced to draw early lines. To keep cases on track, they need tools that are practical, conservative, and rooted in existing evidentiary doctrine.
Here are three such tools for judges, litigators, and legal technologists to consider and refine:
- ALAP: AI Log Authentication Protocol
- Replication Hearing Protocol
- Judicial Findings Template for AI Evidence
Introduction. These are small steps, not sweeping reforms. They echo the serious issues introduced by Judge Paul Grimm and Professors Maura Grossman and Gordon Cormack in Artificial Intelligence as Evidence, 19 Nw. J. Tech. & Intell. Prop. 9 (2021). That article, though written before generative AI emerged, remains indispensable.
As Grimm, Grossman, and Cormack put it:
The problem that the AI was developed to resolve — and the output it produces — must ‘fit’ with what is at issue in the litigation. How was the AI developed, and by whom? Was the validity and reliability of the AI sufficiently tested? Is the manner in which the AI operates ‘explainable’ so that it can be understood by counsel, the court, and the jury? What is the risk of harm if AI evidence of uncertain trustworthiness is admitted? (Id. at 97–105).
They stress two core concepts: validity (whether the system does what it was designed to do) and reliability (whether it produces consistent results in similar circumstances). Those concepts have guided courts for years in assessing scientific and expert evidence. They should also guide us here.
For more recent thinking by Grimm and Grossman, see, e.g.: The GPTJUDGE: Justice in a Generative AI World, Duke Law & Technology Review (Oct. 2023); and Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence (May 2025), which addresses deepfakes and recommends using expert testimony to ground admissibility rulings. See also Losey, R., WARNING: The Evidence Committee Will Not Change the Rules to Help Protect Against Deep Fake Video Evidence (e-Discovery Team, Dec. 2024).

Tool 1: ALAP — AI Log Authentication Protocol
Purpose & Rationale. ALAP (AI Log Authentication Protocol) is designed to meet the authentication requirement of Federal Rule of Evidence 901(b)(9), which permits authentication of evidence produced by “a process or system” if the proponent shows that the process produces “an accurate result.”
Checklist. Under ALAP, the producing party should provide:
- Model and version identification;
- Configuration record (data sources, parameters, safety settings);
- Prompt and tool call logs;
- Guardrail or filter events;
- Execution environment (hardware/software state);
- Custodian declaration tying the output to this configuration.
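For legal technologists tasked with assembling such a package, the following is a minimal sketch of what an ALAP record could look like as a structured, hash-verifiable object. The class name AlapRecord, the field names, and the integrity-hash approach are illustrative assumptions offered for discussion, not a standard, a court requirement, or any vendor’s API.
```python
# Illustrative sketch of an ALAP (AI Log Authentication Protocol) record.
# All class and field names are hypothetical; they simply mirror the checklist above.
import hashlib
import json
from dataclasses import dataclass, asdict
from typing import Any, Dict, List


@dataclass
class AlapRecord:
    model_name: str                        # model identification
    model_version: str                     # exact version or build in use at generation time
    configuration: Dict[str, Any]          # data sources, parameters, safety settings
    prompt_log: List[Dict[str, str]]       # entries of {"timestamp", "role", "content"}
    tool_calls: List[Dict[str, Any]]       # external tool or retrieval calls, if any
    guardrail_events: List[str]            # filter or safety interventions triggered
    execution_environment: Dict[str, str]  # hardware/software state (OS, runtime, versions)
    custodian: str                         # person attesting the output came from this configuration
    output_text: str                       # the AI output offered as evidence (e.g., Exhibit A)

    def integrity_hash(self) -> str:
        """SHA-256 over a canonical JSON serialization, so later alteration is detectable."""
        canonical = json.dumps(asdict(self), sort_keys=True, ensure_ascii=False)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


if __name__ == "__main__":
    record = AlapRecord(
        model_name="example-llm",
        model_version="2025-09-28-build",
        configuration={"temperature": 0.2, "retrieval": "disabled"},
        prompt_log=[{"timestamp": "2025-09-28T10:15:00Z", "role": "user",
                     "content": "Provide a one sentence description of artificial intelligence."}],
        tool_calls=[],
        guardrail_events=[],
        execution_environment={"os": "Ubuntu 22.04", "runtime": "Python 3.11"},
        custodian="Records Custodian, Example Corp.",
        output_text="Artificial intelligence is ...",
    )
    # The custodian declaration could recite this hash to tie Exhibit A to this exact record.
    print(record.integrity_hash())
```
A custodian declaration reciting the integrity hash, together with the underlying record, could track the kind of certification of electronic records contemplated by Rules 902(13)–(14).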
Support & Authority.
- Fed. R. Evid. 901(b)(9); Fed. R. Evid. 902(13)–(14) (self-authenticating electronic records).
- United States v. Lizarraga-Tirado, 789 F.3d 1107, 1110–11 (9th Cir. 2015) (Google Earth annotation generated by software, not hearsay, if properly authenticated);
- People v. Goldsmith, 326 P.3d 239, 246 (Cal. 2014) (red-light camera images authenticated as automatically produced records).
Tool 2: Replication Hearing Protocol
Purpose & Rationale. When a human testifies, cross-examination probes perception, memory, and bias. AI has none of those faculties, but it does have vulnerabilities: instability, sensitivity to prompts, and embedded bias in training data. A replication hearing provides a substitute.
The goal is not to achieve exact duplication of output — which may be impossible with evolving, probabilistic models — but to test whether the system is substantially similar in its answers when asked the same or variant questions. In this sense, replication hearings align with the reliability gatekeeping function under Daubert and Kumho Tire. See Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 589 (1993); Kumho Tire Co. v. Carmichael, 526 U.S. 137, 152 (1999). They also align with the Evidence Rule governing expert testimony, where “perfection is not required.” Fed. R. Evid. 702, Advisory Committee Note to 2023 Amendment (last two sentences of the 2023 Comment).
For example, I prompted ChatGPT-4o, as a legacy model, on September 28, 2025, as follows: “Provide a one sentence description of artificial intelligence.” It responded by generating the following text: “Artificial intelligence is the field of computer science dedicated to creating systems capable of performing tasks that typically require human intelligence, such as reasoning, learning, perception, and decision-making.”
I provided the same prompt one minute later to the current model, ChatGPT-5, and received this response: “Artificial intelligence is the branch of computer science that designs systems capable of performing tasks that typically require human intelligence, such as reasoning, learning, problem-solving, and language understanding.”
GPT-5 is supposed to be smarter, and its answer reflects that, a little, but it is, to me at least, substantially similar to the response of the prior model, GPT-4o. One calls AI a “field” of computer science, the other a “branch.” One lists “reasoning, learning, perception, and decision-making”; the other “reasoning, learning, problem-solving, and language understanding.”

Protocol. At its core, a replication hearing should:
- Lock the environment as closely as possible. The producing party must document the version of the system, its configuration, and parameters in place at the time of the original output. If that version is no longer available, the proponent must show why and explain what changes have occurred since.
- Re-run the prompts in a controlled setting. The same queries should be submitted, alongside small variations, to test whether answers remain consistent in meaning. Repeat runs across model versions, as in the GPT-4o/GPT-5 example above, can also help address the problem of changing models.
- Log everything. Inputs, outputs, timestamps, and environment details should be captured to permit later review. Be prepared to produce these logs, so do not include private attorney comments in them, such as “Oh no, this will kill our case if we disclose it.”
- Compare for stability of meaning. The measure is not identical phrasing, but whether the AI provides answers that are effectively the same — the substance is consistent even if the wording differs.
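To make the logging and comparison steps concrete, here is a minimal sketch of how a legal technologist might capture repeat runs and score them for surface stability. The query_model function is a hypothetical stand-in for whatever system is under examination, and the standard-library difflib ratio with a 0.6 threshold is an illustrative assumption; a real protocol would use measures and thresholds agreed upon by the parties or set by the court, and substantive similarity would still require human and expert judgment.
```python
# Illustrative replication-run logger and stability check.
# query_model() is a hypothetical placeholder for the AI system under test;
# the similarity metric and threshold are assumptions for discussion only.
import datetime
import difflib
from typing import Callable, Dict, List


def replication_runs(query_model: Callable[[str], str],
                     prompts: List[str],
                     runs_per_prompt: int = 3) -> List[Dict[str, str]]:
    """Re-run each prompt several times and log inputs, outputs, and timestamps."""
    log = []
    for prompt in prompts:
        for i in range(runs_per_prompt):
            output = query_model(prompt)
            log.append({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "run": str(i + 1),
                "prompt": prompt,
                "output": output,
            })
    return log


def stability_report(log: List[Dict[str, str]], threshold: float = 0.6) -> None:
    """Compare each output against the first run for the same prompt.

    difflib measures surface similarity only; whether answers are "effectively
    the same" in substance still requires human or expert review.
    """
    first_outputs: Dict[str, str] = {}
    for entry in log:
        prompt, output = entry["prompt"], entry["output"]
        if prompt not in first_outputs:
            first_outputs[prompt] = output
            continue
        ratio = difflib.SequenceMatcher(None, first_outputs[prompt], output).ratio()
        flag = "consistent" if ratio >= threshold else "REVIEW: divergent"
        print(f'run {entry["run"]} | similarity {ratio:.2f} | {flag}')


if __name__ == "__main__":
    # Toy stand-in model so the sketch runs without any external service.
    canned = ["AI is the field of computer science that builds systems able to reason and learn.",
              "AI is the branch of computer science that designs systems able to reason and learn."]
    answers = iter(canned * 2)

    def fake_model(prompt: str) -> str:
        return next(answers)

    log = replication_runs(fake_model,
                           ["Provide a one sentence description of artificial intelligence."],
                           runs_per_prompt=2)
    stability_report(log)
```
Applied to the GPT-4o and GPT-5 answers quoted above, a surface measure like this would likely score them as highly similar, but the court and the experts, not the script, decide whether the substance is effectively the same.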
Limitations & Judicial Discretion. Replication hearings are not a silver bullet. Models change, versions drift, and nondeterminism ensures some variation. They should be treated as a stress test, not an absolute guarantee. Consistent results support reliability; unraveling under modest variation reveals weakness. Judges should demand enough stability for adversarial testing and fair weight — but not perfection.
Support & Authority.
- Fed. R. Evid. 702; Advisory Committee Note to 2023 Amendment:
- “Nothing in the amendment imposes any new, specific procedures. Rather, the amendment is simply intended to clarify that Rule 104(a)’s requirement applies to expert opinions under Rule 702. Similarly, nothing in the amendment requires the court to nitpick an expert’s opinion in order to reach a perfect expression of what the basis and methodology can support. The Rule 104(a) standard does not require perfection. On the other hand, it does not permit the expert to make claims that are unsupported by the expert’s basis and methodology.”
- Fed. R. Evid. 104(a) (Preliminary Questions): “(a) In General. The court must decide any preliminary question about whether a witness is qualified, a privilege exists, or evidence is admissible. In so deciding, the court is not bound by evidence rules, except those on privilege.”
- Grimm & Grossman, Artificial Intelligence as Evidence, 19 Nw. J. Tech. & Intell. Prop. 1, 46, (2021).
- Grimm & Grossman, Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence (May 2025) at pgs 152 and 153:
- “Finally, the court should set a deadline for an evidentiary hearing and/or argument on the admissibility of acknowledged AI-generated or potentially deepfake evidence sufficiently far in advance of trial to be able to carefully evaluate the evidence and challenges and to make a pretrial ruling. These issues are simply too complex and time consuming to attempt to address on the eve of or during trial.”
- “Expert disclosures should be detailed and not conclusory and must address the evidentiary issues that judges have to consider when ruling on evidentiary challenges, such as the Rule 702 reliability factors and the Daubert factors that we have previously discussed.”

Tool 3: Judicial Findings Template for AI Evidence
Purpose & Rationale. Judges must leave a clear record showing how they handled AI evidence. Federal Rule of Civil Procedure 52(a) already requires findings of fact in bench trials. Extending that practice to AI evidence rulings will give appellate courts a meaningful basis for review.
Template Elements. A model order admitting or excluding AI evidence should, at minimum, address:
- Authentication Measures. Whether the proponent satisfied ALAP requirements — identification of the model/version, logs, custodian declaration, and reproducibility artifacts.
- Replication and Stability Findings. Whether the AI produced the same or substantially similar outputs under controlled re-runs; if not, why not.
- Bias and Sensitivity Testing. Whether adversarial prompts or variant inputs were tested, if reasonably possible and warranted under proportionality standards (Fed. R. Civ. P. 26(b)(1)).
- Protective Measures Applied. Any confidentiality safeguards imposed, including redactions, attorneys’-eyes-only restrictions, or non-waiver stipulations.
- Reliability Determination. The court’s conclusion: admit, admit with limits, or exclude — and the reasoning for that conclusion.
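For chambers staff or counsel who want to operationalize this checklist, the sketch below shows one way the five elements might be captured as a structured, fillable record that renders into a draft findings section. The class and field names are hypothetical conveniences for illustration, not a model order or an official form.
```python
# Illustrative structured version of the Judicial Findings Template for AI Evidence.
# Field names and the rendering format are hypothetical, not any court's form.
from dataclasses import dataclass


@dataclass
class AIEvidenceFindings:
    exhibit: str
    authentication_measures: str     # ALAP items satisfied: model/version, logs, custodian, artifacts
    replication_findings: str        # same or substantially similar outputs on re-run? if not, why not
    bias_sensitivity_testing: str    # adversarial/variant prompts tested, if proportional (FRCP 26(b)(1))
    protective_measures: str         # redactions, AEO restrictions, non-waiver stipulations
    reliability_determination: str   # admit, admit with limits, or exclude, plus the reasoning

    def render(self) -> str:
        """Render the findings as plain text suitable for pasting into a draft order."""
        return "\n".join([
            f"Findings as to {self.exhibit}:",
            f"1. Authentication Measures: {self.authentication_measures}",
            f"2. Replication and Stability: {self.replication_findings}",
            f"3. Bias and Sensitivity Testing: {self.bias_sensitivity_testing}",
            f"4. Protective Measures Applied: {self.protective_measures}",
            f"5. Reliability Determination: {self.reliability_determination}",
        ])


if __name__ == "__main__":
    findings = AIEvidenceFindings(
        exhibit="Exhibit A (AI-generated report)",
        authentication_measures="ALAP record produced; custodian declaration ties output to model build 2025-09-28.",
        replication_findings="Substantially similar outputs in 9 of 10 controlled re-runs.",
        bias_sensitivity_testing="Ten variant prompts tested; no material divergence.",
        protective_measures="Configuration details designated attorneys' eyes only.",
        reliability_determination="Admitted with a limiting instruction; reasoning stated on the record.",
    )
    print(findings.render())
```
Keeping the findings in a structured form of this kind could also make it easier to attach the underlying ALAP record and replication logs as exhibits to the order.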
Support & Authority.
- Fed. R. Civ. P. 52(a)(1); General Elec. Co. v. Joiner, 522 U.S. 136, 146 (1997) (emphasizing the abuse-of-discretion standard for evidentiary rulings but requiring a record of reasoning).
- Grimm & Grossman, Judicial Approaches at pg. 154 suggest information helpful for a court to rule includes evidence on validity, reliability, error rates, bias, and in the special cases of AI fraud allegations, “the most likely source of evidence, what the content or metadata suggests about provenance or manipulation, and the probative value of the evidence versus the prejudice that could occur were the evidence to be admitted. unacknowledged AI-generated evidence, information about the most likely source of evidence, what the content or metadata suggests about provenance or manipulation, and the probative value of the evidence versus the prejudice that could occur were the evidence to be admitted.“

Speculation on Future AI Evidence Tools
So far, we have stayed close to the ground, offering simple tools that courts could adopt tomorrow morning without rewriting the Rules. But technology does not stay still. In two to four years — perhaps sooner — we will see generative AI systems like GPT-6 or GPT-7 deployed in ways that make today’s questions about “outputs” seem quaint. These systems may not only generate records but actually appear in court to give live testimony, answering questions in real time. They may prove to be very good at cross-exam — and finally stop apologizing. What happens to our starter tools in that future world?
Let us consider each in turn.
Tool 1. ALAP in the Age of GPT-7: From Logs to Consciousness Diaries
Today’s ALAP demands logs, prompts, and configurations. In the GPT-6/7 era, those logs may look more like consciousness diaries: running records of what the system “attended to,” what internal states it represented, and why it chose one answer over another. Already, researchers are experimenting with far greater clarity of process, with “chain of thought logging” and “explainable AI” systems that preserve a trace of the model’s reasoning. Dario Amodei Warns of the Danger of Black Box AI that No One Understands (e-Discovery Team, May 19, 2025) (discusses Amodei’s AI MRI proposal, voluntary transparency rules and export‑control “breathing room”). Future ALAP may require not just the external inputs and outputs, but also the internal rationale artifacts: the path the AI followed inside its trillion-parameter brain.

Imagine a courtroom where the proponent of Exhibit A does not simply submit logs, but a time-stamped trace of the AI’s deliberations, a transcript of a digital mind. It will likely be very impressive in its complexity. A trillion-parameter transformer transcript is beyond what a single human could fully comprehend, much less create. Yet it will be produced, disclosed, and attacked by opposing counsel and their own AI. They will look for holes and errors, as they should. If the proponent of Exhibit A has done their job correctly and tested the AI generation fully before production, the opposition will find no errors of significance. Exhibit A will then be authenticated and admitted as accurate and reliable.
The legal arguments will then focus on the real disputes: the significance of Exhibit A, and how the AI-generated evidence applies to the facts and issues of the case. The weight of that evidence, and the ultimate outcome, will remain — as they should — in human hands: judge, arbitrator, and jury.
Tool 2. Replication Hearings: From Sandbox Runs to AI Depositions
Replication today means re-running queries in a sandbox to test stability. In the GPT-6/7 era, it may look more like a deposition of the AI itself. Counsel could pose variations of the same question live, in a controlled setting, to see whether the system answers consistently or unravels. Dozens of rephrasings, edge cases, and adversarial prompts could probe whether the AI’s testimony holds up under pressure.
Think of it as Daubert meets the Turing Test: is the AI stable enough under questioning to count as reliable testimony, or does it contradict itself like a nervous witness? Judges may even order recorded mock trial runs of AI testimony as the new form of replication hearing — “stress tests” that simulate cross-examination before the real thing.
Tool 3. Judicial Findings Templates: From Written Orders to Dynamic Bench Reports
Today, findings templates are static orders: a few pages where a judge checks boxes on authentication and admissibility. In the GPT-6/7 era, they may evolve into dynamic bench reports. A judge would not just note that an AI output was authenticated and replicated, but attach the full supporting record: the AI’s self-examination logs, replication deposition transcripts, error analyses, and even explainability metrics such as probability distributions or self-reported uncertainty. Independent audits of system reliability might become standard exhibits.
Picture an appellate court reviewing not just a written order, but a bundle: the ALAP diary, the replication deposition, and the judge’s annotated findings, all linked together. It would be the twenty-first-century equivalent of a paper record on appeal — except the “witness” was silicon, not flesh.
Evidence Tools of Tomorrow
In short, the tools we begin with today will not remain static. ALAP could evolve into machine “reasoning diaries.” Replication hearings could resemble live AI depositions. Judicial findings templates may grow into multimedia records of AI testimony, complete with cross-exam transcripts, explainability metrics, and confidence scores.
That future is not science fiction — it is the natural extension of what courts already require: transparency, stability, and a record clear enough for appellate review. Just as ships, corporations, and rivers once forced the law to expand its categories, AI will compel judges and lawyers to reshape the evidentiary toolkit. The old boxes do not fit anymore, but the work of testing, admitting, and weighing evidence remains the same.

Conclusion: The Call of the Frontier
We began with ships, corporations, and rivers. Each, in its time, seemed an unthinkable candidate for legal personhood, yet each was granted recognition when the law needed a tool to achieve justice. Today, AI systems stand at the edge of that same conversation. The question is not whether they are conscious, but whether their words, records, and actions can be trusted enough to enter our courtrooms.
We promised practical tools, and we have delivered: ALAP for authentication, Replication Hearings for reliability, and Judicial Findings Templates for clarity. They are modest steps, but they mark the beginning of a path forward. What began as philosophy has become procedure. What began as speculation has become concrete tools judges and lawyers can use.

Looking ahead, those tools will evolve. Logs may become digital diaries, replication may resemble live AI depositions, and judicial findings may grow into dynamic bench reports. Opposing counsel will test them with rigor — often with the aid of their own AI. Judges will demand completeness and clarity before evidence is admitted. That is the adversarial system doing its work.
The choice is ours. We can resist and cling to the old boxes, or we can step forward and build new ones. The Siren, 74 U.S. 152 (1868), an early case treating a ship as a legal entity, now sets sail again, this time into the waters of artificial intelligence. The horizon is uncharted, but the wind is at our back and the AI sextant points the way.

Ralph Losey Copyright 2025 – All Rights Reserved.
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.