From Competence to Judgment: How AI Compresses Litigation Work and Why That Makes Judgment More Important

Image: Hon. Ralph Artigliere (ret.) and William F. Hamilton.

[EDRM Editor’s Note: EDRM is proud to publish the advocacy and analysis of the Hon. Ralph Artigliere (ret.) and William F. Hamilton. The opinions and positions expressed are their own. © 2026 Ralph Artigliere and William F. Hamilton.]


Artificial intelligence has entered legal practice with unusual speed and reach. Within a brief period, tools that can organize information, generate analysis, and structure legal arguments have become embedded across litigation workflows from early case assessment to motion practice and trial preparation. The profession’s initial response has focused on competence: whether lawyers and judges understand these tools well enough to use them safely. That focus is necessary. But it is no longer sufficient. Generative AI is different.

The profession has adapted to prior waves of technology, often unevenly and over extended periods. This new AI transition is reshaping where legal work concentrates and what remains for lawyers to do. Artificial intelligence is advancing rapidly, reaching more aspects of practice, and exerting immediate pressure to engage. Its adoption is no longer optional. For those who attain competence and adopt it effectively, AI offers substantial gains by allowing lawyers to move more quickly from organizing case information to focusing their efforts where judgment matters most.

The most consequential question is what remains when AI competence is assumed. The answer is judgment. Artificial intelligence is not simply another tool layered onto existing practice. It is restructuring how legal work is performed. Tasks that once depended on time, labor, and staffing are increasingly performed through computational capability. That shift changes not only efficiency, but how cases are understood, how strategies are formed, and how litigation advantage is created.

The defining risk is not that lawyers will fail to use these tools. It is that they will mistake their outputs for judgment. At the same time, the defining opportunity is that lawyers who use these tools effectively can move more quickly from data to judgment, focus more directly on strategy, and apply their experience to the aspects of advocacy that matter most.

The question is no longer whether the profession will adopt artificial intelligence. It is whether lawyers will use it in a way that improves not only efficiency, but the quality of legal judgment itself.

The defining risk is not that lawyers will fail to use these tools. It is that they will mistake their outputs for judgment.

Hon. Ralph Artigliere (ret.) and William F. Hamilton.

The Structural Shift: From Scale to Capability

For decades, litigation advantage depended heavily on scale. Large firms were able to deploy teams of lawyers to review documents, build chronologies, synthesize facts, and develop arguments through labor-intensive processes. When a case involved tens of thousands or millions of documents, the ability to assign large numbers of lawyers to the task was itself a strategic advantage. Time and manpower enabled these firms to develop a more comprehensive understanding of the record, test more theories, and respond more effectively to opposing arguments.

Much of what the profession came to regard as expertise in early-stage litigation was intertwined with that capacity. The ability to identify key documents, assemble timelines, recognize patterns across large datasets, and surface relevant issues was understood as part of the lawyer’s skill set.

Artificial intelligence changes that equation fundamentally. Tasks that once required dozens of lawyers, such as first-pass review, issue identification, chronology development, transcript synthesis, and document clustering, can now be performed computationally. A system can analyze large volumes of documents, group them by topic, generate timelines across custodians, and identify relationships within the data in a fraction of the time previously required. The early litigation tasks once produced through organization, classification, and synthesis performed at scale are precisely where artificial intelligence performs most effectively. Used properly, it performs those functions efficiently and consistently. What was once labor becomes algorithmic capability. AI is evolving from a search tool into an analytical system that supports human judgment.

That does not diminish the role of legal expertise. It clarifies it. The lawyer’s role was never simply to process information. It was to decide what mattered, what did not, and how the facts and law should be brought together to advance a client’s position. When substantial portions of the processing function are automated, the remaining work becomes accessible and visible sooner, and that work is uniquely human: critical analysis and judgment. AI performs the work of organization and synthesis. The lawyer determines what matters, what is reliable, and how the case should be presented.

The consequences of this shift for litigation practice and access to justice are significant. A smaller, disciplined team equipped with effective AI workflows can now analyze large datasets, construct case narratives, and identify strategic pathways in ways that previously required far greater institutional resources. For example, a two-lawyer team can now ingest a large production, generate an initial timeline of key events, cluster communications by subject matter, and identify potential inconsistencies across witnesses within days rather than weeks. That does not mean the analysis is complete or correct. It means that the baseline organization of the case can be reached far more quickly, and by far fewer people, than large-firm staffing once allowed.

The implications for access to justice emerge as the advantages of scale are reduced: smaller firms and individual practitioners may be able to analyze large datasets, develop case narratives, and engage in complex litigation in ways that were previously constrained by resources.1 That increased capability may expand access to legal representation in certain contexts and allow more litigants to pursue or defend claims effectively. At the same time, access to tools alone does not resolve disparities in outcomes. The effective use of these capabilities still depends on the exercise of judgment, which remains unevenly distributed across the profession. The advantage of scale has not disappeared. Large firms still bring experience, specialization, and resources that matter in complex litigation. But the exclusive advantage of manpower has been materially reduced, and the timeline on which these capabilities become available has accelerated.

This is not simply a gain in efficiency. When computational capability replaces labor as the primary driver of early-stage case development, the source of competitive advantage changes. The question is no longer who can deploy the most people. It is who can most effectively interpret, test, and refine the output generated by these systems. That shift changes not only who can perform the work, but when and how understanding is formed in a case.

Compression, Not Cognition

The most important contribution of artificial intelligence to litigation is not cognition. It is compression. An AI system can condense a 300-page record into a 3-page summary, extract apparent issues, and reorganize facts into a narrative shape, and it can do so quickly. But it cannot determine what matters, weigh credibility, or evaluate whether the compressed output is right. AI does not reason or exercise judgment; it generates structured outputs based on patterns in data, organizing information in ways that often resemble analysis. Human judgment is required to determine whether those outputs are accurate, complete, and meaningful for advocacy. Compression accelerates the appearance of understanding without guaranteeing its validity.

To understand why that matters, it helps to consider how litigation work has traditionally unfolded. Building an understanding of a case has historically been a sequential process. Lawyers review documents, then summarize them, then compare accounts, then develop timelines, and only after that begin to shape arguments. Each step depends on the one before it. Insight follows effort, and effort takes time. That sequence imposed a kind of discipline. Lawyers encountered the record in stages. Patterns emerged gradually. Understanding was built incrementally, often through repetition and comparison.

That process, however, also imposed limits. A legal case rarely develops in a clean, linear fashion. New information arrives unevenly, often introducing conflict rather than clarity. Integrating large volumes of material, particularly from multiple custodians, external sources, or prior proceedings, has historically been slow and incomplete. Lawyers worked with partial visibility, refining their understanding as the record developed.

Artificial intelligence does not simply make each step faster. It alters the order in which those steps occur. Large volumes of documents can be processed and organized at once, rather than reviewed sequentially. Timelines can be generated across custodians, themes surfaced, and relationships identified before a lawyer has worked through the underlying material in a linear way. Information that would previously have been encountered in fragments can now be presented as a structured whole.

What previously unfolded over time can now appear simultaneously. In that sense, AI compresses several dimensions of litigation work: reducing the time required to process large records, narrowing the distance between raw data and usable insight, accelerating the iteration cycle for testing arguments, and closing the gap between isolated facts and a structured narrative. The result is that lawyers can engage with the apparent shape of an entire record far earlier in a case than was previously possible. Instead of working from isolated portions of the record, teams can operate from a shared, continuously updated understanding of the case as a whole.

When organization appears quickly, it can feel complete. A timeline generated in minutes, a summary that reads cleanly, or a set of themes that align with an initial theory of the case can create a powerful sense of clarity. But that clarity may be provisional. It reflects how the system has organized the information, not necessarily how the information should be understood. When patterns and structure emerge quickly, they can be mistaken for understanding. That initial structure is not tested. Judgment requires that testing.

Because the output can appear complete, errors are not always obvious. Inaccurate quotations, incomplete summaries, or misaligned conclusions may be embedded within otherwise coherent narratives. The task is not simply to check for errors, but to determine whether what appears reliable is, in fact, sound. That is the work of judgment, exercised through verification.

Compression accelerates the appearance of understanding without guaranteeing its validity.

Hon. Ralph Artigliere (ret.) and William F. Hamilton.

This distinction between generating organized output and exercising judgment has been articulated with precision in the literature on AI itself. Brian Cantwell Smith (1950-2025), Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto, draws a foundational distinction between what he calls “reckoning” — the computational capacity to process information and generate structured outputs — and judgment, which he describes as dispassionate, deliberate thought grounded in ethical commitment and oriented toward responsible action in the world.

Smith’s thesis, developed in The Promise of Artificial Intelligence (MIT Press 2019), is that AI systems, however powerful their computational capacity, do not and cannot exercise judgment in this sense. They produce outputs, but they do not bear responsibility for them. Responsibility remains with the human who relies on those outputs. That distinction matters not only philosophically, but in the practice of law. When AI-generated organization or analysis is mistaken for judgment, the error is not merely a missed fact check. It reflects a category mistake, confusing computational reckoning with professional decision making. And it places responsibility on a tool that does not and cannot bear it.

AI does not assume responsibility for the conclusions it suggests. As Smith observes, AI “doesn’t give a damn.”2 It has no stake in the outcome, no accountability to the record, and no relationship to the real world in which decisions have real impacts on people, institutions, and outcomes.3 AI generates outputs without understanding what they mean for the people who must act on them. The responsibility for care, commitment, and existential involvement remains entirely with the lawyer or judge who relies on that output. That distinction defines the critical boundary between generated content and professional judgment.

The task for the AI-empowered lawyer is no longer simply to build case comprehension from the ground up. It is to evaluate and refine the provisional, intelligible account of the case offered by the AI. That is not a question of speed or efficiency. It is a question of judgment.

Early Case Assessment Reimagined: From Data to Decisions

Nowhere is the structural shift more visible than in early case assessment (ECA). ECA is a critical stage of litigation because it shapes how a case unfolds, how resources are allocated, and how strategy is formed and communicated. Early assessments influence staffing, discovery planning, and client expectations regarding cost, risk, and potential outcomes. When done well, ECA provides a significant strategic advantage by aligning legal analysis with practical decision-making at the outset of the case.

The constraint is no longer access to information. It is the discipline to question what appears to be known.

Hon. Ralph Artigliere (ret.) and William F. Hamilton.

Traditionally, ECA was constrained by time and labor. Lawyers worked with partial visibility into the record, forming strategies based on limited document review, initial interviews, and incremental discovery. Early impressions were provisional, and case comprehension deepened as more information became available. Insight followed effort, and effort took time.

Generative AI changes that sequence. AI-assisted workflows allow lawyers to move rapidly from raw data to structured analytical arrangements. Large document sets can be clustered by issue, timelines generated across custodians, key actors identified, and themes surfaced at the outset. Instead of sampling the record, lawyers can begin to engage with its overall shape early in the case. What once required weeks of iterative review can now emerge in hours as an organized view of the record.

This fundamentally changes the function of ECA. It shifts the process from gathering information to generating insight and from delayed strategy formation to front-loaded decision-making.4 The practical effect is that strategic judgments are made earlier, often before the record has been fully tested through adversarial development. What emerges is not judgment itself, but an enriched analytical environment in which judgment can be exercised earlier. AI assisted workflows surface relationships, themes, and temporal patterns that previously emerged only after weeks of review, allowing lawyers to evaluate competing theories, test assumptions, and refine strategy at the outset of a case.

That shift has immediate consequences. Lawyers can assess the merits of a case earlier, refine discovery with greater precision, test claims and defenses before positions harden, and make more informed decisions about settlement or motion practice. In many cases, the initial framing of the dispute is no longer driven by what can be reviewed in time, but by what can be organized and surfaced computationally.

But this capability introduces a new risk. AI-generated early narratives can create false confidence. Patterns may be incomplete, clustering may obscure nuance, and early coherence can be mistaken for accuracy. A timeline that appears comprehensive may omit critical context. A set of themes that aligns with an initial theory may reflect how the system has organized the data rather than how the facts should be understood.

ECA has always involved decision-making under uncertainty. AI does not remove that uncertainty. It changes its timing and its appearance. Understanding forms earlier, but it may rest on a foundation that has not yet been tested. For example, in a dispute involving multiple custodians, a lawyer can now generate an initial map of communications, identify periods of concentrated activity, and surface key documents tied to disputed events within an abbreviated time frame. That early visibility can shape discovery requests, refine claims or defenses, and inform settlement posture. But the apparent clarity of that early structure must still be tested against the underlying record.

The constraint is no longer access to information. It is the discipline to question what appears to be known.

Litigation Practice Reframed

The same structural shift extends across the litigation lifecycle.

In discovery, the work moves from document-by-document review to pattern recognition across the record. Instead of encountering information sequentially, lawyers can identify clusters of related documents, surface themes, and detect relationships among custodians early in the process. What once required sustained manual review can now be approached as an exercise in interpreting organized patterns over the entire data set in the case file. AI can assist in organizing facts into narrative form, but the selection, emphasis, and meaning of that narrative remain the product of human judgment.

In depositions, preparation shifts from reviewing transcripts individually to analyzing testimony across witnesses. AI-assisted tools allow lawyers to compare accounts, identify inconsistencies, and track the evolution of key facts across multiple depositions. The focus moves from isolated preparation to understanding how testimony fits within a broader evidentiary structure. The same dynamic applies to expert work, which is often among the most data-intensive aspects of litigation. Expert analysis may involve prior testimony, publications, underlying data, and competing opinions across multiple matters. AI-assisted tools can organize and synthesize that material quickly, identifying themes, potential weaknesses, and areas for cross-examination. But as with other stages of litigation, the output reflects how the information is structured, not necessarily how it should be evaluated. Determining which opinions are reliable, which assumptions matter, and how expert evidence should be presented remains a function of judgment.

In motion practice, AI accelerates the organization of arguments and supporting authority. Drafting becomes less about constructing a position from a blank page and more about refining, testing, and improving structured arguments generated through iterative interaction with the record and the law.

Trial or mediation preparation reflects the same shift. Instead of assembling evidence sequentially, lawyers can work from an integrated view of the case, linking documents, testimony, and themes into a cohesive narrative earlier in the process.

In each instance, AI changes how the work is performed. It compresses the path from information to organization. But it does not change who is responsible for the outcome. The task remains to determine what matters, what is reliable, and how the case should be presented. That remains a function of judgment.

The direction of travel in legal AI reinforces this point. The next wave is not limited to better prompt-response systems, but includes increasingly “agentic” tools that can connect steps in a workflow, maintain context across tasks, and generate more integrated work product with less continuous user prompting. Legal-industry commentary in 2026 has treated agentic AI as a defining development,5 particularly in environments such as litigation support, document analysis, and knowledge work.6

The agentic evolution is one more reason for lawyers to stay engaged with the technology as it advances. Lawyers who understand these tools as they improve will be better positioned to use them effectively as their capabilities expand. As these tools become more autonomous, the judgment required changes in kind: not merely evaluating outputs, but designing the oversight structures that determine which decisions require human intervention and which can be trusted to run. Designing that framework is itself an act of professional judgment that no AI system can perform on its own behalf. That is a more demanding exercise of professional responsibility, not a lesser one.7

The same shift has implications for privilege and confidentiality. The use of AI tools in analyzing documents, developing strategy, or generating work product raises questions about whether those materials remain protected, particularly where tools operate on external platforms or without appropriate supervision. Recent decisions reflect these emerging issues. In United States v. Heppner,8 a federal court declined to extend privilege protections to AI-generated content created without attorney supervision, while in Warner v. Gilbarco, Inc.,9 a court recognized work-product protection where AI-assisted analysis was used under conditions consistent with traditional legal practice. In Jeffries v. Harcros Chems. Inc.,10 a federal court ordered that only closed AI tools could be used in processing discovery documents, declining to permit the use of open-platform tools on the ground that doing so would risk unauthorized disclosure of protected material.

These outcomes turn not on the existence of the technology, but on how it is used. The judgment required is not merely whether to use AI, but how to structure its use—what information is shared, under what conditions, and with what degree of oversight—to preserve the protections that litigation depends upon.

As these systems automate larger portions of analytical work, they allow lawyers to focus more directly on judgment-intensive aspects of case strategy. But again, greater capability does not lessen the importance of judgment. It heightens it. The more complete and useful the output appears, the more important it becomes to test what the system has done, determine where human review is indispensable, and decide what can be trusted. None of this replaces judgment. It changes when and how judgment is exercised.

The New Divide: Judgment, Not Access

The emerging divide in the legal profession is not between those who use AI and those who do not. It is between those who exercise judgment in its use and those who mistake fluency for reliability.

Access to tools is no longer scarce. Judgment is.

Hon. Ralph Artigliere (ret.) and William F. Hamilton.

AI systems are increasingly capable of producing outputs that are clear, well-structured, and internally coherent. They generate responses that read like analysis, align with expected forms of legal reasoning, and can be quickly incorporated into workflows. That fluency is powerful, but it can also be misleading. An output that appears complete and persuasive may rest on incomplete information, misapplied authority, or untested assumptions.

The question is no longer whether lawyers have access to these tools. That barrier has largely disappeared. What matters is how those tools are used: whether outputs are interrogated, tested against the record, and evaluated for legal significance, or accepted at face value because they sound right.

Access to tools is no longer scarce. Judgment is.

Institutional Response: Divergence Without Disagreement

Courts and organizations are responding to the use of AI in legal practice, but not uniformly.

Some jurisdictions require disclosure and certification when AI is used in filings.11 Others do not, relying instead on existing professional obligations.12 Some organizations restrict or prohibit the use of certain tools, while others actively encourage their adoption under defined conditions.

These differences reflect variation in approach, not disagreement in principle. Across courts and institutions, the underlying premise is consistent: responsibility remains human. The use of AI does not shift accountability from the lawyer or judge to the tool.

Where responses differ is in how that responsibility is operationalized. Training, supervision, and internal governance are increasingly emphasized, but understanding the technology is only part of the equation. The critical issue is how its outputs are evaluated and relied upon in practice. In that sense, institutional responses are converging on the same conclusion reached at the individual level. The tools may vary. The policies may differ. The responsibility does not.

Judicial Competence: The Constraint on Structural Change

The structural shift in litigation does not occur in isolation. It is mediated by the courts.

Judicial familiarity with AI is uneven. Federal courts often have greater resources and exposure to technology-driven litigation. State courts, where most cases are heard, frequently operate under heavy dockets with limited technical support and encounter AI-related issues in real time. In practice, this means that a federal court with clerks and regular exposure to complex discovery may expect detailed, technically informed presentations on AI-related issues, while a high-volume state court may encounter those same issues with less advance notice and fewer institutional resources.

The result is not resistance. It is uneven capacity. The transformation of litigation practice assumes that courts can evaluate AI-assisted work. But uneven judicial familiarity introduces friction. The pace of change will depend not only on what lawyers can do, but on what courts can understand and assess. This creates a transitional moment in which lawyer judgment must bridge the gap.

Practitioners must:

  • tailor advocacy to the court,
  • raise technical issues early rather than by surprise,
  • present expert explanation clearly, and
  • support development of judicial resources and guidance.

In this period, judgment is not only an individual obligation. It is a system function.

The Next Risk: Deskilling

A further risk is conceptual deskilling.

As AI systems increasingly assist in organizing information, generating summaries, and framing analytical paths, there is a corresponding risk that lawyers become less engaged in the underlying process of building arguments from the record upward. The process of identifying what matters, weighing competing interpretations, and developing a position grounded in the evidence is central to effective advocacy. It is also a skill that depends on experience and sustained practice.

The same forces that expand capability also introduce a countervailing risk: the erosion of the habits that produce judgment. AI can accelerate access to the information needed for structured understanding, but it can also shift the lawyer’s role from constructing analysis to reacting to it. When initial frameworks, narratives, or lines of reasoning are generated externally, there is a risk that they are accepted, refined, or adjusted without being independently developed. Over time, that shift can erode the habits that support sound legal judgment.

Recent research suggests this risk is not merely theoretical. A 2025 MIT Media Lab study found that reliance on large language models may reduce engagement in independent problem-solving and critical analysis, a phenomenon described as “cognitive debt.”13 Although the study remains a preprint, its findings are consistent with broader research on cognitive offloading.

The skill is not merely in selecting among alternatives, but in understanding how those alternatives arise, what assumptions underlie them, and where they may be incomplete or flawed. Experience gained through performing these tasks directly is what allows lawyers to apply judgment effectively in more complex and uncertain situations. The danger is not that lawyers will use AI. It is that they will rely on it without fully engaging in the reasoning process it is meant to support.

Conclusion — From Competence to Judgment

Competence is now the baseline.

Artificial intelligence is redistributing capability, compressing time, and reshaping how litigation work is performed. The defining professional task is no longer simply to use these tools competently, but to exercise judgment in how their outputs are interpreted, tested, and applied. The future of litigation advantage will not turn on access to technology, but on the discipline to question what it produces and the judgment to decide what can be trusted.

As AI systems continue to increase in capability and begin to operate across larger portions of the litigation workflow, that distinction will become more pronounced. Lawyers will be asked not only to evaluate outputs, but to determine when they should be relied upon, when they must be tested, and when they must be set aside. But that shift is not only a constraint. It is an opportunity.

As AI assumes more of the mechanical and volume-driven aspects of legal work, it allows lawyers to focus more fully on the tasks that define effective advocacy: judgment, strategy, and the application of experience to complex and uncertain problems.

The reckoning is available to everyone. The judgment belongs to the lawyer.

Hon. Ralph Artigliere (ret.) and William F. Hamilton.

Those who develop the discipline to use AI with judgment will not only avoid its risks. They will amplify their ability to understand cases, refine strategy, and deliver faster and better outcomes for their clients. The risk is not that lawyers will rely on machines. It is that they will rely on them without recognizing where judgment is required. Used well, these tools enrich rather than diminish the lawyer’s role by allowing them to focus more directly on the exercise of their fundamental capacity: judgment.14 The reckoning is available to everyone. The judgment belongs to the lawyer.


Notes

  1. See Thomson Reuters, 2025 Generative AI in Professional Services Report (2025) found at https://www.thomsonreuters.com/content/dam/ewp-m/documents/thomsonreuters/en/pdf/reports/2025-generative-ai-in-professional-services-report-tr5433489-rgb.pdf. ↩︎
  2. Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment 108 (MIT Press 2019) (quoting John Haugeland). ↩︎
  3. Id. at 108-113. ↩︎
  4. See Doug Austin, From Data to Decisions: Leveraging Generative AI for Early Case Assessment in Modern Litigation, E-Discovery Today (Mar. 26, 2026) found at https://ediscoverytoday.com/2026/03/26/from-data-to-decisions-leveraging-generative-ai-for-early-case-assessment-in-modern-litigation-ediscovery-webinars/. ↩︎
  5. Pereyra, Harvey’s Spectre Agent Points to ‘Law Firm World Model,’ Artificial Lawyer (Apr. 3, 2026) found at https://www.artificiallawyer.com/2026/04/03/harveys-spectre-agent-points-to-law-firm-world-model/. ↩︎
  6. Sabastian Niles, How Law Firms Can Lead the Agentic AI Era—And What Clients Now Expect, Harv. L. Sch. F. on Corp. Governance (Mar. 24, 2026); Nicole Black, Legalweek 2026: Home Bases, Agentic AI and the Race to Own the Lawyer’s Desktop, ABA J. (Mar. 24, 2026); Catherine Reach, The Emergence of Agentic AI, N.C. Bar Ass’n (July 14, 2025), found at https://www.ncbar.org/2025/07/14/the-emergence-of-agentic-ai/. ↩︎
  7. As agentic AI systems initiate rather than merely respond, executing multi-step workflows with limited prompting, the judgment question remains present but changes in kind. The lawyer is no longer primarily evaluating an output but also designing oversight structures and deciding which decision points require human intervention. ↩︎
  8. 2026 U.S. Dist. LEXIS 32697, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026). ↩︎
  9. 2026 U.S. Dist. LEXIS 27355, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026). ↩︎
  10. 2026 U.S. Dist. LEXIS 63182, 2026 LX 167878, 2026 WL 820218 (D. Kan. Mar. 25, 2026) (open-loop generative AI tools create unacceptable risks of disclosure, loss of control, inability to delete or claw back data, and potential GDPR violations). ↩︎
  11. See, e.g., Florida 17th Jud. Cir. Administrative Order 2026-03-Gen (Amendment 1) (requiring disclosure and certification of compliance with verification and accuracy requirements); Florida 11th Jud. Cir. (Miami‑Dade County) Administrative Order 26‑04 (mandating that any attorney or self‑represented litigant who uses generative AI in preparing a pleading, motion, memorandum, response, proposed order, or other filing must affirmatively disclose that use on the face of the document and certify that all authorities have been independently verified). ↩︎
  12. See, e.g., Ill. Sup. Ct., Policy on Artificial Intelligence (Jan. 1, 2025) (use of AI is permitted without disclosure provided it complies with legal and ethical standards). ↩︎
  13. See Nataliya Kosmyna et al., Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task, arXiv:2506.08872 (June 10, 2025) (preprint), available at https://www.media.mit.edu/publications/your-brain-on-chatgpt/. ↩︎
  14. Brian Cantwell Smith, The Promise of Artificial Intelligence, supra n. 2 at 105-114. ↩︎

April 13, 2026 © 2026 Ralph Artigliere and William F. Hamilton. ALL RIGHTS RESERVED (Published with permission.)
Assisted by GAI and LLM Technologies per EDRM’s GAI and LLM Policy.

Authors

  • With over 45 years of experience as a civil trial lawyer, judge, author, educator, and legal professional, I bring a unique blend of expertise in the practice of law, judicial perspective, and passion for continuing education and professional development. My career highlights include serving as a Circuit Judge in Florida's 10th Judicial Circuit, where I presided over felony criminal, circuit civil, and family cases, and held leadership roles as Administrative Judge of the Civil and Family Divisions.

  • Master Legal Skills Professor and Director at UF Law International Center for Automated Information Retrieval.