
The AI Sanction Wave: $145K in Q1 Penalties Signals Courts Have Lost Patience with GenAI Filing Failures
Courts are escalating enforcement against AI-generated hallucinations in legal filings, with $145,000 in Q1 2026 sanctions. Key rulings, a growing judicial AI paradox, and emerging liability risks for developers signal a major shift for legal...
Important A.I. Work Product and Protective Order Decision: Application to Pro Se Litigant and Beyond?
Can using AI in litigation stay protected, or does it risk exposing strategy? A new decision in Morgan v. V2X draws a careful line, protecting AI-assisted work product while placing real limits on how confidential...
“Colorado policy could shield AI from complaints regarding unauthorized practice of law”
A new Colorado policy could limit enforcement of unauthorized practice of law claims against AI tools, even as litigation against OpenAI alleges such tools engage in UPL.
Something Big Is Happening — But Not What You Think
Attorney Ralph Losey, a pioneer in machine learning, AI, and quantum law, rebuts Matt Schumer’s viral essay on the immediate impact of existing AI, revisiting past claims, including the bar passage rate.
Book Review: Tom O’Connor, “Artificial Intelligence for the Rest of Us”
Tom O’Connor’s latest book, Artificial Intelligence for the Rest of Us, is a practical guide for legal professionals navigating AI tools and ethics. With contributions from Rakesh Madhava, Brett Burney, Elizabeth Guthrie, and David D....
2025’s Data Upheaval: What AI, Third-Party Risk, and Data Sprawl Mean for Your 2026 Strategy
2025 marked a seismic shift in data governance as AI adoption soared, third-party risks expanded, and courts demanded automation. Learn how legal and compliance teams must adapt in 2026.
Three Major LLMs Released in Twelve Days: Why Single-Model Discovery Platforms Are Now a Liability
OpenAI, Google, and Anthropic each launched new flagship LLMs in under two weeks. Legal discovery platforms relying on a single model are now structurally disadvantaged. Here’s why multi-model architectures are the only sustainable solution.
Reasonable or Overreach? Rethinking Sanctions for AI Hallucinations in Legal Filings
When AI-generated hallucinations appear in court filings, how should judges respond? A new four-pillar framework proposes principled, proportional sanctions to protect the justice system without overreach.
When AI Conversations Become Compliance Risks: Rethinking Confidentiality in the ChatGPT Era
AI chats may seem private, but legal experts warn they could become courtroom evidence. As AI integrates into legal work, professionals must rethink digital confidentiality and privilege risks.
The Shape of Justice: How Topological Network Mapping Could Transform Legal Practice
What if justice had a shape — not rigid scales or a blindfolded figure, but a living, dynamic map? Imagine causation as a multidimensional space, where influence, control, and responsibility could be mapped across a...
Epiphanies or Illusions? Testing AI’s Ability to Find Real Knowledge Patterns – Part Two
The moment of truth had arrived. Were ChatGPT’s insights genuine epiphanies, valuable new connections across knowledge domains with real practical and theoretical implications, or were they merely convincing illusions? Had the AI genuinely expanded human...
When AI Policies Fail: The AI Sanctions in Johnson v. Dunn and What They Mean for the Profession
The Johnson v. Dunn case marks a turning point in judicial tolerance for AI citation errors. Despite clear firm policies and experienced counsel, the court imposed severe sanctions, signaling that only individual verification, not institutional...
