The AI Sanction Wave: $145K in Q1 Penalties Signals Courts Have Lost Patience with GenAI Filing Failures
Courts are escalating enforcement against AI-generated hallucinations in legal filings, with $145,000 in Q1 2026 sanctions. Key rulings, a growing judicial AI paradox, and emerging liability risks for developers signal a major shift for legal...
HaystackID Named Finalist for Intelligent Insurer’s Cyber Insurance Awards USA 2026 in Two Categories
HaystackID has been named a finalist in two categories at the Intelligent Insurer Cyber Insurance Awards USA 2026, recognizing its expertise in incident response, digital media authentication, and cybersecurity solutions.
Hallucination or Old-Fashioned Error? It Doesn’t Matter
In Quandel v. Hunt, the court made clear that lawyers cannot excuse nonexistent case citations and quotations by blaming AI, or anything else. The real issue, the court said, is the duty to verify authorities...
Suggested A.I. Rule – Suggested Amendment to Maryland’s Computer-Generated Evidence Rule
This article proposes revising Maryland Rule 2-504.3 to expressly cover generative AI evidence. The amendment would add notice, disclosure, discovery, pretrial hearing, and expert-related procedures for computer-generated evidence in civil cases.
From Digital Investigations to Business Resilience: Key Private Sector Trends for 2026
Digital investigations are evolving from reactive tools into core business capabilities. The 2026 private sector trends highlight growing data complexity, practical AI adoption, and a shift toward proactive resilience.
Important A.I. Work Product and Protective Order Decision: Application to Pro Se Litigant and Beyond?
Can using AI in litigation stay protected, or does it risk exposing strategy? A new decision in Morgan v. V2X draws a careful line, protecting AI-assisted work product while placing real limits on how confidential...
HaystackID® in the EDRM Illumination Zone: Michael Cammack and Stephanie Wienke
In this EDRM Illumination Zone episode, experts explore how AI governance becomes an operational discipline shaped by leadership, validation, and governance debt.
“Colorado policy could shield AI from complaints regarding unauthorized practice of law”
A new Colorado policy could limit enforcement of unauthorized practice of law claims against AI tools, even as litigation against OpenAI alleges such tools engage in UPL.
Five Faces of the Black Box: How AI ‘Thinks’ and Makes Decisions
Ralph Losey goes deep into how LLMs work, with visuals illustrating how hallucinations can happen. Written for a legal tech audience, this explanation and analysis applies to other disciplines as well.
When the Agent Goes Off-Script: Meta’s AI-Triggered Data Exposure Revives Old Security Fears
Meta’s March 2026 AI agent incident exposed sensitive internal data and highlighted a growing enterprise risk: autonomous systems acting beyond governance controls. For cybersecurity, legal, and eDiscovery professionals, the event signals a shift in how...
Nonsensical Spellings and Fabricated Authority Signal Improper Use of Artificial Intelligence
A federal court found that nonsensical spellings and fabricated authority revealed improper AI use, resulting in dismissal without leave to amend.
Illumination Zone: Episode 228 | Jon Robins of Level Legal sits down with Mary Mack and Holley Robinson
In this episode, Jon shares Level Legal’s approach to building systems, emphasizing the importance of asking the right initial questions and the value of being tool-agnostic, particularly in AI integration. The conversation also explores Level...

