
Suggested A.I. Rule – Suggested Amendment to Maryland’s Computer-Generated Evidence Rule
This article proposes revising Maryland Rule 2-504.3 to expressly cover generative AI evidence. The amendment would add notice, disclosure, discovery, pretrial hearing, and expert-related procedures for computer-generated evidence in civil cases.
From Digital Investigations to Business Resilience: Key Private Sector Trends for 2026
Digital investigations are evolving from reactive tools into core business capabilities. The 2026 private sector trends highlight growing data complexity, practical AI adoption, and a shift toward proactive resilience.
Important A.I. Work Product and Protective Order Decision: Application to Pro Se Litigant and Beyond?
Can using AI in litigation stay protected, or does it risk exposing strategy? A new decision in Morgan v. V2X draws a careful line, protecting AI-assisted work product while placing real limits on how confidential...
HaystackID® in the EDRM Illumination Zone: Michael Cammack and Stephanie Wienke
In this EDRM Illumination Zone episode, experts explore how AI governance becomes an operational discipline shaped by leadership, validation, and the management of governance debt.
“Colorado policy could shield AI from complaints regarding unauthorized practice of law”
A new Colorado policy could limit enforcement of unauthorized practice of law claims against AI tools, even as litigation against OpenAI alleges such tools engage in UPL.
Five Faces of the Black Box: How AI ‘Thinks’ and Makes Decisions
Ralph Losey goes deep into how LLMs work, with visuals to represent how hallucinations might happen. Written for a legal tech audience, this explanation and analysis are applicable to other disciplines.
When the Agent Goes Off-Script: Meta’s AI-Triggered Data Exposure Revives Old Security Fears
Meta’s March 2026 AI agent incident exposed sensitive internal data and highlighted a growing enterprise risk: autonomous systems acting beyond governance controls. For cybersecurity, legal, and eDiscovery professionals, the event signals a shift in how...
Nonsensical Spellings and Fabricated Authority Signal Improper Use of Artificial Intelligence
A federal court found that nonsensical spellings and fabricated authority revealed improper AI use, resulting in dismissal without leave to amend.
Illumination Zone: Episode 228 | Jon Robins of Level Legal sits down with Mary Mack and Holley Robinson
In this episode, Jon shares Level Legal’s approach to building systems, emphasizing the importance of asking the right initial questions and the value of being tool-agnostic, particularly in AI integration. The conversation also explores Level...
HaystackID CoreFlex Wins 2026 Artificial Intelligence Excellence Award for Advancing GenAI-Driven Legal Workflows
HaystackID’s CoreFlex platform earns 2026 AI Excellence Award, highlighting its role in transforming legal data workflows with generative AI and automation.
The M&A Risk of Confusing Market Velocity with Marketing Capability
This analysis examines a critical M&A risk: mistaking market-driven momentum for durable marketing capability. It outlines how this misread impacts valuation, diligence, and integration outcomes.
Well-Reasoned “Hallucination” Analysis
A federal court decision in Brownfield v. Cherokee Co. School Dist. clarifies that failing to verify AI-generated citations can trigger Rule 11 sanctions, even for pro se litigants.
