
[EDRM Editor’s Note: The opinions and positions are those of Michael Berman.]
The sequel Jaws 2 famously used the tagline, “Just when you thought it was safe to go back in the water….”
Law360 reports on this risk in “Attys Beware: Generative AI Can Also Hallucinate Metadata,” a November 4, 2025, article by Daniel Garrie, Jennifer Deutsch, and Morgan Ward Doran. The article states:
When AI generates a document, it may quietly populate or modify hidden fields that are embedded in the document — called metadata — with fictitious or misleading information. These AI-generated hallucinations are just as dangerous, if not more so, as errors in the body of documents, because they are overlooked by most users, appear authentic, and can have significant implications in discovery, authentication and privilege disputes.
The authors note the publicity over hallucinated case citations, but add “relatively little attention has been paid to metadata hallucinations in those same documents.”
The article explains: “Metadata hallucinations arise from the way large language models generate output…. The result is a hybrid output in which the software may attribute authorship to one user, while the AI inserts conflicting or invented details into the metadata field ‘Author.’” It continues:
Hallucinations in AI-generated documents can surface across multiple layers of metadata. Importantly, these errors are not confined to obscure technical fields; they can appear both in the metadata that any user can readily display with a few mouse clicks, and in the deeper metadata that is only viewable using specialized forensic or e-discovery tools.
The results of their experiment counsel caution and additional diligence:
As research for this article, we directed several AI tools to generate office-type documents and examined whether they contained metadata hallucinations. They all did. Across document types, the AI tools fabricated information for several key metadata fields commonly found in office-type documents, including the date, author and comment fields.
The authors also report, alarmingly, fabricated hash values, version histories, and inaccurate time stamps. As a result, the article suggests that “legal and forensic practitioners must now assess whether it is necessary to cross-check each document’s metadata against other supporting evidence to determine its provenance and accuracy. This introduces a new elemental requirement into forensic analysis: one that is not generally known; has not been widely adopted; and necessitates education, time and resources to implement.”
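The kind of cross-checking the authors describe need not rely on what an AI tool (or any intermediary) reports about a document. As one illustration only, and not a method from the article, the sketch below uses only the Python standard library to read the core metadata embedded inside a .docx file (which is a ZIP archive containing a docProps/core.xml part) and to compute an independent SHA-256 hash of the file bytes, rather than trusting a hash value a tool claims to have calculated. The function names are hypothetical.

```python
import hashlib
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by the docProps/core.xml part of a .docx file.
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def extract_core_metadata(path):
    """Read author/title/date fields straight from the document's own
    core.xml part, instead of relying on values reported elsewhere."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))

    def text(tag):
        el = root.find(tag, NS)
        return el.text if el is not None else None

    return {
        "author": text("dc:creator"),
        "title": text("dc:title"),
        "created": text("dcterms:created"),
        "modified": text("dcterms:modified"),
        "last_modified_by": text("cp:lastModifiedBy"),
    }

def independent_hash(path):
    """Compute SHA-256 over the file bytes directly; never accept a
    hash value that a generative AI tool asserts in its output."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Values pulled this way can then be compared against other supporting evidence (file-system timestamps, email headers, custodian interviews) to assess provenance, which is the cross-checking discipline the article calls for.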
Generative AI metadata hallucination is an underreported and underappreciated, but increasingly important, risk in this blossoming era of AI-driven legal practice.
Daniel Garrie, Jennifer Deutsch & Morgan Ward Doran, Law360 (Nov. 4, 2025).
Thanks to Law360 for this important article.
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

