“Against an AI Privilege” – Are Prompts Discoverable? Is Output?

By Michael Berman, E-Discovery LLC.
Image: Holley Robinson, EDRM.

[EDRM Editor’s Note: This article was first published here on January 2, 2026. The opinions and positions are those of Michael Berman.]


Prof. Ira P. Robbins published an important article, Against an AI Privilege, Harvard Journal of Law & Technology (Nov. 7, 2025).

Prof. Robbins poses the question of whether communications with artificial intelligence systems “deserve protection in court under the rules of evidence akin to attorney-client, psychotherapist-patient, or spousal privileges.”

He argues “that—at least under current technological, social, and institutional conditions—any such privilege would be premature, unworkable, and inconsistent with the historically rooted approach to evidentiary privileges.”


Prof. Robbins’ argument is that the philosophical bases for protecting other privileges are “wholly absent from AI interactions.”  He asserts that:  “Extending privilege to these communications would be both unnecessary and affirmatively harmful, undermining the truth-seeking function of the courts without delivering the human-centered benefits that justify traditional privileges. Recognizing an AI privilege would entrench corporate opacity precisely when courts need transparency.”

He tests what he calls the strongest case for an AI privilege, “namely, that extending protection to certain AI interactions could promote candor, safeguard personal autonomy, and encourage more responsible use of emerging technologies….” He ultimately rejects that theory, writing:

Two premises frame this Part. First, privileges are disfavored carve-outs from the truth-seeking function; courts recognize them only where secrecy is necessary to sustain a socially vital human relationship. Second, AI is not a relationship bearer; it is a commercial, code-mediated tool operating within data practices and policies that can change. Taken together, these premises place an AI privilege at odds with doctrine and design. Unlike human privileged relationships, an AI privilege would chiefly insulate providers and their systems from scrutiny, inverting the usual calculus at the public’s expense.

Prof. Ira P. Robbins, “Against an AI Privilege,” Harvard Journal of Law & Technology (Nov. 7, 2025).

The article suggests that: “Recognized privileges share four elements: (1) a trusting human relationship; (2) an enforceable duty of confidentiality on the recipient; (3) communications made for the protected purpose; and (4) a public interest sufficient to outweigh lost evidence.” It argues that those elements are absent in artificial intelligence. For example, the professor points out that communications with an attorney are privileged because the attorney owes an enforceable duty of confidentiality, an obligation generally absent from AI providers.

Even if one charitably characterizes certain AI tools as “counselor-like” or “paralegal-like,” the law protects relationships, not functionalities. A calculator can compute like an accountant, but nobody suggests a calculator privilege. The same logic applies to a generative model that drafts a brief or offers a cognitive behavioral therapy exercise; simulation does not create a privileged tie.

Prof. Ira P. Robbins, “Against an AI Privilege,” Harvard Journal of Law & Technology (Nov. 7, 2025).

Further, because AI is not a “legal person,” it cannot owe a fiduciary duty. Prof. Robbins argues that “[p]latform privacy promises do not fix accountability.” One key factor seems to be that “[t]he expectation of secrecy remains contingent.”

The article draws an important distinction in the lawyer-client context:

“Where AI assists counsel, ordinary rules already apply. No new AI privilege is needed….” [emphasis added].

Prof. Ira P. Robbins, “Against an AI Privilege,” Harvard Journal of Law & Technology (Nov. 7, 2025).

However: “A [non-attorney] user who queries a standalone AI system outside a lawyer-client relationship is not communicating with counsel through an agent. Nor is the AI an extension of counsel’s judgment.”

The article also addresses other privileges, including the mental-health, clergy, and reporter’s privileges. The professor argues: “AI fits none of these molds. It is neither clergy nor journalist.”

One recommendation made in the article is: “Legislatures can craft purpose-built confidentiality rules for contexts where users predictably reveal highly sensitive information to AI tools—for example, mental-health self-help, crisis support, or sexual-assault resources. These statutes could limit collection and ensure narrow judicial access.”  In that regard, the article makes a number of suggestions, such as short retention periods, and clear terms of service and privacy policies.

However, Prof. Robbins writes:

For lawyers, the existing framework suffices: communications are privileged when made between lawyer and client for the purpose of obtaining legal advice. If used under counsel’s direction, existing privilege rules apply. By contrast, a client’s independent conversations with a public chatbot remain outside privilege, unless integrated into counsel’s legal advice.

Prof. Ira P. Robbins, “Against an AI Privilege,” Harvard Journal of Law & Technology (Nov. 7, 2025).

He concludes: “An AI privilege is neither doctrinally justified nor normatively sound. Privilege doctrine protects relationships of human trust, not interactions with commercial systems. The law’s restraint here is not inertia but wisdom—preserving transparency until genuine relational and societal need emerges.”

Read the original article here.


Assisted by GAI and LLM Technologies per EDRM’s GAI and LLM Policy.
