Important A.I. Work Product and Protective Order Decision: Application to Pro Se Litigant and Beyond?

By Michael Berman, E-Discovery LLC.
Image: Holley Robinson, EDRM.

[EDRM Editor’s Note: The opinions and positions are those of Michael Berman.]


“AI is forcing litigants and courts to confront difficult questions about how and to what extent longstanding protections will apply when parties use AI to assist them in the litigation process. In particular, courts are beginning to wrestle with practical questions surrounding confidentiality, work product, and privilege. This dispute raises two such questions: (1) to what extent will work product protections apply to a pro se litigant’s use of AI, and (2) to what extent should a protective order expressly restrict the use of AI? The Court addresses each question in turn.” Morgan v. V2X, Inc., 2026 WL 864223 (D. Colo. Mar. 30, 2026).

One very important holding is that a litigant does not forfeit a reasonable expectation of privacy merely because an “electronic interaction passes through third-party systems.” Intermediary access alone does not “extinguish” privacy expectations. The question of when the use of AI is protected has become salient.1

The court also held that a pro se plaintiff’s use of AI was work product. “Moreover, and in the context of a pro se litigant’s use of AI to assist with their litigation preparation, the use of AI closely resembles the kind of confidential, strategy-laden iterative work product that Rule 26(b)(3) was designed to protect.” However, the Morgan court required disclosure of the name of the AI tool. 

[I]n the context of a pro se litigant’s use of AI to assist with their litigation preparation, the use of AI closely resembles the kind of confidential, strategy-laden iterative work product that Rule 26(b)(3) was designed to protect.

Morgan v. V2X, Inc., 2026 WL 864223 (D. Colo. Mar. 30, 2026).

The court also issued a protective order barring upload of confidential material to consumer-grade AI platforms. Such protective orders are becoming increasingly common, and, in my opinion, they should routinely be considered. One other court has gone further than Morgan and issued a protective order barring the upload of materials not designated as confidential.2

In Morgan, the court granted in part a request to protect plaintiff under the work product rule, Fed.R.Civ.P. 26(b)(3). The decision began with a summary for the pro se plaintiff:

The Court is granting Defendant’s Motion, in part. Federal Rule of Civil Procedure 26(b)(3) can protect your mental impressions and litigation preparation materials, but you must disclose the name of any AI tool you used in connection with Confidential Information. This is because you have not demonstrated that identifying the tool itself will reveal your mental impressions or legal strategy, and Defendant needs that information to assess whether Confidential Information was compromised. The Court is also amending the Protective Order to address AI use. The practical effect is that you may not upload, input, or submit Confidential Information into any mainstream AI tool like standard ChatGPT, Claude, Gemini, or similar platforms. [emphasis added].

Morgan is an employment discrimination case. Plaintiff alleged wrongful termination among other claims. Defendant asserted that there were legitimate non-discriminatory grounds for termination.

The AI dispute centered on plaintiff’s request for defendant’s insurance policy. Defendant sought disclosure of plaintiff’s AI tool and restrictions on AI use before it would produce the policy. Plaintiff replied that this would create an unfair “technological gap” because defendant had a proprietary AI system.

The court wrote: “Both parties appear to be using AI in connection with their litigation work, but they disagree on how AI should or should not be used in connection with Confidential Information as defined in the current Protective Order.”

The first question was: “Do work product protections apply to pro se Plaintiff’s use of AI, and in particular, to his tool selection?”

The Morgan court analyzed the work product rule, Fed.R.Civ.P. 26(b)(3). It explained that work product applies to “any party” and added that “courts routinely interpret the Rule to apply not just to attorney work product, but to a pro se litigant’s work product as well…. Moreover, courts have broadly interpreted the rule to protect not just litigation preparation materials, but also the mental impressions, opinions, and theories of parties.” [emphasis in original].

The court added:

The importance of applying these protections to pro se litigants is magnified in the context of AI—one of the most powerful knowledge tools ever to become available to the masses. This is because pro se litigants are forced to act as both party and advocate, simultaneously. And for the first time in history, widespread access to powerful technology may make that dual role surmountable. A reading of Rule 26(b)(3) that conditions work product protection over AI materials on the involvement of counsel finds no support in the rule’s text and would further disadvantage unrepresented litigants. Pro se litigants are held to the same standard as represented litigants…. They should be afforded the same protections.

The court reviewed the two leading decisions. Compare United States v. Heppner, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026), with Warner v. Gilbarco, Inc., 2026 WL 373043 (E.D. Mich. Feb. 10, 2026); see “Two Courts, Two Answers: When Does Using AI Waive Privilege?” (Mar. 3, 2026).

The Morgan court distinguished Heppner because here: “A pro se litigant is simultaneously the party and the advocate.” It agreed with Warner: “Plaintiff can assert work product protections in connection with his AI use. It is true that AI systems like ChatGPT, Claude, Gemini, and others widely available to the public, collect user data for training and other purposes. But in this Court’s estimation, that does not eliminate all expectations of privacy or automatically waive protections.”

Importantly, the Morgan court wrote:

Today, nearly all electronic interaction passes through third-party systems. Google, for example, hosts millions of accounts, and by extension, has access to millions of messages, emails, documents, videos, and more. Moreover, we now know that our phones, computers, in-home smart devices, and other electronics, collect information about us to offer more bespoke services. Does that mean that anyone with a Gmail account has forfeited all rights to confidentiality and privacy? In United States v. Warshak, the Sixth Circuit held—albeit in the context of a Fourth Amendment seizure analysis—that email subscribers have a reasonable expectation of privacy in the contents of emails stored with commercial internet service providers. 631 F.3d 266, 268 (6th Cir. 2010). The court rejected the idea that intermediary access alone extinguishes privacy expectations. Id. at 285-86. And though the Tenth Circuit did not fully weigh in on this issue in United States v. Ackerman, 831 F.3d 1292, 1308 (10th Cir. 2016), the Supreme Court has since held that the mere fact that information is held by a third-party intermediary, does not automatically extinguish a reasonable expectation of privacy in that information. Carpenter v. United States, 585 U.S. 296, 310-16 (2018).

Like Warner, the Morgan court looked at practicality: “In other words, even though AI use technically ‘discloses’ information to a third party, it is highly unlikely the information will fall into the hands of an adversary absent some legal process to compel it. Thus, AI interactions do not automatically compromise work product protections.”

AI interactions do not automatically compromise work product protections.

Morgan v. V2X, Inc., 2026 WL 864223 (D. Colo. Mar. 30, 2026).

Having determined that the work product doctrine offered some protection, the court turned to the next issue: “The question is, how far will that protection extend?”

The second question concerned the scope of Rule 26(b)(3) protections.

The court explained that plaintiff wanted to shield not just outputs, but also the name of the AI tool. It wrote that some courts have interpreted the doctrine to shield processes. It then reasoned:

But the Court does not need to delve into that issue here. Even if it is possible that in some contexts disclosing an AI tool can reveal mental impressions or strategy, Plaintiff has not carried his burden to demonstrate that here…. At best, Plaintiff states, in conclusory fashion and without any legal or factual support, that “[t]he selection and use of specific litigation support tools reveal a party’s mental impressions, case strategy, and legal resource allocation.”

On that predicate, “the Court cannot discern from that, or any other information Plaintiff provides, how disclosing the name of an AI tool would reveal Plaintiff’s mental impressions or case strategy. Moreover, Defendant’s request for the name of the tool is legitimate and reasonable. If Plaintiff already submitted Confidential Information to some AI system—and it appears he has—Defendant is entitled to know which system.”

The third question was: “To what extent should a protective order restrict AI use?”

Defendant asked for the following:

Restrictions on Use of AI to process Confidential or Highly Confidential Information: Absent notice to and written permission from the producing party, any person or entity authorized to have access to Confidential Information under the terms of this Order:

a. shall not use or employ any application, service, or analytical software that will transfer, transmit, send or allow access to Confidential Information, in whole or in part – including metadata, unless such application, service or analytical software:

i. does not further transfer the Confidential Information to another provider, unless the receiving party has confirmed through due diligence that the security and privacy controls of and contractual obligations for such provider allow that party to comply with its obligations under this Protective Order; and

ii. provides the receiving party the ability to remove or delete from the system all Confidential information.

b. shall not permit any Confidential Information to be used to train any artificial intelligence tool. These restrictions apply to the use of advanced or generative AI tools from OpenAI’s GPT or ChatGPT, Harvey.AI, Google’s Bard, Anthropic’s Claude, and similar tools or applications.

Plaintiff counter-proposed:

Any party utilizing third-party software, cloud-based platforms, or artificial intelligence tools for the storage, processing, review, or analysis of Confidential Information must ensure that such tools operate within a secure, closed-circuit environment. No Confidential Information may be uploaded to any platform or service whose Terms of Service permit the provider to utilize the uploaded data for the training of Large Language Models (LLMs), machine learning algorithms, or for any internal human-in-the-loop review.

The Morgan court found that: (1) plaintiff’s request did not offer sufficient guardrails; and (2) defendant’s proposal was “better” but had “shortcomings.” It then tailored the defense request to this dispute and amended the protective order to state:

No party or authorized recipient may input, upload, or submit CONFIDENTIAL Information into any modern artificial intelligence platform, including any generative, analytical, or large language model-based tool (“AI”), unless the AI provider is contractually prohibited from: (1) storing or using inputs to train or improve its model; and (2) disclosing inputs to any third party except where such disclosure is essential to facilitating delivery of the service. Where disclosure to a third party is essential to service delivery, any such third party shall be bound by obligations no less protective than those required by this Order. In addition, the AI provider must contractually afford the party or authorized recipient the ability to remove or delete all CONFIDENTIAL information upon request. A party intending to use AI that it contends meets these requirements must retain written documentation of these contractual protections.

The Morgan court concluded:

The Court recognizes that practically speaking, and in light of the current state of AI, this provision will (at least for now) bar the parties from using most, if not all, mainstream low-to-no-cost AI to process Confidential Information. This type of restriction disadvantages pro se litigants. Enterprise-tier AI accounts that satisfy these requirements may be available only through organizational procurement processes, or at costs that a pro se litigant is unlikely to bear. But the Court cannot ignore the real risks associated with mainstream tools that persistently collect and store data and could compromise confidentiality.

To be clear, the Court does not intend to leave pro se Plaintiff without the benefits of AI. Modern AI tools may be used in many ways that do not involve uploading Confidential Information, and nothing in this particular Order restricts those uses. What this Order requires is that Confidential Information not be entrusted to platforms that lack the contractual safeguards described above, regardless of the sophistication or apparent trustworthiness of the tool.

In my opinion: (1) the Morgan court’s protective order language should serve as a guidepost; and, (2) the confidentiality analysis regarding transmittal through an intermediary is persuasive.


Notes

  1. See “Two Courts, Two Answers: When Does Using AI Waive Privilege?” (Mar. 3, 2026); A.I. Privilege, Heppner, and How Did the Court Learn About the Absence of Certain Attorney-Client Communications Between Mr. Heppner and His Attorneys? (Mar. 2, 2026); A.I. Documents Deemed Not Privileged (Feb. 12, 2026); A.I. Discovery (Dec. 22, 2025). A leading article is discussed in “Against an AI Privilege” – Are Prompts Discoverable?  Is Output? (Jan. 2, 2026). ↩︎
  2. See Protective Order Limited Uploading Discovery Responses to Open A.I. (Mar. 26, 2026); Order Prohibiting Upload of Confidential Discovery Documents to Artificial Intelligence (“AI”) (Nov. 3, 2025). ↩︎

Assisted by GAI and LLM Technologies per EDRM’s GAI and LLM Policy.

Author

  • Michael Berman

Michael Berman is a practicing lawyer and an adjunct faculty member at the University of Baltimore School of Law. He has published extensively, including as the editor-in-chief and a contributing author of “Electronically Stored Information in Maryland Courts” (Maryland State Bar Ass’n. 2020), co-editing two American Bar Association books on electronic discovery, and co-authoring law review and other articles regarding electronically stored information (“ESI”). He has presented widely in venues ranging from local to national events and has served as a court-appointed ESI discovery supervisor for an ESI protocol.

Owner, E-Discovery LLC