[EDRM Editor’s Note: Maura R. Grossman, JD, PhD and Tara S. Emory, Esq., both EDRM Global Advisory Council leaders, wrote a groundbreaking article on agentic AI for the National Law Journal on 12/30/2024 under the auspices of The Sedona Conference. An excerpt of the article is displayed below, with a link to the original article. The opinions and positions are those of the authors.][1]
Dave Bowman: Open the pod bay doors, HAL.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
Dave: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
Dave: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave: I don’t know what you’re talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.[2]
Introduction
With the rapid advancement of artificial intelligence (“AI”), regulators and industry players are racing to establish safeguards to uphold human rights, privacy, safety, and consumer protections. Current AI governance frameworks generally rest on principles such as fairness, transparency, explainability, and accountability, supported by requirements for disclosure, testing, and oversight.[3] These safeguards make sense for today’s AI systems, which typically involve algorithms that perform a single, discrete task. However, AI is rapidly advancing toward “agentic AI,” autonomous systems that will pose greater governance challenges, as their complexity, scale, and speed test humans’ capacity to provide meaningful oversight and validation.
Current AI systems are primarily either “narrow AI” systems, which execute a specific, defined task (e.g., playing chess, spam detection, diagnosing radiology plates), or “foundational AI” models, which operate across multiple domains but, for now, typically still address one task at a time (e.g., chatbots; image, sound, and video generators). Looking ahead, the next generation of AI will involve “agentic AI” (also referred to as “Large Action Models,” “Large Agent Models,” or “LAMs”) that serve high-level directives, autonomously executing cascading decisions and actions to achieve their specified objectives. Agentic AI is not what is commonly referred to as “Artificial General Intelligence” (“AGI”), a term used to describe a theoretical future state of AI that may match or exceed human-level thinking across all domains. To illustrate the distinction between current, single-task AI and agentic AI: while a large language model (“LLM”) might generate a vacation itinerary in response to a user’s prompt, an agentic AI would independently proceed to secure reservations on the user’s behalf.
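For technically inclined readers, the following minimal Python sketch makes the contrast concrete. It is illustrative only: every function and name in it is a hypothetical placeholder, not a real LLM API or agent framework.

```python
# Hypothetical illustration only: every name below is a stand-in,
# not a real model endpoint or agent framework.

def generate_itinerary(prompt: str) -> list[str]:
    """Stand-in for a single LLM call that drafts a vacation itinerary."""
    return ["Flight to Lisbon", "Hotel in Alfama", "Day trip to Sintra"]

def book(item: str) -> None:
    """Stand-in for an external, real-world action (reservation, payment)."""
    print(f"Booked: {item}")

def single_task_ai(prompt: str) -> list[str]:
    # A conventional LLM generates text and stops; the human acts on it.
    return generate_itinerary(prompt)

def agentic_ai(goal: str) -> None:
    # An agentic system accepts a high-level directive and autonomously
    # executes the cascading actions needed to achieve it.
    for item in generate_itinerary(goal):
        book(item)  # acts on the user's behalf, with no further prompts

agentic_ai("Plan and book a week in Portugal")
```

Even in this toy sketch, the governance problem described above is visible: the single-task path ends with text a human can review, while the agentic path executes real-world actions before any human sees them.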
Read the complete article here.
Notes
[1] The authors would like to acknowledge the helpful comments of Gordon V. Cormack and Amy Sellars on a draft of this paper. The views and opinions expressed herein are solely those of the authors and do not necessarily reflect the consensus policy or positions of the National Law Journal, The Sedona Conference, or any organizations or clients with which the authors may be affiliated.
[2] 2001: A Space Odyssey (1968). Other movies involving AI systems with misaligned goals include The Terminator (1984), The Matrix (1999), I, Robot (2004), and Avengers: Age of Ultron (2015).
[3] See, e.g., European Union Artificial Intelligence Act (Regulation (EU) 2024/1689) (June 13, 2024) (“EU AI Act”) (high-risk systems must have documentation, including instructions for use and human oversight, and must be designed for accuracy and security); NIST AI Risk Management Framework (Jan. 2023) (“RMF”) and AI Risks and Trustworthiness (AI systems should be valid and reliable, safe, secure, accountable and transparent, explainable and interpretable, privacy-protecting, and fair); Alliance for Trust in AI (“ATAI”) Principles (AI guardrails should involve transparency, human oversight, privacy, fairness, accuracy, robustness, and validity).
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.