[Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work.]
In a landmark move towards the regulation of generative AI technologies, the White House brokered eight “commitments” with industry giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The discussions, held exclusively with these companies, culminated in an agreement on July 21, 2023. Despite the inherent political complexities, all parties concurred on the necessity for ethical oversight in the deployment of their AI products across several broad areas.
Introduction
These commitments, although necessarily ambiguous, represent a significant step toward what may later become binding law. The companies not only acknowledged the appropriateness of future regulation across eight distinct categories, but also pledged to uphold their ongoing self-regulation efforts in these areas. This agreement thus serves as a kind of foundational blueprint for future AI regulation. Also see the prior U.S. government efforts that precede this blueprint: the AI Risk Management Framework (NIST, January 2023) and the White House Blueprint for an AI Bill of Rights (October 2022).
The eight “commitments” are outlined in this article with analysis, background, and some editorial comments. For a direct look at the agreement, here is a link to the “Commitment” document. For those interested in the broader legislative landscape surrounding AI in the U.S., see my prior article, “Seeds of U.S. Regulation of AI: the Proposed SAFE Innovation Act” (June 7, 2023), which provides a comprehensive overview of proposed legislation, again with analysis and comments. Also see the Algorithmic Accountability Act of 2022 (requiring self-assessments of AI tools’ risks, intended benefits, privacy practices, and biases) and the American Data Privacy and Protection Act (ADPPA) (requiring impact assessments for “large data holders” when using algorithms in a manner that poses a “consequential risk of harm,” a category that certainly includes some types of “high-risk” uses of AI).
The document formalizes a voluntary commitment, which is, in effect, a non-binding agreement, an agreement to try to reach an agreement. The parties’ statement begins by acknowledging the potential and risks of artificial intelligence (AI). It then affirms that companies developing AI should ensure the safety, security, and trustworthiness of their technologies. These are the three major themes for regulation that the White House and the tech companies could agree upon. The document then outlines eight particular commitments to implement these three fundamental principles.
Read the entire document by hovering over the PDF and scrolling to advance the pages, or download the PDF at the bottom of this post:
Ralph Losey Copyright 2023 – ALL RIGHTS RESERVED — (Published on edrm.net and jdsupra.com with permission.)
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.