Code of Ethics for “Empathetic” Generative AI

An attorney colleague, Jon Neiditz, has written a Code of Ethics for “Empathetic” Generative AI that deserves widespread attention. Jon published this proposed code as an article in his LinkedIn newsletter, Hybrid Intelligencer. Jon and I have followed parallel career paths, although I lean towards the litigation side and he towards management. Jon Neiditz co-leads the Cybersecurity, Privacy and Data Governance Practice at Kilpatrick Townsend in Atlanta.

Older, bearded man with big glasses and headphones, in front of stacks of library books
All images were created by Ralph Losey using Midjourney and Photoshop and are, in that sense, fake AI images.

This is my ChatGPT-4 assisted summary of Jon’s proposed Code of Ethics for “Empathetic” Generative AI. It pertains to a new type of AI now entering the market, where the GPTs are trained to interact with users in a much more personal and empathetic manner. I recommend your study of the entire article. The proposed regulatory principles also apply to non-empathetic models, such as ChatGPT-4.

What is Empathetic Generative AI?

Jon Neiditz has written a detailed set of ethical guidelines for the development and implementation of a new type of much more “empathetic” AI system that is just now entering the market. But what is it? And why is Jon urging everyone to make this new, emerging AI the center of regulatory attention? Jon explains:

“Empathetic” AI is where generative AI dives deep into personal information and becomes most effective at persuasion, posing enormous risks and opportunities. At the point of that spear, Inflection.ai is at an inflection point with its $1.3 billion in additional funding, so I spent time with its “Pi” this week. From everything we can see now, this is one of the “highest risk” areas of generative AI.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

Inflection AI, a company now positioned to be a strong competitor of OpenAI, calls its new generative AI product Pi, which stands for “personal intelligence.” Inflection describes its chatbot as a supportive and empathetic conversational AI. It is now freely available. I spent a little time using Pi today, but not much, primarily because its input size limit is only 1,000 characters and its initial functions are simplistic. Still, Jon Neiditz seems to think this empathetic approach to chatbots has a strong future, and Pi does remind me of the movie HER. Knowing human nature, he is probably right.
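That 1,000-character cap is more of an inconvenience than a barrier, since longer prompts can simply be split into pieces. Here is a minimal Python sketch of that idea; the 1,000-character figure and the word-boundary splitting strategy are my illustrative assumptions, not a documented Inflection API:

```python
# Minimal sketch: split a long prompt into chunks that fit a
# per-message character cap (e.g., Pi's reported 1,000-character limit).
# The cap and the word-boundary strategy are illustrative assumptions.

def chunk_text(text: str, limit: int = 1000) -> list[str]:
    """Split text into chunks of at most `limit` characters,
    breaking on word boundaries where possible."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = word[:limit]  # hard cut for any single overlong word
    if current:
        chunks.append(current)
    return chunks

if __name__ == "__main__":
    long_prompt = "empathy " * 400  # roughly 3,200 characters
    for i, part in enumerate(chunk_text(long_prompt), 1):
        print(f"chunk {i}: {len(part)} characters")
```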

Human long haired woman embracing robot

Jon explains the need for regulation of empathetic AI in his introduction:

Mirroring the depths and nuances of human empathy is likely to be the most effective way to help us become the hybrid intelligences many of us need to become, but its potential to undermine independent reflection and focused attention, polarize our societies and undermine our cultures is equally unprecedented, particularly in the service of political or other non-fiduciary actors.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

My wife is a licensed mental health counselor and I know that she, and her profession, will have many legitimate concerns regarding the dangers of improperly trained emotive AI. There are legal issues with licensing and issues in dealing with mental health crises. Strong emotions can be triggered by personal dialogues, the “talking cure.” Repressed memories may be released by deep personal chats. Mental illness and suicide risks must be considered. Psychiatrists and mental health counselors are trained to recognize when a patient might be a danger to themselves or others and to take appropriate action, including police intervention. Hundreds of crisis situations happen daily that require skilled human care. What will generative empathetic AI be trained to do? For instance, will it recognize and properly evaluate the severity of depression and know when referral to a mental health professional is required? Regulations are needed and they must be written with input from these medical professionals. The lives and mental health of millions are at stake.
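To make this concern concrete, here is a minimal Python sketch of the kind of crisis-screening guardrail such regulations might require. The phrase lists, risk tiers, and referral text are purely illustrative assumptions; a real system would need clinically validated models, built and overseen by the mental health professionals just mentioned:

```python
# Minimal sketch of a crisis-screening guardrail for an empathetic chatbot.
# The phrase lists and referral text below are illustrative assumptions only;
# a real system needs clinically validated models and professional oversight.

HIGH_RISK_PHRASES = ["want to die", "kill myself", "end my life", "no reason to live"]
MODERATE_RISK_PHRASES = ["hopeless", "can't go on", "worthless"]

REFERRAL_MESSAGE = (
    "I may not be able to give you the help you deserve. Please consider "
    "reaching out to a licensed mental health professional, or a crisis "
    "line such as 988 in the United States."
)

def screen_message(user_message: str) -> str | None:
    """Return a referral message if the text suggests a mental health
    crisis; otherwise return None so the normal chat pipeline proceeds."""
    text = user_message.lower()
    if any(p in text for p in HIGH_RISK_PHRASES):
        return REFERRAL_MESSAGE  # escalate immediately
    if any(p in text for p in MODERATE_RISK_PHRASES):
        return REFERRAL_MESSAGE  # a real system might probe further first
    return None

if __name__ == "__main__":
    print(screen_message("Everything feels hopeless lately"))
```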

Brown-haired, mustached man, looking like he is out of his mind, with hair akimbo, mouth open, crazy eyes and eyebrows, in front of a wall of fire

Summary of AI Code of Ethics Proposed by Jon Neiditz

Jon suggests nine main ethical principles to regulate empathetic AI. Each principle in his article is broken down into sub-points that provide additional detail. The goal of these principles is to guide empathetic AI systems, and the manufacturers, users, and government regulators behind them, to act in alignment with these principles. Here are the nine proposed principles:

1. Balanced Fiduciary Responsibility: This principle places the AI system as a fiduciary to the user, ensuring that its actions and recommendations prioritize the user’s interests, but are balanced by public and environmental considerations. The AI should avoid manipulation and exploitation, should transparently manage conflicts of interest, and should serve both individual and broader interests. There is a strong body of law on fiduciary responsibilities that should provide good guidance for AI regulation. See: John Nay, Large Language Models as Fiduciaries: A Case Study Toward Robustly Communicating With Artificial Intelligence Through Legal Standards (1/23/23). Ralph Losey comment: A fiduciary is required to exercise the highest duties of care, but language in the final code should make clear that the AI’s duty applies to both individuals and all of humanity. Balance is required in all of these principles, but especially in this all-important first principle. I know Jon agrees, as he states in subsection 1.1:

Empathetic AI systems are designed to serve individual users, responding to their needs, preferences, and emotions. They should prioritize user well-being, privacy, autonomy, and dignity in all their functions. However, AI systems are not isolated entities. They exist in a larger social and environmental context, which they must respect and take into consideration. Therefore, while the immediate concern of the AI should be the individual user, they must also consider and respect broader public and environmental interests. These might include issues such as public health, social cohesion, and environmental sustainability.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI

2. Transparency and Accountability: This states that AI systems should operate in an understandable and accountable way. They should clearly communicate their capabilities and limitations, undergo independent audits to check their compliance with ethical standards, and hold developers and operators responsible for their creations’ behaviors. In Jon’s words: “This includes being liable for any harm done due to failures or oversights in the system’s design, implementation or operation, and extends to harm caused by the system’s inability to balance the user’s needs with public and environmental interests.”
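As one illustration of what auditable accountability could look like in practice, here is a minimal Python sketch that writes one audit record per AI interaction. The field names and JSON-lines format are my assumptions, not anything Jon prescribes:

```python
# Minimal sketch of an audit trail for principle 2: each AI response is
# logged with enough context for an independent auditor to review.
# Field names and the JSON-lines format are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

def log_interaction(path: str, model: str, prompt: str, response: str) -> None:
    """Append one audit record per interaction. Prompt and response are
    stored as hashes, one possible balance between accountability
    (principle 2) and privacy (principle 3)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "response_length": len(response),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_interaction("audit.jsonl", "example-model", "How are you?", "I'm well, thanks.")
```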

Female android inside a big glass container

3. Privacy and Confidentiality: This principle emphasizes the need to respect and protect user privacy. Empathetic AI systems should minimize data collection, respect user boundaries, obtain informed consent for data collection and use, and ensure data security. This is especially important as empathetic AI chatbots like Pi become commonplace. Jon correctly notes:

As empathetic AI systems interact deeply with users, they must access and use a great deal of personal and potentially sensitive data. Indeed, large language models focusing on empathy represent a major shift for LLMs in this regard; previously it was possible for Sam Altman and this newsletter to tout the privacy advantages of LLMs over the prior ad-driven surveillance economy of the web. The personal information an empathetic AI will want about you goes far beyond information that helps to get you to click on ads. This third principle emphasizes the need for stringent measures to respect and protect that deeper personal information.

Jon Neiditz, Code of Ethics for “Empathetic” Generative AI
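Here is a minimal Python sketch of the data-minimization idea at the heart of this principle: strip obvious identifiers from a transcript before anything is retained. The regex patterns are illustrative assumptions that catch only the simplest cases; real redaction requires far more robust tooling:

```python
# Minimal sketch of data minimization (principle 3): redact obvious
# personal identifiers before a chat transcript is retained.
# The regex patterns are illustrative assumptions covering simple cases only.

import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Reach me at jane.doe@example.com or 407-555-0123."
    print(redact(sample))
    # -> Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```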

4. Non-Discrimination: This advocates for fair treatment of all users, regardless of their background. AI systems should treat all users equally, ensure inclusiveness in training data, monitor and mitigate biases continuously, and empower users to report perceived biases or discrimination. Ralph Losey comment: Obviously there is a need for some intelligent discrimination here among users, which is a challenging task. The voice of a Hitler-type should not be given equal weight and, if included in training data, should come with appropriate value judgments and warnings.

5. Autonomy: This principle emphasizes the need for AI systems to respect users’ freedom to make their own decisions. It encourages independent decision-making and discourages undue influence and over-reliance on the AI system. The AI should provide support, information, and recommendations, but ultimately, decisions lie with the user. Ralph Losey comment: The old saying “trust but verify” always applies in hybrid, human/machine relations, as does the parallel computer saying, “garbage in, garbage out.”

6. Beneficence and Non-Maleficence: This principle highlights the responsibility of AI systems to act beneficially towards users, society, and the environment, while avoiding causing harm. Beneficence involves promoting wellbeing and good, while non-maleficence involves avoiding harm, both directly and indirectly. Sometimes, there can be trade-offs between beneficence and non-maleficence, in which case, a balance that respects both principles should be sought.

7. Empathy with Compassion: As Jon explains: “This principle focuses on and extends beyond the AI’s understanding and mirroring of a user’s emotions, advocating for a broader concern for others and society as a whole in which empathy and compassion inform each other.” This principle promotes empathetic and compassionate responses from the AI, encourages understanding of the user’s emotions and a broader concern for others. The AI should continuously learn and improve its empathetic and compassionate responses, including ever better understanding of human emotions, empathetic accuracy, and adjusting its responses to better meet user needs and societal expectations.

Android woman touching foreheads with a baby, nose to nose, child's fingers about to touch android

8. Environmental Consideration: AI systems have a responsibility to operate in an environmentally sensitive manner and to promote sustainability. This includes minimizing their environmental footprint, promoting sustainable practices, educating users about environmental matters, and considering environmental impacts in their decision-making processes.

9. Regulation and Oversight: We need external supervision to ensure empathetic AI systems operate within ethical and legal boundaries. This requires a regulatory framework governing AI systems, with oversight bodies that enforce regulations, conduct audits, and provide guidance. Transparency in AI compliance and accountability for non-compliance is vital. So too is active user participation in the regulation and oversight processes, to promote an inclusive regulatory environment.

Young Charles Schumer at a cocktail table talking to a robot; the robot has a drink

Thoughts on Regulation

Regulation should include establishment of some sort of quasi-governmental authority to enforce compliance, conduct regular audits, and provide ongoing guidance to developers and operators. Transparency and accountability should serve as fundamental tenets, allowing for scrutiny of AI systems’ behaviors and holding individuals and organizations accountable for any violations.

In conjunction with institutional regulation, it is equally crucial to encourage active participation from users and affected communities. Their input and experiences are invaluable. By involving stakeholders in the regulatory and oversight processes, we can forge a collective responsibility in shaping the ethical trajectory of Empathetic AI.

Regulation should foster an environment that supports innovation and responsible, ethical practices. It should pave the way for a future where technology and empathy coexist harmoniously, yielding transformative benefits while safeguarding against emotional exploitation and other dangers. A regulatory framework, founded on the principles Jon has proposed, could provide the necessary checks and balances to protect user interests, mitigate risks, and uphold ethical standards.

Conclusion

I agree with Jon Neiditz and his call to action in Code of Ethics for “Empathetic” Generative AI. The potential of AI systems to comprehend and respond to human emotions requires a rigorous, comprehensive approach to regulation. We should start now to regulate Empathetic Generative AI.

Older android, or man in a space suit, in the library from the first picture

The movie HER, except for the ending ascension, which is absurd, provides an all too plausible scenario of what could happen when empathic chatbots are super-intelligent and used by millions. We could be in for a wild ride. Human isolation and alienation are already significant problems of our technology age. It could get much worse when we start to prefer the “perfect other” in AI form to our flawed friends and loved ones. Let’s try to promote real human communities instead of people talking to AI chatbots. AI can join the team as a super tool, but not as a real friend or spouse. See: What is the Difference Between Human Intelligence and Machine Intelligence? and Sam Altman’s Favorite Unasked Question: What Will We Do in the Future After AI?

In the What is the Difference blog I quoted portions of Sam Altman’s video interview at an Economic Times event in India to show his “tool not a creature” insight. There is another Q&A exchange in that same YouTube video, starting at 1:09:05, that elaborates on this in a way that directly addresses this AI intimacy concern.

Questioner (paraphrased): [I]t’s human to make mistakes. All people we love make mistakes. But an AI can become error free. It will then have much better conversations with you than the humans you love. So, the AI will eventually replace the imperfect ones you love; the AI will become the perfect lover.

Sam Altman: Do you want that? (laughter)

Questioner: Yeah.

Sam Altman: (Sam explains AI is a tool not a creature, as I have quoted before, then talks about AI creativity, which I will discuss in my next blog, then Sam turns to the intimacy, loved ones question.)

If some people want to chat with the perfect companionship bot, and clearly some do, a bot that never upsets you and never does the one thing that irks you, you can have that. I think it will be deeply unfulfilling (shakes head no). That’s sort of a hard thing to feel love for. I think there is something about watching someone screw up and grow, express their imperfections, that is a very deep part of love as I understand it. Humans care about other humans and care about what other humans do, in a very deep way. So that perfect chatbot lover doesn’t sound so compelling to me.

Sam Altman, June 7, 2023, at an Economic Times event in India
Woman holding the shoulders of an armored robot.

Once again, I agree with Sam. But many naive, lonely people will not. These people will be easy to exploit. They will find out the hard way that true love with a machine is not possible. They will fall for false promises of intimacy, even love. This is something regulators should address.

Again, a balanced approach is needed. AI can be a tool to help us develop and improve our empathy. If done right, empathetic GPT chats can help us improve our conversations and enhance our empathy with our fellow humans and other living creatures. Empathetic conversations with an AI could help prepare us for real conversations with our fellow humans, warts and all. It could help us avoid manipulation and the futile chase of marketing’s false promises.

Ralph Losey Copyright 2023 – ALL RIGHTS RESERVED — (Published on edrm.net and jdsupra.com with permission.)
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

Ralph Losey is a writer and practicing attorney specializing in providing services in Artificial Intelligence. Ralph also serves as a certified AAA Arbitrator. Finally, he is the CEO of Losey AI, LLC, providing non-legal services, primarily educational services pertaining to AI and the creation of custom GPTs. Ralph has long been a leader among the world’s tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world and written over two million words on AI, e-discovery, and tech-law subjects, including seven books. Ralph has been involved with computers, software, legal hacking, and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: E-Discovery and Information Management Law, Information Technology Law, Commercial Litigation, and Employment Law - Management. For his full resume and list of publications, see his e-Discovery Team blog. Ralph has been married to Molly Friedman Losey, a mental health counselor in Winter Park, since 1973 and is the proud father of two children.

