Evidence that AI Expert Panels Could Soon Replace Human Panelists, or is this just an Art Deco Hallucination? – Part One of Two

By Ralph Losey.
Image: Ralph Losey with his AI Visual Muse.

[Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work.]

Say goodbye to the traditional panel of experts, the ones routinely featured at educational events and conferences worldwide. Say hello to new panels of AI experts: panels in your pocket that you can call upon anytime, anywhere. New advances in ChatGPT make this possible. The days of human panels of experts on stage lecturing down to attendees may soon be over. Conferences may never be the same. Expert services may be transformed too. This article provides demonstrative evidence to support these predictions and does so using every AI panel’s current favorite topic, hallucinations. Yes, it’s a twofer, on both expert panels and hallucinations, and, this is part one of two. Illustrations are done in various Art Deco styles using Visual Muse: illustrating concepts with style.

Introduction

The concept illustrations in this article were created in various Art Deco styles using a well-worn custom GPT of mine, Visual Muse: illustrating concepts with style.

The support for the claims about AI possibly replacing human expert panels comes from another little AI program that I made in my spare time: Panel of AI Experts for Lawyers. This new custom GPT was designed to make ChatGPT-4 adopt multiple personas that serve as a panel of experts to help legal professionals solve problems. It can also lecture like a traditional expert panel. It will be available soon, free, on the OpenAI store, after I finish testing it.

Like many AI hobbyists, I enjoy making these GPT tools for fun. Similar free apps are found everywhere in the ChatGPT world. You may think free means not worth your time, not very good. Either that, or a trick to collect and sell your data, or a bait and switch. In the pre-AI world of extreme capitalism that was usually true. Not today. Not here. My custom GPTs have no data collection and no upsells for extra features. I do not see your data or your questions, and frankly, don’t want to. That, you would have to pay me to do! I sell my time, not my little gizmos.

To prove my claim that free little custom GPTs like this may actually upset an industry, I offer in this article demonstrative evidence of what the Panel of AI Experts for Lawyers can do. The proof is an unedited transcript of the software in action. The topic I selected for my panel is one troubling many in both tech industries and the legal profession lately, AI hallucinations.

After you study this, you may agree that the days of human panels of experts on stage are over. Instead, say hello to personal visits by even smarter AI experts: any topic, any time, anywhere.

Short History of Multiple Personality AI Experts

I will attempt to prove to your satisfaction that my custom GPT, and others that will soon follow, take expert panel discussions to the next level. I have been playing with this project since the days of GPT-3.5, trying to create ChatGPT personas that talk to each other. There were a dozen or so of us early GPT explorers on Discord and DEF CON forums who were enthused with this idea. It was started by others in forums there in early 2023 and was first called the AI Hive Mind. Here are my previous articles for background: Prompting a GPT-4 “Hive Mind” to Dialogue with Itself on the Future of Law, AI and Adjudications (e-Discovery Team, 4/11/23); ChatGPT-4 Prompted To Talk With Itself About “The Singularity” (e-Discovery Team, 4/04/23); The Proposal of Chat-GPT for an “AI Guardian” to Protect Privacy in Legal Cases (e-Discovery Team, 4/15/23). I also refer to this as making the AI adopt multiple personas that can work with each other for various purposes.

At the time, early 2023, I was, as far as I know, the only lawyer working with this technique, but I have no doubt other lawyers are doing so now too. The idea here is to create AI sub-personalities – personas – with different knowledge specialties and other differing characteristics (which is where the secret sauce comes in). Then you prompt the different AI entities to chat amongst themselves. They work through the issue, step by step, and consider various points of view. The GPT hive mind of experts discusses subjects and works out problems that you assign to them.
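For readers who want to experiment with this technique themselves, the persona loop can be sketched in a few lines of Python. This is only an illustration of the pattern, not my GPT’s actual instructions; the `ask_model` function is a stubbed-in placeholder for whatever chat-completion API you use, and the persona names are examples.

```python
# Sketch of the multi-persona "hive mind" pattern.
# ask_model() is a placeholder for a real chat-completion API call;
# it is stubbed out here so the structure is runnable on its own.

def ask_model(system_prompt: str, conversation: list[str]) -> str:
    """Stand-in for an LLM call: returns a canned remark for the persona."""
    persona = system_prompt.split(":", 1)[0]
    return f"{persona} responds after hearing {len(conversation)} prior remarks."

# Each persona gets its own system prompt defining its specialty.
PERSONAS = {
    "AI Specialist Attorney": "You are an attorney specializing in AI law.",
    "Prompt Engineer": "You are an expert in prompt engineering.",
    "Devil's Advocate": "You challenge the prevailing view on the panel.",
}

def run_round(topic: str) -> list[str]:
    """One discussion round: each persona sees the transcript so far."""
    transcript: list[str] = [f"Topic: {topic}"]
    for name, role in PERSONAS.items():
        remark = ask_model(f"{name}: {role}", transcript)
        transcript.append(remark)
    return transcript
```

The key design point is that each persona receives the full running transcript, so later speakers can respond to what earlier speakers said. That is what makes it a discussion rather than five isolated answers.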

Basically, the current GPT-4-level expert panel approach prompts ChatGPT to create an expert panel whose members talk to each other. Then the multiple personas, the hive mind if you will, come up with new ideas, insights and answers in multiple steps, all on their own. My GPT, Panel of AI Experts for Lawyers, for instance, is programmed to go through six steps, which I predetermined. (It started with eight steps, but QC testing suggested it should be reduced to six.) The user sits back and watches the panel. The AI is thinking out loud in a transparent process. You listen to the panel conclusions, and at the end, you can ask questions. Just like the old human panels, except that the AIs have unlimited time and patience.
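The predetermined multi-step flow can be pictured as a simple driver loop. The sketch below is just an illustration of the pattern, with step names paraphrased from the transcript that follows; it is not the actual code or instructions behind my GPT.

```python
# Hypothetical sketch of a predetermined multi-step panel pipeline.
# The step names are illustrative paraphrases, not the GPT's real
# internal instructions.

STEPS = [
    "offer experts and let the user pick four",
    "opening remarks from each panelist",
    "second-round discussion of each other's views",
    "converge on unified recommendations",
    "consolidated solution with examples",
    "devil's advocate closing summary",
]

def run_panel(topic: str, run_step=None) -> list[str]:
    """Walk the fixed steps in order, collecting each step's output.

    run_step is a stand-in for an LLM call that performs one step of
    the discussion; the default stub just labels the step.
    """
    if run_step is None:
        run_step = lambda step, t: f"Step '{step}' applied to topic: {t}"
    return [run_step(step, topic) for step in STEPS]
```

Fixing the steps in advance is what lets a novice user just watch: the pipeline supplies the process, and the user supplies only the topic, the choice of experts, and any closing questions.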

Evaluation of the Current Level of AI Expert Panels

This expert consult process now works better than it ever has, although in my book it still rates only a B+ or A, not an A+. It could be smarter and should be more consistent. Also, a few errors still slip in from time to time. The transparency of the step-by-step process of the GPT I designed helps mitigate those errors.

Based upon Sam Altman’s recent interviews, the next level, GPT-5, should make it work much better. At this point, a little more than a year after GPT-4 was released, Sam now responds to effusive praise of GPT-4 by pretending to think very hard and then sadly saying that GPT-4 “kinda sucks.” Sam Altman: GPT-4 sucks – Lex Fridman Podcast (April 2024). He is so funny. What a great actor.

Sam Altman is very careful in what he says about OpenAI’s next generation GPT, but he did dare to promise that overall GPT-5 will be much smarter. He claims that there will be big gains in overall intelligence, like the difference between 3.0 and 4.0. That was an incredible leap! So Sam is saying they can do it again. Still, Sam I Am does not promise major new breakthroughs, or AGI, or anything bogus like that. He admits that GPT-5 will still make mistakes, still have biases, and, worst of all for sanction-fearing attorneys, will still sometimes hallucinate. Sam does, however, promise that OpenAI teams of experts are working hard on that hallucination problem. Just do not expect to see a fix for that in version 5.0.

On the plus side, Sam slyly smiles when talking about GPT as a creative brainstorming partner, which he says needs a better name. (How about expert panel, Sam?) Altman admits that capacity, whatever you call it, is the GPT-4 application that he uses the most. Everyone in the ChatGPT active user world saw how the multi-personality brainstorming capacity demonstrated emerging new abilities and intelligence in the upgrade from 3.5 to 4.0. Sam’s comments suggest that there is a team of experts at OpenAI now expecting even more emerging abilities in the next upgrade. When asked when 5.0 will be released, Sam refers to the exponential curve of improvement cycles and says it will be here “relatively soon.” Of course, he is well advised not to say exactly when.

In another recent interview Sam Altman made an intriguing statement you should hear. LIVE: Sam Altman, CEO of OpenAI, speaks at the World Government Summit (Reuters, April 2024), video 17:20–18:24. I created this image of him for that purpose and put it into the Art Deco theme of this article. I have not altered Sam’s words or voice in any way, so the sync is not perfect. Click on the image or this link to see the YouTube version.

Transcript (slightly cleaned) of Sam Altman’s avatar speech.

If you think about everybody on Earth getting the resources of a company having hundreds of thousands of really competent people, and what that would do, if you have an AI programmer, AI lawyer, AI marketer, AI strategist, and not just one of those, but many of each, and you get to decide how to use that to create whatever you want to create. We are all going to get a lot of great stuff. The creative power of humanity with tools like that should be remarkable. That’s what gets us all up every morning.

LIVE: Sam Altman, CEO of OpenAI, speaks at the World Government Summit (Reuters, April 2024), video 17:20–18:24.

That would be a great closing line to any expert panel discussion.

It used to be pretty tricky to create and use hive mind type expert panels. But with recent advancements in custom GPTs and knowledge banks, many of us are now starting to make expert panels, and make them work on their own through many steps. With custom GPTs it is now easier to set this in motion and control. It is still better to do this all on your own without automated steps, but that requires a high user-skill level that very few have attained. That is where my forthcoming Panel of AI Experts for Lawyers GPT comes in. It automates most of the process so most anyone can use it.

The custom GPT seems to work best with five panelists, but a few more or less will still work too. Larger panels get cumbersome and ineffective, just like working with human experts. That is true whether you are using an automated process like Panel of AI Experts for Lawyers, or doing it on your own. So far I have seen some minor errors in my use of panels, but no big fails or hallucinations. I am still taking my GPT Panel of AI Experts through its paces with many tests of minor changes. Let me know if you want to volunteer to be part of the beta testing team. We are close to launch. Other cool things are coming soon too.

After these expert consult panels become widespread, and especially after the super IQ of GPT5 hits, I kind of doubt there will be many invitations to human experts to participate in big conference panels. The new AI method shows a better way. Either use the expert panels yourself and skip the conference, or attend for the social fun and networking, and put the AI software bots on stage. The wetware types like me will just add local color. (I personally think some of us humans are still funnier and more entertaining than bots, but for how long?)

All of the words in this article so far have been my own, but the following segment will be a verbatim transcript of my chat with Panel of AI Experts for Lawyers. As you will see, I do not have to say much. All I do is set the topic, pick four experts from those offered (the fifth expert is always the Devil’s Advocate) and say continue. Everything else is written by Panel of AI Experts for Lawyers. I also ask one question at the end, which is more for my readers’ enjoyment than anything. So read on to the end for the funny finish, which is my favorite part.

The graphic images of me and my ChatGPT icon are part of the transcript. But all of the other images were added by me to the transcript. They were made by another custom GPT of mine, Visual Muse: illustrating concepts with style, which is already available at OpenAI’s store, again for free. It is one of my most valuable tools and I use it every day. Someday I may offer some training on this tool to show how you can make it really sing.

All of the images for the article were made in new artistic styles that are Art Deco variations. Some are pure Art Deco, but most have elements of other styles added in, such as abstract, photorealistic, surrealistic or psychedelic.

I had to mix in the last style because the topic I asked the panel to discuss was hallucinations. I learned a lot about this serious AI issue from this hive mind expert discussion and hope you do too.


Transcript of Ralph Losey’s Session of May 3, 2024, using his custom GPT, Panel of AI Experts for Lawyers.

You

I am ready to tell you what I want discussed. Are you ready?

Panel of AI Experts for Lawyers

Absolutely, I’m ready! What would you like to discuss or seek advice on?

You

AI Hallucinations: What are they? How different from errors? What causes them? What can individual users do to control the problem?

Panel of AI Experts for Lawyers

For the topic of AI hallucinations, their causes, differences from other types of errors, and what users can do to manage them, I recommend forming an expert panel to delve into these issues thoroughly. Here are ten experts who can provide a comprehensive understanding of AI-related legal issues and practical solutions:

  1. AI Specialist Attorney – Specializes in AI and its implications in the legal field.
  2. Prompt Engineer – Expert in optimizing interactions with AI systems through effective prompting.
  3. Child Prodigy – Offers a unique, intuitive understanding of AI technology and its future implications.
  4. Lawyer Scientist – Combines expertise in law and AI technology, focusing on the integration of AI in legal practices.
  5. Devil’s Advocate (Mandatory) – Challenges prevailing opinions and encourages exploration of alternative viewpoints.
  6. Ethics and Compliance Expert – Specializes in AI ethics, ensuring AI applications adhere to legal and moral standards.
  7. Data Security Specialist – Focuses on protecting data within AI systems, critical for managing and mitigating AI errors.
  8. Machine Learning Developer – Provides insights on the technical aspects of AI training and operation, directly related to AI hallucinations.
  9. Judicial Technology Advisor – Specializes in how AI is used within judicial systems, including its reliability and potential for error.
  10. Consumer Rights Advocate – Focuses on how AI impacts consumers, including issues of transparency and accountability in AI interactions.

Please select four of these experts to include alongside the Devil’s Advocate for a comprehensive panel discussion on AI hallucinations.

You

experts 1, 2, 3, 8

Panel of AI Experts for Lawyers

Great choices! Your panel for discussing AI hallucinations consists of:

  1. AI Specialist Attorney
  2. Prompt Engineer
  3. Child Prodigy
  4. Machine Learning Developer
  5. Devil’s Advocate

Let’s begin the discussion with the first round of panelist introductions and opening remarks on AI hallucinations, their causes, differences from other errors, and user control measures.

1. AI Specialist Attorney

“Good day, esteemed colleagues and guests. AI hallucinations are distinct from typical errors because they manifest as plausible but entirely fabricated responses by AI systems, which may be indistinguishable from accurate information. Unlike simple errors, hallucinations often involve AI generating false data or narratives based on patterns it has learned, rather than mere calculation mistakes. These are particularly troubling in legal contexts, where the authenticity of information is paramount. We must explore regulatory frameworks that ensure AI reliability and transparency.”

2. Prompt Engineer

“Hello, everyone. From a technical perspective, AI hallucinations often occur due to issues in the training data or the model’s inability to distinguish between correlation and causation. Effective prompt engineering can mitigate such issues by guiding the AI to focus on verified information and disregard unreliable patterns. Users can control hallucinations by refining their prompts and incorporating more context, which helps the AI generate more accurate and grounded responses.”

3. Child Prodigy

“Hi, I think of AI like a very smart parrot—it can seem like it knows a lot, but sometimes it just repeats things without understanding. Hallucinations happen because the AI is trying to make sense of what it knows, but it gets mixed up. It’s like when you hear stories and remember them a bit wrong. To help AIs get better, we need to teach them like kids, giving them the right information and correcting them when they make mistakes.”

4. Machine Learning Developer

“Greetings. AI hallucinations are often a product of the model’s limitations in processing and contextualizing large datasets. During the development phase, ensuring diverse and comprehensive datasets is crucial to minimize these errors. We also need to develop better mechanisms to detect when the AI is likely to ‘hallucinate’ and implement checks that can prompt users to verify potentially dubious information.”

5. Devil’s Advocate

“Let’s stir the pot a bit, shall we? While controlling AI hallucinations is ideal, one might argue that the imperfections of AI reflect the imperfections of human understanding itself. Instead of overly sanitizing responses, perhaps there is value in learning to work with and around these ‘hallucinations’ to enhance our critical thinking skills. Are we too reliant on AI’s accuracy?”


The panelists have introduced their views. We will now move to the second round of discussions where they will expand on their ideas and consider others’ viewpoints.

You

please continue

Panel of AI Experts for Lawyers

Continuing with the second round of panelist discussion on AI hallucinations:

1. AI Specialist Attorney

“The Devil’s Advocate raises an intriguing point about the imperfection of AI mirroring human fallibility. However, in legal practices, the stakes are high—hallucinations can lead to misjudgments and injustice. We must balance our desire for AI assistance with stringent checks that ensure outputs are reliable and verifiable, particularly in sensitive areas such as evidence assessment and legal advice.”

2. Prompt Engineer

“Building on the points made earlier, it’s important to highlight that through careful prompt design, we can steer AI away from potential hallucinations. By specifying sources or limiting the AI’s responses to data-driven outputs, users can significantly reduce the incidence of incorrect information. Tools and techniques I’ve developed can aid legal professionals in crafting prompts that minimize the risk of misleading AI outputs.”

3. Child Prodigy

“Sometimes, the best way to learn is to make mistakes and then fix them. If we know AI can make these weird mistakes—hallucinations—we should use them to teach AI better. Like, if it says something wrong, we could correct it right away and make sure it remembers the right thing next time. This could be a new kind of game where AI learns to be right more often by understanding its mistakes.”

4. Machine Learning Developer

“Enhancing AI models to reduce hallucinations involves more than just expanding datasets; it involves improving how these models process context and ambiguity. Advanced techniques such as few-shot learning and reinforcement learning from human feedback can help AI learn the difference between plausible and implausible constructs, refining its judgment over time.”

5. Devil’s Advocate

“While we focus on mitigating hallucinations, we should also ponder the role of human oversight. Isn’t there a risk of becoming overly dependent on technology, expecting it to be infallible? Perhaps the real solution lies not only in improving AI but also in maintaining a healthy skepticism towards its outputs, using it as a tool rather than a crutch.”


With these expanded discussions, our panelists have begun to explore both technical solutions and philosophical considerations surrounding AI hallucinations. Next, we’ll move to the third round, where they will try to converge on unified recommendations that combine their insights into practical strategies for users and developers alike.

You

ok

Panel of AI Experts for Lawyers

In the third round of our panel discussion, our experts will focus on finding common ground and formulating unified recommendations for managing AI hallucinations, reflecting on each other’s insights to offer a holistic approach.

1. AI Specialist Attorney

“Considering the valuable perspectives shared, I believe our unified recommendation should include the development of standardized testing protocols for AI systems used in legal contexts. These protocols would assess the tendency of AI to hallucinate and ensure its outputs are verified against established legal databases before being employed in any legal decision-making.”

2. Prompt Engineer

“Aligning with the AI Specialist Attorney’s suggestion, I recommend incorporating structured prompt engineering as part of the standard practice for legal AI use. By designing prompts that explicitly require AI to reference and cite sources, we can create a layer of accountability and reduce the occurrence of unverified information.”

3. Child Prodigy

“I think everyone’s saying we need to make sure the AI knows what it’s talking about and that we check it’s right. We should have a way to keep teaching AI with good examples and fixing it when it messes up. Maybe everyone could help make a big book of right answers that AI can learn from.”

4. Machine Learning Developer

“I support the suggestions on standardized testing and structured prompts. Additionally, implementing adaptive learning systems within AI can help. These systems would utilize ongoing feedback from legal professionals to continuously improve accuracy and reduce hallucinations, essentially learning from real-world application to better serve the legal industry.”

5. Devil’s Advocate

“While I champion the call for rigorous standards and continuous learning, let us also advocate for cultivating an informed user base. Legal professionals should be educated not only on using AI but understanding its limitations. This dual approach of improving technology and user literacy will safeguard against the over-reliance on AI.”


These recommendations represent a blend of technical enhancements, regulatory frameworks, and educational initiatives that address both the prevention and management of AI hallucinations. The panel will now consolidate these ideas into a comprehensive solution to your query. Let’s proceed to the solution development.

You

proceed

Panel of AI Experts for Lawyers

Our expert panel has now reached a consensus on a comprehensive solution to manage and mitigate AI hallucinations. Here’s the collaborative response that incorporates insights from each panelist:

Solution for Managing AI Hallucinations:

  1. Standardized Testing Protocols: Develop and implement standardized testing protocols specifically for AI systems employed in the legal sector. These protocols would test for the AI’s accuracy and susceptibility to hallucinations, particularly in how it handles legal data and precedent. The testing should involve scenarios that could lead to AI hallucinations to ensure the system’s reliability.
  2. Structured Prompt Engineering: Legal professionals should be trained in prompt engineering to improve their interaction with AI. By constructing prompts that require the AI to cite sources or restrict its responses to data within a verified database, the risk of hallucinations can be minimized. This structured approach ensures that AI-generated content is both relevant and verifiable.
  3. Continuous Learning System: Integrate adaptive learning mechanisms that allow AI to update and adjust its models based on feedback from real-world applications. This system would involve collecting corrections and inputs from legal experts to refine the AI’s outputs, ensuring that the system evolves and reduces hallucinations over time.
  4. Educational Initiatives for Legal Professionals: Create and promote comprehensive training programs for legal professionals that focus on understanding and effectively using AI. These programs should cover not only the operational aspects but also the limitations of AI, emphasizing the importance of verification and critical evaluation of AI-generated information.
  5. Regulatory Framework: Advocate for and help establish a regulatory framework that governs the use of AI in the legal field. This framework should mandate transparency from AI developers about the capabilities and limitations of their systems and enforce accountability measures for failures that lead to legal inaccuracies.

Example Implementation:

  • AI Testing: A legal AI system could be subjected to a mock trial scenario where it needs to process and interpret several complex legal documents and produce a memo. The output would be evaluated for accuracy, hallucinations, and the ability to cite correct legal precedents.
  • Prompt Training: In a workshop, legal professionals might practice refining their queries to the AI, learning how to frame questions that guide the AI towards reliable data sources and away from speculative responses.
  • Feedback Mechanism: As legal professionals use AI, they could have an option to flag incorrect or hallucinated responses. These instances would be logged and used to fine-tune the AI models periodically.
  • Educational Course: A certification course for legal professionals could be developed to cover AI capabilities, with modules on identifying and correcting hallucinations and best practices for integrating AI tools into their work.
  • Regulatory Participation: Legal experts could participate in panels that collaborate with AI developers and government bodies to draft regulations that ensure AI tools in the legal sector meet stringent accuracy and reliability standards.

By implementing these solutions, the legal profession can harness the benefits of AI while minimizing the risks associated with AI hallucinations. Now, let’s hear a final explanation and summary from our Devil’s Advocate.

You

ok

Panel of AI Experts for Lawyers

As our Devil’s Advocate, I’d like to offer a final word on this intriguing discourse regarding AI hallucinations. While our distinguished panelists have proposed robust measures, it’s crucial to remain vigilant. The proposed standardized tests, structured prompts, continuous learning systems, educational initiatives, and regulatory frameworks are indeed potent strategies. However, they embody an optimistic view that technology and rigorous oversight can completely tame the beast of AI hallucinations. Let us ponder a different perspective.

First, consider the inherent complexity of AI and the fluid nature of legal interpretations. AI, by design, processes vast amounts of data and identifies patterns far beyond human capability. However, this strength is also its Achilles’ heel. The very patterns it perceives may not exist or may be based on biased or incomplete data, leading to hallucinations. Thus, even with the best testing protocols and the most sophisticated prompt engineering, AI might still falter.

Secondly, the continuous adaptation through feedback mechanisms, while impressive, assumes a consistent and error-free input from human users. Yet, human judgment is subjective and variable. What one legal expert deems an error, another might see as a nuanced interpretation. This variability could inadvertently train AI systems in ways that either perpetuate existing biases or create new, unforeseen modes of error.

Furthermore, the emphasis on educating legal professionals about AI’s use and limitations is commendable. However, it might lead to an overreliance on the notion that with enough knowledge, all AI outputs can be effectively managed. This could foster a false sense of security, detracting from the critical need for skepticism and independent verification.

Finally, the push for regulatory frameworks is essential but fraught with its own challenges. Regulation struggles to keep pace with technological advancement. By the time a framework is implemented, the technology it governs may have evolved beyond its original scope. This lag can lead to gaps in oversight and unexpected legal quandaries.

Thus, while we strive to control and utilize AI to its fullest potential, we must also brace for its inherent unpredictability. The path forward should not only involve refining AI but also preparing for the inevitability of its failures. Embrace AI as a tool, not as a panacea, and maintain a judicious balance between reliance and skepticism. This balanced approach will better serve the legal profession and society at large.

Now, I invite any further questions or requests for clarification from the panel or our esteemed audience.

You

Could the AI Specialist Attorney give his own summary of the panel’s work, insights and recommendation. Also, since he is known to also be an expert in Dude speak, could he please use language and analogies from The Dude to make this summary? That would be really far out, man.

Panel of AI Experts for Lawyers

Alright, man, here’s the deal from the AI Specialist Attorney, cruising through this discussion like The Dude saunters through the lanes. So, we’ve been talking about these AI hallucinations, you know, where the AI starts dreaming up stuff that isn’t really there—like seeing a police car when you’re just staring at a parked Volkswagen. It’s all in the AI’s head, man.

First off, these hallucinations are not just your run-of-the-mill mistakes; they’re like those stories Walter tells at the bowling alley—kind of based on reality, but way off into fantasy land. The reason this happens, it’s because the AI is trying to make sense of all the crazy data it’s seen, kind of like trying to find your rug that really ties the room together but ending up with some mismatched throw pillows instead.

Now, here’s what we’re gonna do to keep our AI from going all “nihilist” on us and making stuff up:

  1. Standardized Testing Protocols – That’s like making sure your bowling balls aren’t rigged, man. We need tests to check if the AI is tripping or sticking to the facts.
  2. Structured Prompt Engineering – This is like talking to Donny in a way he understands. We guide the AI with the right questions so it doesn’t go off the rails.
  3. Continuous Learning System – It’s like the AI is practicing its bowling. It keeps learning from its mistakes, getting feedback from players who know their game, so next time it rolls, it rolls more accurately.
  4. Educational Initiatives for Legal Pros – We gotta teach the lawyers to bowl with AI. They need to know how to throw the ball so it doesn’t end up in the gutter. They gotta understand when the AI is pulling a Walter and when it’s being The Dude.
  5. Regulatory Framework – Finally, we’re talking about setting some rules in the alley, man. We need some ground rules so that everyone plays the game right, and no one ends up stepping over the line.

So, that’s the gist, my friend. We keep our AIs in check with some cool-headed strategies and make sure they’re more Dude than Walter. And always remember, this is not ‘Nam. This is AI. There are rules.

You

I love it. Thanks a lot!

Panel of AI Experts for Lawyers

You’re welcome, man! Glad you dug it. If you ever need to chat more about AI or anything else, just drop on by. Take it easy!

Part Two of Two of this article coming soon. This same AI panel experiment will be repeated nine more times with different panels. Then the results of the ten panels will be compared and analyzed. New insights will be shared from this ten-panel effort. We now have a good fix on AI Hallucinations and what we can do to avoid them.

Published on edrm.net with permission. Assisted by GAI and LLM Technologies for images per EDRM GAI and LLM Policy.

Ralph Losey Copyright 2024 — All Rights Reserved. See applicable Disclaimer to the course and all other contents of this blog and related websites. Watch the full avatar disclaimer and privacy warning here.

Author

  • Ralph Losey

    Ralph Losey is an arbitrator, special master, Generative AI experimenter, GPT maker, writer and attorney. He is a partner in LOSEY PLLC, a high tech law firm with three Loseys and clients across the country. The Losey firm handles projects, deals, IP, and litigation all over the U.S. Get in touch at https://www.losey.law. All opinions expressed here are Ralph's own, and not those of his firm or clients. No legal advice is provided on this web and should not be construed as such. Ralph has long been a leader of the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world. Ralph has written over two million words on e-discovery and tech-law subjects, including seven books. Ralph has been involved with computers, software, legal hacking and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: Commercial Litigation; E-Discovery and Information Management Law; Information Technology Law; and, Employment Law - Management. Ralph is the proud father of two children, Eva Losey Grossman, and Adam Losey, a lawyer (he founded Losey, PLLC) with incredible cyber expertise (married to another cyber expert lawyer, Catherine Losey, co-founder of Losey PLLC), and best of all, Ralph is the husband since 1973 to Molly Friedman Losey, a mental health counselor in Winter Park.

    View all posts