[Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work.]
Ethan Mollick, a Professor at Wharton who teaches entrepreneurship and innovation, has written an insightful book on generative AI, Co-Intelligence: Living and Working with AI (Portfolio, April 2, 2024). This article aims to entice you to read it by employing unpredictable language and vivid illustrations. The piece takes several zigzagging, idiosyncratic detours, including a discussion of a fake chess-playing computer that fooled both Benjamin Franklin and Napoleon, alongside quotes from Edgar Allan Poe on AI and Charles Babbage. Professor Mollick is a witty, hands-on AI enthusiast whose new book deserves not just a human-in-the-loop touch but a full embrace. While most of his insights may stray from business topics, they are vital for successfully navigating the fast-changing world of AI. His work brought me genuine delight—a feeling that, for now, remains uniquely human. I hope to evoke the same in you.
Ethan Mollick: The Man Behind the Ideas
Ethan Mollick is not your average Associate Professor at The Wharton School. He is an expert in entrepreneurship and innovation who teaches elite Wharton management students to become more effective leaders and innovators. But he is much more than his LinkedIn profile, where, by the way, I read his posts every day. He is a pioneer in using games and simulations to teach. That turns out to be a good background for picking up AI quickly, which is what he has done. Recall that I mentioned Ethan Mollick previously in my article on AI and education, Back To School: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing)’. Both he and Sal Khan were selected by OpenAI to be early users of ChatGPT to see what it could do for education. A good choice, as Professor Mollick’s book, Co-Intelligence: Living and Working with AI, shows. So too does his stream of good articles on his Substack, One Useful Thing. If you are lucky, maybe you can even get into one of his classes at Wharton. Hopefully they will go online someday.
My research on Ethan Mollick uncovered that the game-playing professor has recently made time to create a custom GPT called Complexifier, which makes the easy hard. Yup, it does the opposite of what most generative AI apps are designed to do. I asked this strange little alien GPT why anyone would want to make something easy hard. Here is the first paragraph of Complexifier’s response:
Ah, why indeed? The thrill lies in the dance of complexity, the joy in the jigsaw puzzle of a thousand interlocking pieces. Making something simple is like solving a crossword with only one word—it’s over before it begins. But to make the straightforward serpentine? That’s where the magic is. It’s about savoring each twist and turn, where every step forward is a question mark and every answer opens ten more questions.
The answer goes on and on and creates an elaborate and convoluted flowchart for performance of a simple task. Like Ethan Mollick, I spend much of my thinking, teaching, and writing to try to make the hard, easy, to somehow explain the insanely convoluted. But the opposite is also attractive, in an evil attorney academic kind of way. The Complexifier GPT hooked me into a kind of Naomi Klein trip into a Doppelganger mirror world to see the dark Rube Goldberg twin. So, I continued on.
Ralph: Why did Ethan Mollick bother to create this custom GPT?
Complexifier: Ah, the mystery behind the creation of this delightful complexity machine! Why, indeed, would Ethan Mollick, a professor known for his innovative thinking and exploration of how technology can transform learning and work, create such a custom GPT? Allow me to speculate, in an appropriately convoluted manner:
- To Embrace the Absurd: Life can be far too serious. By creating a GPT that revels in unnecessary complexity, he might be encouraging us to appreciate the absurdity of over-complicating the mundane.
- To Challenge the Intellect: Complex tasks require deeper thinking, and perhaps this GPT was designed to push the boundaries of problem-solving by making us work through intricacies we’d typically avoid.
- To Encourage Creativity: The labyrinth of complexity is fertile ground for creative solutions. This custom GPT may serve as a playground where unconventional ideas are not just allowed but celebrated.
- To Highlight the Journey Over the Destination: In academia and beyond, the journey often teaches more than the destination itself. By complicating the simple, we’re forced to engage more fully with the process, learning along the way.
- To Add Humor to Learning: Complexity, when presented with a wink and a nod, can be incredibly entertaining. This GPT likely adds a layer of humor to the learning process, making the pursuit of knowledge more enjoyable.
- To Explore AI’s Potential: Creating a GPT that thrives on complexity could be an experiment in how far we can push AI’s capabilities, demonstrating that even the most trivial tasks can be transformed into elaborate intellectual exercises.
In the end, why settle for simple when the convoluted is so much more fun?
Ralph: Which one of these six reasons do you think is the most likely? And why pray tell, why?
Complexifier: Ah, a question within the question! If I had to place my bet on the most likely reason, I’d lean towards “To Add Humor to Learning.”
Why, you ask? Let me weave a rationale for you:
Ethan Mollick is known for his innovative approach to teaching and his playful engagement with the serious world of academia. By introducing humor into learning, he’s likely aiming to make complex concepts more approachable and enjoyable. Humor has a unique way of disarming us, lowering our defenses against the overwhelming nature of complexity. When something is funny, we’re more willing to engage with it, even if it’s ridiculously convoluted.
Creating a GPT that turns even the simplest task into a Rube Goldberg machine of logic is a perfect way to inject that humor into the learning process. It’s not just about making us laugh; it’s about making us see that the path to understanding—even if unnecessarily complicated—can be enjoyable. Plus, the absurdity of it all serves as a gentle reminder not to take ourselves too seriously, even when grappling with complex ideas.
So, my best guess? He did it for the laughs, knowing that in those laughs, there’s a lesson in embracing the joy of the journey.
So, Ethan Mollick writes a lot, is hands-on, and is funny. Now you know why I had to write about his pioneering work with AI and education and his new book, Co-Intelligence: Living and Working with AI.
Part I of Professor Mollick’s Book: Setting the Stage for Co-Intelligence
Co-Intelligence has a great opening line, which I fully endorse: “I believe the cost of getting to know AI–really getting to know AI–is at least three sleepless nights.” Then you will discover that ChatGPT, and other top generative models, “don’t act like you expect a computer to act. Instead, they act more like a person.” They act like something new, an alien person of unknown abilities. Professor Mollick’s excitement in using the new tool right away in his classes at Wharton is contagious. This new type of general-purpose technology, like the steam engine and the internet, changes everything, including teaching.
After that introduction, he looks back into the history of AI. He notes how we have long been fascinated with “machines that can think,” or at least pretend they can. One example Ethan Mollick gives is the Mechanical Turk, a chess-playing automaton built in 1770 that could beat almost all human chess players. Actually, in a very well-kept secret that fooled the likes of Napoleon Bonaparte and Benjamin Franklin, the thinking machine was a hoax. A small human chess master was cleverly hidden behind gears in the contraption. See this YouTube video for its full history.
When Edgar Allan Poe saw the Mechanical Turk in 1835, he speculated that it was a fake, but only because the Turk would sometimes lose. Poe thought that if it were a true thinking machine, it would always win. Although it is not in Professor Mollick’s book, I dug deeper into his reference to Poe and AI and found the original text: Edgar Allan Poe, Maelzel’s Chess-Player (1836). There we read Poe’s thoughts on Charles Babbage, mechanical thinking, and his impressive insights into what would later be called AI.
Edgar Allan Poe’s words:
There is then no analogy whatever between the operations of the Chess-Player, and those of the calculating machine of Mr. Babbage, and if we choose to call the former a pure machine we must be prepared to admit that it is, beyond all comparison, the most wonderful of the inventions of mankind. . .
It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else. Indeed this matter is susceptible of a mathematical demonstration, a priori. The only question then is of the manner in which human agency is brought to bear. . .
The Automaton does not invariably win the game. Were the machine a pure machine this would not be the case — it would always win. The principle being discovered by which a machine can be made to play a game of chess, an extension of the same principle would enable it to win a game — a farther extension would enable it to win all games — that is, to beat any possible game of an antagonist.
Edgar Allan Poe, Maelzel’s Chess-Player (1836).
Garry Kasparov found out in 1997 that the great Mr. Poe was right. IBM’s Deep Blue, a true chess-playing machine with no hidden human inside, defeated the reigning world champion. Interestingly, many speculate that Edgar Allan Poe’s encounter with the Mechanical Turk led to his writing the first detective story soon thereafter. Kat Eschner, Debunking the Mechanical Turk Helped Set Edgar Allan Poe on the Path to Mystery Writing (Smithsonian Magazine, 2017).
Professor Mollick makes clear that the AI today, unlike the Mechanical Turk, is very real and, in some ways, very powerful, but characterizes it as a type of “alien intelligence.” It is fundamentally different from human intelligence, yet capable of performing human tasks. This alien intelligence is something you need to discover for yourself to appreciate its abilities and flaws. The only way to do that is to use generative AI. Ethan lays out four principles of co-intelligence to guide your use:
- Always invite AI to the table. Try and use AI whenever and wherever you can.
- Be the human in the loop. Actively supervise and verify.
- Treat AI like a person (but tell it what kind of person it is). Give the AI context and use its sub-persona abilities (a minimal prompt sketch follows this list).
- Assume this is the worst AI you will ever use. Do not get discouraged when AI stumbles; it is getting better very fast.
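To make the third principle concrete, here is a minimal sketch, not taken from Mollick’s book, of what telling the AI “what kind of person it is” can look like in code. It assumes the OpenAI Python client (openai version 1 or later), an API key in the OPENAI_API_KEY environment variable, and a model name and persona of my own invention, chosen purely for illustration.

```python
# A minimal sketch (not from Mollick's book) of the "treat AI like a person"
# principle: give the model a persona and context via a system message.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the
# OPENAI_API_KEY environment variable; the persona text and model name
# below are illustrative choices, not Mollick's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are a skeptical senior litigation partner. "
    "Challenge the user's draft arguments, flag unsupported claims, "
    "and always ask for the verifying source."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": persona},  # tell it what kind of person it is
        {"role": "user", "content": "Critique this argument: ..."},
    ],
)

# Principle two still applies: a human reviews the output before relying on it.
print(response.choices[0].message.content)
```

The point is not the particular library call; it is that giving the model context and a role changes what the alien intelligence gives back, while the second principle, the human in the loop, still decides what to trust.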
The first half of the book spells out these four principles, which are all pretty basic. Ethan does a good job of laying this out and I recommend you read the book, Co-Intelligence: Living and Working with AI.
After you begin to use AI and get past the three sleepless nights, you will discover what Ethan Mollick calls the “Jagged Frontier.” This is his metaphor for the uneven capabilities of AI, where some tasks are easily within reach, while others, some of them quite simple, are beyond its grasp. See: From Centaurs To Cyborgs: Our evolving relationship with generative AI (4/24/24). Ethan Mollick discusses this at length in his article, Centaurs and Cyborgs on the Jagged Frontier. The second-to-last paragraph of that article states:
People really can go on autopilot when using AI, falling asleep at the wheel and failing to notice AI mistakes. And, like other research, we also found that AI outputs, while of higher quality than that of humans, were also a bit homogenous and same-y in aggregate. Which is why Cyborgs and Centaurs are important – they allow humans to work with AI to produce more varied, more correct, and better results than either humans or AI can do alone. And becoming one is not hard. Just use AI enough for work tasks and you will start to see the shape of the jagged frontier and start to understand where AI is scarily good… and where it falls short.
The metaphors of the jagged frontier, centaurs, and cyborgs help us understand the unpredictable nature of AI’s abilities. Also see Ethan Mollick and several coauthors on the impact of hybrid human-AI work: Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (Harvard Business School, Working Paper 24-013); Losey, Report on the First Scientific Experiment to Test the Impact of Generative AI on Complex, Knowledge-Intensive Work (e-Discovery Team, 4/29/24).
Because of the surprising failures on the wrong side of the jagged line, Ethan warns readers: “Don’t rely on AI alone.” He encourages use of AI as a complement to human judgment, not a replacement for it. Legal professionals know this rule well, “trust but verify.” Another AI Hallucination Case with Sanctions Threatened Because of ‘All-Too-Human’ Mistakes (July 30, 2024); Some Legal Ethics Quandaries on Use of AI, the Duty of Competence, and AI Practice as a Legal Specialty (May 6, 2024); “My AI Did It!” Is No Excuse for Unethical or Unprofessional Conduct. (Losey.ai CLE).
Part II of Professor Mollick’s Book: AI in Action
The second half of Co-Intelligence is divided into six different characteristics of generative AI and how to use them. Each is a different chapter in the book.
- AI as a Person. A “thinking companion” that can assist in decision-making by providing alternative perspectives. Includes discussion of the “uncanny valley,” the need for ethical monitoring of its use, and how AI lacks the depth and intuition that come from human experience.
- AI as a Creative. AI will not replace human creators, but it will totally change the way we approach creative work. It will be more than a tool; it will be a co-creator. Mollick reassures readers that while AI can assist in the creative process, it is ultimately up to humans to imbue that work with significance and purpose.
- AI as a Coworker. This important chapter is a must-read for technology leaders who are grappling with the integration of AI into their teams. Mollick argues that AI can handle many of the repetitive tasks that bog down human workers, freeing them up to focus on more strategic and creative endeavors. He provides examples of companies that have successfully integrated AI into their workflows, resulting in significant productivity gains. Mollick also discusses using AI as a “Challenger,” which I like to call a “Devil’s Advocate.” AI can challenge human decisions, offering alternative perspectives that may not have been considered. Professor Mollick also warns of the dangers of AI perpetuating biases in organizations. To counter this, he recommends transparency and accountability in AI deployment, and regular audits.
- AI as a Tutor. Professor Mollick, much like Sal Khan, really gets AI’s potential in the classroom; he is already using it to transform his Wharton courses, and soon it will reach all graduate-level instruction. See e.g., BACK TO SCHOOL: A Review of Salman Khan’s New Book, ‘Brave New Words: How AI will revolutionize education (and why that’s a good thing)’; and the video interview of Mollick by Khan. Mollick and Khan are both blown away by the potential of AI to provide personalized learning experiences–tutoring–that adapt to the needs of individual students. Mollick goes deep in explaining the many ways this will change traditional instruction and describes the successful experiments in his Wharton classrooms. Again, it will not replace teachers, and it will make in-person classrooms more important than ever.
- AI as a Coach. A personal-trainer role in which AI provides continuous, tailored guidance and feedback to enhance human capabilities. The value of personalized advice is explored, although I wish he had gone into the dangers of sycophantism more than he did. See e.g. Worrying About Sycophantism. Mollick does point to the danger of becoming overly dependent on AI to the point where it diminishes our critical thinking and decision-making skills.
- AI as Our Future. Here Mollick gives four scenarios that explore how AI might shape the world in the coming decades. In the first, “As Good As It Gets,” AI is already at its peak, which he and I deem very unlikely. In the second, “Slow Growth,” AI improves only slowly going forward, which we also think is unlikely. The third scenario imagines continued “Exponential Growth.” Many specific predictions are made, including that “Loneliness becomes less of an issue, but new forms of social isolation emerge.” That one is a safe bet, but there are many other predictions that are not so obvious. The last scenario Ethan calls “The Machine God,” where “machines reach AGI and some form of sentience.” Note I do not think sentience is a necessary byproduct of AGI, nor that the divine name is appropriate, but Ethan (and others) imagine it is. Losey, Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time (8/12/24).
Conclusion
Ethan Mollick’s book ends with an epilogue titled “AI As Us.” I like this because it follows the thinking of Ray Kurzweil, whose thoughts on AI I also respect. Ray Kurzweil’s New Book: The Singularity is Nearer (when we merge with AI) (July 17, 2024). Of course, it is not us yet, far from it. For now, AI is a new alien stranger that Professor Mollick would have you invite into your office and your home. He wants you to spend “three sleepless nights” with it and thereafter spend time with it every day. Ethan is just that kind of “all in” guy.
Most legal professionals, myself included, are not quite so gung-ho, especially when it comes to using these tools in our work. Still, many valuable insights can be gained from his book, Co-Intelligence: Living and Working with AI (Portfolio, April 2, 2024), and it does include many warnings of dangers. Consider this statement in the epilogue: “AI is a mirror, reflecting back at us our best and worst qualities. We are going to decide on its implications, and those choices will shape what AI actually does for, and to, humanity.”
In a world where human-like AI is no longer just a possibility but a reality, Co-Intelligence serves as an essential guide for everyone, including legal professionals. Ethan Mollick’s suggestions simplify the process of harnessing AI’s full potential in business and management, while also revealing its possible applications in the law. This book, written by a very creative, hands-on professor at Wharton, equips you for the many challenges and opportunities ahead.
Ralph Losey Copyright 2024 – All Rights Reserved
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.