[Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work.]
In 2024, the dawn of the AI revolution shifted from science fiction to daily reality. Now, as we step into 2025, Sam Altman—the architect behind OpenAI’s rapid rise—reflects on this transformation in his essay, Reflections. His message is more than a recounting of OpenAI’s milestones; it is a call to action for all of us, including lawyers, to confront the realities of a world reshaped by artificial intelligence. How should we, as users, engage with this transformation? The way we answer this question will determine whether this new era is bright and promising—or dark and ominous.
Sam Altman started the year 2025 by writing an essay, Reflections, on his blog. It begins by celebrating the second birthday of ChatGPT. The essay “share(s) some personal thoughts about how it has gone so far, and some of the things I’ve learned along the way.” He then begins his analysis the way any good thinker should, by acknowledging all that is still unknown and remains to be understood. Socrates would have approved.
As we get closer to AGI, it feels like an important time to look at the progress of our company. There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.
Sam Altman, “Reflections” (samaltman.com, 1/5/25).
Then Sam Altman discusses the history of OpenAI, which started almost nine years ago “on the belief that AGI was possible, and that it could be the most impactful technology in human history.” At the time, almost everyone thought they were foolish dreamers with no chance of success.
In 2022, OpenAI was still a quiet research lab that had made remarkable progress known to only a few. That all changed on November 30, 2022, when ChatGPT was publicly launched. As everyone by now knows, that launch kicked off a growth curve like nothing ever seen before. Sam then says that two years later: “We are finally seeing some of the massive upside we have always hoped for from AI, and we can see how much more will come soon.” The sun of AI intelligence is now rising fast.
Then Sam talks about how hard it has been: building new things is hard, and so is working with people, the world, and everything else. He spends a lot of time lamenting his personal challenges last year, including being fired by a renegade Board of Directors. One interesting point in his complaint session (and hey, we all have them) is his observation, which is a key point in my own writings:
There is no way to train people for this except by doing it, and when the technology category is completely new, there is no one at all who can tell you exactly how it should be done.
Sam Altman, “Reflections” (samaltman.com, 1/5/25).
Now for me that is a perfect learning challenge, because for decades I have been teaching myself new tech (and law), and by now I prefer to learn by doing. But most people are not like me; they do not enjoy hacking around with tech (and law) while always having far more questions than answers. That is one reason I have been laboring on a prompt engineering course for legal professionals on generative AI, but I digress.
Finally, Sam moves on to what he has to be grateful about in 2024: how OpenAI went from about 100 million weekly active users to more than 300 million, how the AI has gotten better, and so on. He then affirms that OpenAI’s vision of AGI remains unchanged, but “our tactics will continue to evolve.” He then explains with examples the need for new tactics.
For example, when we started we had no idea we would have to build a product company; we thought we were just going to do great research. We also had no idea we would need such a crazy amount of capital. There are new things we have to go build now that we didn’t understand a few years ago, and there will be new things in the future we can barely imagine now.
Sam Altman, “Reflections” (samaltman.com, 1/5/25).
Then he talks about his “release early, break things, and fix them” approach to AI safety, one that many people think is too risky, including the 2024 Nobel Prize winner, Geoffrey Hinton, the so-called “Godfather of AI.” See e.g., Losey, Key AI Leaders of 2024: Huang, Amodei, Kurzweil, Altman, and Nobel Prize Winners – Hassabis and Hinton. Although Sam Altman talks about safety all the time, many people, not just Hinton, think it is all talk, and that the core of his “safety philosophy” is to release AI into the world and then learn from experience, including mistakes. I agree you do learn quickly that way, and such an approach is now favored by the likes of Elon Musk, but are the risks acceptable? Can we really afford to break things and try to fix them fast? Time will tell.
We do need the benefits of AGI to fix the overwhelming problems humanity now faces. Plus, Altman and others seem to be well aware of many of the risks. See The Intelligence Age (9/23/24 Altman blog on a positive vision of AI) and Who will control the future of AI? (7/25/24 Washington Post editorial by Altman on political risks). Still, I wonder whether the billionaires might overestimate their ability to pivot out of the mistakes that quick decisions inevitably bring.
Altman concludes his Reflections essay with a forward-looking vision, expressing confidence that OpenAI is on track to build AGI. As a key milestone, he outlines the development of advanced AI “agents” as the immediate focus for 2025—a crucial interim step toward his ultimate goal. Yet, it’s clear that Altman’s real aspiration lies beyond AGI, toward the creation of superintelligence, a phase he boldly calls the glorious future. He believes we’ll get there sooner than most expect. Personally, I remain skeptical. Bold visions are inspiring, but history teaches us that progress, especially in AI, is rarely linear or predictable.
We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.
Sam Altman, “Reflections” (samaltman.com, 1/5/25).
Conclusion
It will take a lot of work by everyone, lawyers included, for a glorious future to emerge. An awful future could just as easily emerge, one where we fail to overcome the many AI dangers ahead, a world where we make the wrong choices along the way to the ever-promised paradise. Let’s hope the Universe you and I end up in is better than what we have today, far better. Quantum Leap: Google Claims Its New Quantum Computer Provides Evidence That We Live In A Multiverse (JD Supra, 1/7/25).
For lawyers, this challenge is both daunting and exhilarating. We are the architects of legal frameworks, the stewards of ethical boundaries, and the guardians of societal trust. Obviously AI wrote that last sentence, which I left in for laughs. Sounds great, but the reality is we are also the hard-working saps who have to clean up the messes that others make.
Beyond triage and twisted torts, we also face the monumental task of keeping the law relevant and effective amid the dawn’s early light of emergent AI. We are forced to confront fundamental questions: What does personhood mean in an age of machines? How do we balance innovation with accountability? And who will bear the burden of mistakes made by entities far beyond human comprehension?
The answers to these questions will not emerge from theory alone but through practice and perseverance. Like Altman’s own philosophy of learning by doing, the legal profession must embrace AI as both a challenge to be met and a tool to be mastered. Lawyers must be ready to ask hard questions, propose daring, creative solutions, and confront uncomfortable truths about the role of the legal profession.
Ultimately, whether we arrive at a better world will depend not on technology alone but on humanity’s ability to guide it with wisdom, compassion, and justice. Lawyers must help steer that course. The future is neither inevitable nor fixed. It is, as always, a series of choices.
Let us ponder deliberately, consider all of the facts, all of the equities, and do our best to make the right decisions.
Now listen to the EDRM Echoes of AI podcast of the article, Echoes of AI on Sam Altman’s 2024 Year End Essay: REFLECTIONS. Hear two Gemini model AIs talk about this article, plus their responses to audience questions. They wrote the podcast, not Ralph.
Ralph Losey Copyright 2025 – All Rights Reserved
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.