Will AI Take My Job? OpenAI’s New Policy, Rising Cybersecurity Risks, and What Comes Next

Will AI Take My Job by Ralph Losey
Image: Ralph Losey with Gemini AI tools.

[EDRM Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. Writing and images in the article are by Ralph Losey using Gemini AI. Originally published on EDRM.net. This article remains the intellectual property of Ralph Losey and is shared with permission.]


Introduction: The Urgency of the Question

Will AI take my job?

That question is no longer speculative. It is now front-page relevant, driven not only by rapid advances in AI, but by two recent events that reveal how quickly things are changing. On April 6, 2026, OpenAI released its Industrial Policy for the Intelligence Age, openly warning that the transition to superintelligence is already underway. Just days earlier, a human error at Anthropic briefly exposed the source code of one of the world’s most advanced AI systems. It was quickly copied and distributed before the mistake was corrected. It is reportedly now in the hands of criminal hackers and enemy states worldwide. Together, these developments make one thing clear: the future of work is arriving faster, and far less predictably, than most expected.

In my recent article, What People Want To Know About AI: Top 10 Curiosity Index, Gemini AIs and I analyzed global search patterns and online discussions to identify the public’s most urgent concerns. The number one question, “How does AI work?”, was addressed in my follow-up article, Five Faces of the Black Box: How AI ‘Thinks’ and Makes Decisions, where we explained the technology across five levels, from a child’s guessing game to matrix algebra.

But the second question is different.

Will AI take my job—and what should I do about it?

This accounted for roughly 18% of all inquiries. And unlike the first question, it is not driven by curiosity. It is driven by anxiety, something I hear and feel in conversations about AI with all kinds of people.

Crowded futuristic city street, humans looking up at lighted signs for the workforce evolution, robots on the street as well
All images in this article are by Ralph Losey using Gemini AI tools.

This article focuses on that anxiety: economic security and the future of work. It also confronts the issue people increasingly want answered but rarely get: the timeline. When might AI reach a level capable of performing most cognitive work better than us? Because if that point is near, and recent signals suggest it is, then the implications are profound. Most knowledge-based jobs would be affected, and the resulting disruption to the economy and social order could be significant.

The Policy Response: OpenAI’s Industrial Blueprint


The urgency of this economic question is not limited to the public. It is also front and center for the corporations building the technology. On April 6, 2026, OpenAI released Industrial Policy for the Intelligence Age (“Policy Statement”), and it is likely that other leading AI companies will soon follow. This document moves beyond engineering into economic and social policy. It begins with a blunt premise: the transition to superintelligence is already underway and will reshape how organizations operate, how knowledge is created, and how people find meaning and opportunity.

The Policy Statement does not minimize the disruption ahead, or the speed at which it may arrive. It acknowledges that AI will disrupt jobs and reshape entire industries at a scale and pace unlike any prior technological shift. At the same time, OpenAI’s leadership emphasizes that the outcome is not predetermined. Whether this transformation leads to shared prosperity or to concentrated wealth and widespread displacement will depend on decisions made now, by governments, corporations, institutions, and individuals.

I encourage you to read the Policy Statement in full. It addresses far more than job security. My focus here is narrower: the economic implications. On pages 3 and 4, the Policy Statement explains:

The Case for a New Industrial Policy. Society has navigated major technological transitions before, but not without real disruption and dislocation along the way. While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education. 

History shows that democratic societies can respond to technological upheaval with ambition: reimagining the social contract, mediating between capital and labor, and encouraging broad distribution of the benefits of technological progress while preserving pluralism, constitutional checks and balances, and freedom to innovate. The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone…

On this path to superintelligence, there are clear steps we need to take today. People are already concerned about what AI will mean for their lives—whether their jobs and families will be safe, and whether data centers will disrupt their communities and raise energy prices. AI data centers should pay their own way on energy so that households aren’t subsidizing them; and they should generate local jobs and tax revenue. Governments should implement common-sense AI regulation—not to entrench incumbents through regulatory capture but to protect children, mitigate national security risks, and encourage innovation. 

OpenAI released a companion video the same day as the Policy Statement, titled Sam Altman on Building the Future of AI (“Video”). At 26:08, the discussion turns directly to jobs. Joshua Achiam, OpenAI’s Chief Futurist, addresses the issue candidly:

On getting workers involved in AI, I actually, I kind of want to back up and just acknowledge an elephant in the room, which is that a lot of workers are concerned about AI; they’re worried about what AI means for them. They are not immediately excited at the prospect of figuring out, all right, how are we going to use AI in our workplace? They’re thinking, oh my gosh, is the AI going to replace me?

The public is no longer satisfied with abstract reassurances. People want timelines. They want industry-specific forecasts. They want to know whether their job will still exist in five years. Both the Policy Statement and the Video point in the same direction: highly capable AI systems are coming quite soon, much faster than most expected. 

Better get it right, Sam.
Images: Ralph Losey using Gemini AI tools.

More Training Now for Job Security Tomorrow?

For many years my usual answer to the jobs question has been more training now. That answer may not cut it today for a majority of people, especially if AI advances too fast, too far. For instance, in Can AI Really Save the Future? A Lawyer’s Take on Sam Altman’s Optimistic Vision (Oct. 2024) I opined:

AI will create entirely new jobs. For instance, for lawyers, new jobs pertaining to AI regulations are emerging. AI will also change existing jobs for the better. It is already replacing the most boring parts of our work, leaving us to focus on the more rewarding and human aspects. Moreover, it is true that no worker will be replaced by an AI, they will be replaced by a human that knows how to use AI.

Now I am not so sure, and neither is Sam Altman. The prospect of superintelligence is no longer a distant future. It is a planning horizon.

To address the question of human employment in a world of increasingly powerful AI, an issue well beyond my unaided ability to resolve, I turn to a Panel of AI Experts. For this exercise, I use OpenAI-based models that I have fine-tuned for analysis across multiple disciplines. They are not superintelligent, but they are highly capable and broadly informed. They created a panel of five AI expert personas to address these issues. The only persona I required was the “devil’s advocate,” because I have found that AI type indispensable to brainstorming exercises like this. I did not specify any other character, not even the first one chosen, the “Centaur” Professional, although I must admit he sounds just like me.

Human in a lit circle with centaur, sin-eater, advocate, human edge and startup in a box. Intense man, with fingers outstretched with light going to each facet.
The Human in the Loop should remain in charge and verify AI work.
Image: Ralph Losey using Gemini AI tools.

Voice 1: The “Centaur” Professional (The Hybrid Advocate)

Persona: The pragmatic professional who has fully integrated AI, but remains firmly in control. For background see my From Centaurs To Cyborgs: Our evolving relationship with generative AI (April 2024). Except for the citations that follow, all of the language from here to the Conclusion was written by the AIs, not me.

The Perspective: Let’s begin with a reality check. You’re more likely to lose your job to someone using AI than to AI itself. That single sentence cuts through most of the noise.

The fear of immediate, total automation misunderstands how work actually happens. We do not operate on smooth technological curves; we operate on what researchers call a “jagged frontier.” AI excels at certain tasks and fails at others, often unpredictably. This is why hybrid human-AI teams—Centaurs—consistently outperform both humans alone and AI alone. Recent research suggests improvements approaching 70% in certain knowledge-work domains. [See, e.g., The New Stanford–Carnegie Study: Hybrid AI Teams Beat Fully Autonomous Agents by 68.7%. Also see research and reports of top expert teams in Navigating the Jagged Technological Frontier (Working Paper 24-013, Harvard Business School, Sept. 2023) and my Navigating the AI Frontier (Oct. 2024).]

In law, AI can draft a brief in seconds. But it cannot sign it. It does not carry malpractice insurance. It does not stand before a judge. It cannot be sanctioned—or disbarred.

In medicine, AI may catch patterns a doctor misses. But patients do not sue algorithms—they sue physicians.

Sam Altman himself has described using AI to analyze medical data more effectively than his own doctor. Yet no serious observer concludes from this that doctors are obsolete. The conclusion is simpler:

Doctors who use AI will replace doctors who do not. The same applies across professions.

The future belongs to the Centaur—the professional who augments judgment with machine intelligence, but never abdicates responsibility.

Your job is not disappearing. The drudgery is.

As I explained in The Great AI Transition: From Tool to Teammate (June 2024), “the real shift is from doing the work to supervising it, humans move up the chain of responsibility, not out of the system.” [The AI hallucinated this article and cite, which obviously was supposed to refer to one of my articles. I am embarrassed to say that the title and quote sounded so plausible that I had to look it up to be sure. I then called the AI on this, and it admitted to the hallucination and apologized.]

AI professional-looking attorney in a courtroom with a jury.
The “Centaur” Professional (The Hybrid Advocate).
Image: Ralph Losey using Gemini AI tools.

Voice 2: The “Sin-Eater” (AI Risk & Accountability Officer)

Persona: The human firewall—absorbing legal and ethical responsibility for AI outputs.

The Perspective: The Centaur is right—but incomplete. Because every gain in capability creates a parallel demand for accountability.

Wharton’s Ethan Mollick coined the term “Sin-Eater” to describe a new role: the human who vouches for AI-generated work and bears the consequences when it fails. That role is not theoretical—it is inevitable.

As AI systems scale from minutes to months-long projects, the need for verification, auditing, and compliance will explode. OpenAI’s own policy proposals emphasize the need for an “AI trust stack”—auditing regimes, validation systems, and human oversight at every layer.

And then there is cybersecurity. Our current software ecosystem is already vulnerable. AI will amplify both offense and defense—but offense often scales faster. Sam Altman has warned openly: AI will become extraordinarily good at identifying software vulnerabilities. That means bad actors will too.

This creates a massive new labor demand. Not for passive users—but for active defenders. We will need an army of human-AI teams to audit, test, and secure critical systems. This is not optional. It is civilizational maintenance.

Sin-eater in hooded robe looking down, for accountability auditing and an AI bias map.
The “Sin-Eater” (AI Risk & Accountability Officer).
Image: Ralph Losey using Gemini AI tools.

Voice 3: The “Startup-in-a-Box” Entrepreneur

Persona: The solo builder with the leverage of a 100-person company.

The Perspective: Why is the conversation so focused on saving existing jobs? We are on the verge of the largest expansion of individual capability in human history.

Sam Altman has spoken repeatedly about a future where one person can build what once required an entire company. AI agents will handle coding, marketing, accounting, logistics—everything that currently creates friction.

The barriers to entry are collapsing.

Today, a brilliant nurse or mechanic might never start a business—not because of lack of skill, but because of administrative overhead. Tomorrow, that overhead disappears.

This is the rise of the micro-entrepreneurial economy.

Access to powerful AI tools—what some call a “Right to AI”—may become as foundational as access to electricity. With it, millions can create, compete, and innovate independently.

Yes, large bureaucracies may shrink. But they will be replaced by networks of highly capable individuals.

The question is not just “Will I lose my job?” It is also: “What could I build if friction disappeared?”

Young entrepreneur with AI tools like a revenue forecaster, pitch deck generator, PR etc.
The “Startup-in-a-Box” Entrepreneur.
Image: Ralph Losey using Gemini AI tools.

Voice 4: The “Human Edge” Advocate

Persona: The guardian of empathy, presence, and human connection.

The Perspective: The technologists talk about efficiency. But humans do not live on efficiency alone.

There is a category of value AI cannot replicate: being human with another human.

Josh Achiam of OpenAI described visiting a fully automated café. It worked perfectly. It was fast. It was efficient. It was also—his word—unfulfilling. He missed the human interaction.

This is the “Human Edge.”

As AI absorbs administrative and analytical burdens, the value of human-centered work will rise dramatically. Teaching, nursing, therapy, caregiving—these are not residual categories. They are foundational. Today, these professions are underpaid and overburdened. That is a policy failure—not a market inevitability.

If AI delivers the productivity gains promised, then society must redirect that surplus toward elevating human-centered roles. The OpenAI policy proposals explicitly call for this—training pipelines, wage support, and institutional investment.

The future of work is not less human. It is more human than ever.

Human Edge advocate with heart, scales, ethics
The “Human Edge” Advocate.
Image: Ralph Losey using Gemini AI tools.

Voice 5: The Contrarian “Devil’s Advocate”

Persona: Skeptic. Realist. Immune to hype.

The Perspective: Let’s stop romanticizing. Disruption is coming—and it will be uneven, painful, and fast. Telling displaced workers to “become entrepreneurs” or “pivot to caregiving” is not a plan. It is a slogan. We need structural reform.

If AI concentrates wealth in capital rather than labor, then the tax system must adapt. That means shifting the burden toward capital gains, corporate profits, and possibly automated labor.

We must also decouple basic security from employment. Healthcare, retirement, and income stability cannot depend entirely on whether your job survives automation.

And yes—let’s talk about infrastructure. We have a decaying physical world and a fragile digital one. Meanwhile, AI companies are generating enormous wealth. That wealth must be reinvested.

A modern New Deal—focused on infrastructure, cybersecurity, and energy—is not just desirable. It is necessary.

This is not anti-capitalist. It is pro-stability.

Cranky-looking devil’s advocate, with red devil on shoulder.
The Contrarian “Devil’s Advocate”.
Image: Ralph Losey using Gemini AI tools.

Conclusion: Responsibility at the Edge of Superintelligence

This panel reveals a truth that resists simplification: the future of work in the age of AI is difficult to predict. At this point it could go either way.

Personally, I am now more inclined to agree with the curmudgeon Contrarian than the mini-me Hybrid Advocate. That is a change for me. It reflects a growing concern that the risks may be advancing faster than the benefits. The real question is whether we, and our institutions, can adapt quickly enough.

The practical advice is straightforward. Begin serious AI training now. At the same time, explore work where the human edge still matters. You may find not only greater security, but greater satisfaction.

Above all, hold the new centers of power, economic and technological, to their obligations. Stand for both human rights and progress. We should be able to do both. In today’s world, we have no choice. It is too dangerous to stand still.

Superintelligence may drive the engine of the future. But I continue to insist that humanity must remain firmly and responsibly at the wheel.

All of the previous characters sitting down for a panel discussion in front of an audience, the AI Centaur Professional standing to speak.
Image: Ralph Losey using Gemini AI tools.

Ralph Losey Copyright 2026 – All Rights Reserved. Originally published on edrm.net.
Assisted by GAI and LLM Technologies per EDRM’s GAI and LLM Policy.

Author

  • Ralph Losey

    Ralph Losey is a lawyer, tech researcher, educator and writer. After 45 years of legal practice with several local and national law firms, Ralph retired in 2026. He continues his service to the profession as CEO of Losey AI, LLC, providing non-legal educational services on AI and quantum law.

    Ralph has long been a leader among the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world and written over two million words on AI, e-discovery, quantum and other tech-law subjects, including seven books.

    Ralph has been involved with computers, software, legal hacking, and the law since 1980. Ralph had the highest peer AV rating as a lawyer and was consistently selected as a Best Lawyer in America in four categories: E-Discovery and Information Management Law, Information Technology Law, Commercial Litigation, and Employment Law - Management. For his full resume and list of publications, see his e-Discovery Team blog.

    Ralph has been married to Molly Friedman Losey, a mental health counselor in Winter Park, since 1973 and is the proud father of two children and grandfather of two more.