Panel of Experts for Everyone About Anything – Part Three: Demo of 4o as Panel Driver on New Jobs

By Ralph Losey
Image: Ralph Losey with AI.

[EDRM Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work. All images in the article are by Ralph Losey using AI. This article is published here with permission.]

This is the conclusion to the article Panel of Experts for Everyone About Anything – Part One and Part Two. Here we give another demonstration of the software described in Part One. Part Two provided a demonstration in which the software used OpenAI's advanced model, ChatGPT 4.5. That demonstration included expert discussion of the hot issue of future employment. Here we use the free version, ChatGPT 4o, to see how it does and to get another perspective on the new jobs that will be needed with AI. The results are surprising.

Steampunk-themed discussion with about a dozen passionate advocates around a table
Expert panels debating. All images in article are in Steampunk style this time and by Ralph Losey using AI.

Introduction

In Panel of Experts for Everyone About Anything – Part Two, we demonstrated how the Panel software works to analyze an article. We picked a NYT Magazine article that predicted 22 new jobs that would be created by AI. The demonstration used ChatGPT 4.5 as the engine. OpenAI now allows a user to select any model to which they have access to drive a custom GPT. (For reasons unknown, this feature currently does not work on the Mac desktop version of ChatGPT.) The 4.5 model used in Part Two now requires an upgraded subscription, whereas model 4o is still free.

The cost of 4.5 may be beyond the reach of legal aid groups and some in the public using it for non-law-related questions. Thus, to be consistent with the public service intent behind the free software mentioned in Part One, we here demonstrate the software using ChatGPT 4o. For ease of comparison, and because the question of new jobs created by AI is so important, we repeat the issue again but throw in a few variations to keep it interesting.

Ten people around a table arguing with a dirigible in the sky overhead.
Panels of Experts do battle to see who is the best. Image by Ralph Losey using AI.

How well will the ChatGPT 4o expert team do against the allegedly better 4.5 model? Will we learn anything new about jobs created by AI in Part Three that we did not already know from Part Two? Which panel is the winner? Which gave the best demonstration?

Read on and find out, illustrated this time in classic steampunk style, because, after all, once Einstein discovered relativity and science quickly jumped into the quantum, shouldn't we have continued to advance at that accelerated rate and discovered AI long ago? Think where we'd be today if we had, eh, Mr. Babbage?

Babbage near pipes framing a circuit.
Charles Babbage (1791-1871), ‘father of computers’ shown in steampunk style by Losey. What took us so long?

First Demo of 4o Model Version

In some respects, I like the 4o version of the session better than the 4.5 version shown in Part Two. Here are the key excerpts from the response of the Panel of Experts for Everyone About Anything when we repeated the same initial user prompt, namely to analyze the NYT article by Robert Capps, A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You.

4o PANEL: Let’s curate a panel to explore the implications of the NYT article “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”. Before we begin the panel discussion, I’ll propose eight candidate experts based on the article’s focus on AI disruption, new job creation, and socio-technical dynamics. You’ll choose four, and I’ll automatically add the mandatory Contrarian.

🎯 Article Focus:

  • Trust-based roles (e.g., AI auditors, ethicists)
  • Integration roles (e.g., AI plumbers, integrators)
  • Taste and creative decision-making (e.g., world designers)
  • Socioeconomic shifts due to AI (entrepreneurial entry-level roles)

👥 Suggested Panelists:

  1. Ethical AI Lawyer – interprets legal responsibility and liability frameworks for trust-based roles.
  2. AI Engineer – evaluates technical feasibility and skill demands of new AI-centric jobs.
  3. Creative Storyteller – articulates the evolution of taste-driven roles like world designers and content curators.
  4. Risk Officer – identifies governance and regulatory gaps in emerging hybrid AI-human workplaces.
  5. Data Visionary – maps future employment and systemic transformation under AI’s rise.
  6. Litigation Strategist – examines legal risks of roles like “legal guarantor” or “AI auditor” under current jurisprudence.
  7. Public Health Advocate – reflects on AI’s impact on human dignity, trust, and socio-emotional labor.
  8. Quantum Analyst – cross-reads how AI role disruption mirrors past technology shifts (like CAD/CAM or simulation in science).

Please select four experts from this list. I’ll then add the Contrarian and begin the panel.

USER: I choose 1, 2, 3 and 5.

4o PANEL: . . .

Happy revelers (4) shaking hands with one robot.
Panel on new jobs created by AI in steampunk style by Losey.

The article’s treatment of trust-based roles like “AI auditors,” “legal guarantors,” and “trust authenticators” directly engages legal norms surrounding delegation of accountability. These roles resemble constructs already debated in U.S. case law like Mata v. Avianca Inc., where counsel submitted AI-generated briefs with hallucinated citations, prompting sanctions and affirming that humans remain the accountable party despite automation.

The concept of the “sin eater” — someone who absorbs A.I.’s mistakes — raises sharp questions about the limits of vicarious liability and professional standards. No matter how advanced the AI, current jurisprudence requires traceability to a human agent. This aligns with the Fed. R. Evid. 901(b)(9) authentication requirements for complex systems, demanding explainability and provenance.

Further, the emergence of “AI translators” and “ethics boards” intersects with legal debates over due process in algorithmic decision-making, especially under frameworks like Article 22 of the GDPR and proposals for AI-specific tort law in the EU AI Liability Directive. These developments could create novel duties of disclosure and internal governance.

Bottom line: AI may generate drafts, workflows, or evaluations, but legally enforceable trust must still be human-certified.

Man dissecting a robot.
Engineer dissecting AI in steampunk style by Ralph Losey.

🧠 AI Engineer — Dissecting Feasibility and Tech-Skill Transitions

The article outlines plausible near-future roles such as AI plumbers, virtual twin managers, and integration specialists — all of which track well with emerging enterprise needs. From a systems engineering standpoint, each represents a layering of human interface over increasingly agentic AI.

A role like “AI plumber” is especially apt: as we move to multi-agent, multi-modal systems, debugging becomes less about code and more about tracing decision nodes through dynamic graphs. Tools like LangSmith or OpenTelemetry for AI will support this, but human oversight must close the loop.

Equally pragmatic is the “AI trainer” role, particularly in retrieval-augmented generation (RAG) settings where corporate data must be contextualized with extreme care. Hallucination risk escalates when source curation is poor — an engineer versed in prompt architectures and data labeling becomes indispensable.
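The engineer's point about source curation in RAG can be made concrete with a toy sketch. Everything here is an illustrative assumption, not a production RAG API: real pipelines rank by embedding similarity, while this sketch ranks curated documents by simple word overlap. The point it demonstrates is the one in the paragraph above: the model can only be grounded in what the retrieval step returns, so poor curation invites hallucination.

```python
def retrieve(query, corpus, k=2):
    """Toy retrieval step of a RAG pipeline: rank curated documents
    by word overlap with the query. Illustrative only; real systems
    use embedding similarity, not set intersection."""
    def score(doc):
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d)
    return sorted(corpus, key=score, reverse=True)[:k]

def grounded_prompt(query, corpus):
    # The retrieved context is prepended so the model answers from
    # curated sources instead of free association.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQ: {query}"
```

If the corpus contains off-topic or unvetted documents, the same mechanism faithfully grounds the model in bad sources, which is exactly why the curation role matters.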

However, I’d caution that some roles (e.g., “AI personality director”) may prove transient or cosmetic. Unless this impacts UX or regulatory tone (e.g., health chatbots), it will likely be a rebranded comms function.

Still, the job market is aligning. “AI consultant” and “integration lead” roles are skyrocketing per LinkedIn data. These aren’t speculative—they’re already hiring.

Half a dozen onlookers over the shoulder of a goggled engineer guiding the AI
Human taste guiding AI. Image by Losey using AI.

✨ Creative Storyteller — Reframing Jobs Around Taste, Not Tasks

This article is really about the return of judgment as labor. The shift from doing to curating, from generating to refining, is the beating heart of the “taste economy” the author envisions.

Take the imagined roles of “story designer,” “differentiation designer,” or “world builder.” These all recast creativity as orchestration — selecting from near-infinite AI outputs to assemble emotionally resonant or brand-coherent narratives. In storytelling terms, we’re moving from penmanship to direction.

The Rick Rubin anecdote nails it: craft may fade, but curation ascends. Rubin’s “confidence in his taste” mirrors what’s being asked of tomorrow’s knowledge workers. We may not sculpt every sentence, but we’ll decide which sentences shape the message.

This resonates with media trends already: TikTok creators “direct” AI-generated visuals, musicians remix AI stems, and even journalists act more like narrative editors than sole authors. The job isn’t dying—it’s mutating.

And taste isn’t just aesthetic. It’s ethical, strategic, and human. As the article wisely closes: “We are the designers of our AI future. Let’s hope we have great taste.”

Two human puppeteers with three robot puppets.
Robot Puppet Theatre in steampunk style by Losey using AI.

🛰 Data Visionary — Mapping the Labor Realignment

The piece offers more than speculative roles—it maps a nascent transformation in the labor-capital interface. It posits three durable human moats: trust, integration, and taste. This triptych aligns with observed market inflections.

Trust anchors around accountability and transparency: humans ensure AI outputs meet institutional thresholds. Integration speaks to workflow orchestration—systems-thinking roles that bridge software and organizational context. Taste, the most abstract, becomes a differentiator in saturated generative landscapes.

This trifocal framework is not utopian. It mirrors what Erik Brynjolfsson calls the “complementarity imperative”: value will accrue where humans can amplify machine leverage, not where they replicate it. The World Economic Forum’s projection of 11 million new jobs by 2030 versus 9 million displaced jobs captures the macro realignment. [EDITOR: Future of Jobs Report 2025 (WEF, 01/7/25) at pg. 25 (“trends in AI and information processing technology are expected to create 11 million jobs, while simultaneously displacing 9 million others, more than any other technology trend.”)]

We must recognize, however, the uneven geography of adoption. As the article’s robotics example shows, access to “AI integrators” could determine regional economic divergence. Without policy scaffolding (training pipelines, local incentives), AI may exacerbate inequality.

Still, this is no mere displacement story. It is a diversification story, seeded with entrepreneurial possibility.

Fabricated 2030 US map showing 11 million new jobs
Fictitious vision of where new jobs may appear and old be replaced. Image by Ralph Losey using AI.

⚔️ The Contrarian

The panel indulges in optimism but skirts real constraints. “AI trainer”? Perhaps. But many orgs will under-invest in proper RAG or hallucination testing. “AI plumber”? Realistically, that’s a role for the already-employed IT person, not a new hire.

The legal take? Idealistic. In practice, AI auditors will lack statutory teeth unless regulation mandates it—and current efforts like the EU AI Act remain unenforced in key domains.

And “taste” as job defense? That’s privileged territory. It assumes stability in jobs where creative discretion is rewarded. But what of the displaced clerk, paralegal, or content writer who once did real intellectual work?

Lastly, from an ethical risk lens:

  • Bias: Are these new roles equally accessible? Or do they reinforce existing inequities?
  • Privacy: Will AI integrators mishandle sensitive data pipelines?
  • Regulatory: Could roles like “AI translator” mask liability via plausible deniability?

We risk over-indexing on shiny titles while ignoring structural frictions.

Cranky looking man staring at a glass half full.
Contrarians tend to see the glass as half empty. Image by Losey.

Would you like to follow up, clarify, or pivot?


USER: Review each of the new jobs that the article mentions may be created by AI.

4o PANEL:

Here’s a comprehensive job-by-job breakdown of the new roles suggested in “A.I. Might Take Your Job. Here Are 22 New Ones It Could Give You”. The article groups these under three themes: Trust, Integration, and Taste. I’ve organized the roles accordingly and added brief descriptions for each.

🛡 TRUST-Based Roles

Roles focused on ensuring transparency, accountability, and human oversight:

  1. AI Auditor
    Verifies what AI systems are doing, why, and whether outcomes meet legal or operational standards.
  2. AI Translator
    Bridges technical and business domains by explaining AI behavior and outcomes to non-expert stakeholders.
  3. Fact Checker (AI-Aided)
    Verifies AI-generated content for errors, hallucinations, and compliance issues.
  4. Compliance Officer (AI-Aware)
    Reviews AI-generated outputs in sensitive domains like law, finance, and health to ensure regulatory alignment.
  5. Trust Authenticator / Trust Director
    Ensures users can rely on AI-generated outputs, enforcing policies for fairness, accuracy, and traceability.
  6. AI Ethicist / Ethics Board Member
    Constructs defensible logic for decisions made by AI, ensuring ethical oversight and transparency.
  7. Legal Guarantor
    A licensed or certified person who legally vouches for AI-assisted work, assuming liability (akin to a “sin eater”).
  8. Consistency Coordinator
    Validates uniformity of AI output across systems (e.g., ensuring consistent product visuals or messaging).
  9. Escalation Officer
    Handles cases where AI fails emotionally or functionally—especially in customer service and education.
Two business men talking with two robots and three lab people.
Buyer and seller must establish trust in robots to make sale. Combined styles in image by Losey.

🛠 INTEGRATION-Oriented Roles

Roles aimed at connecting AI capabilities to organizational workflows and technical systems:

  1. AI Integrator
    Designs and implements AI adoption strategies within specific business functions or verticals.
  2. AI Plumber
    Diagnoses and fixes errors in layered AI systems, often dealing with agentic or multi-step workflows.
  3. AI Assessor
    Continuously evaluates different AI models for specific uses (e.g., determining hallucination rates or performance drifts).
  4. AI Trainer (Domain-Specific)
    Feeds proprietary data into AI systems and fine-tunes model outputs for organizational accuracy.
  5. AI Personality Director
    Curates the “voice” and tone of an organization’s AI agents to match brand identity or customer expectations.
  6. Drug-Compliance Optimizer
    Uses AI to monitor and improve medication adherence—potentially in hospitals, insurance, or elderly care.
  7. AI-Human Evaluation Specialist
    Identifies tasks best suited for humans, AI, or hybrid approaches, especially in safety-critical industries.

🎨 TASTE-Driven Roles

Creative and strategic roles where human judgment and vision are irreplaceable:

  1. Product Designer (AI-Enabled)
    Expands beyond visuals to include brand strategy, user interaction, and narrative consistency, directing AI tools creatively.
  2. Article Designer
    Orchestrates content generated by AI—selecting structure, tone, and messaging across platforms.
  3. Story Designer
    In film, TV, or gaming, crafts the underlying plot or world that AI then fleshes out.
  4. World Designer
    Creates fictional or themed environments for marketing, gaming, or entertainment—used by AI to generate media.
  5. Human Resources Designer
    Shapes company culture, training, and employee experiences using AI-curated content and interfaces.
  6. Differentiation Designer
    Strategizes how a company distinguishes itself when competitors use the same powerful AI tools—aligning product, brand, and communication.
Man with checklist, 4 robots lined up, 4 people happily cheering.
Steampunk taste team selected! Click to see celebration. Images, video by Losey.

Would you like me to visualize these 22 roles in a chart or cluster them by skillsets or industries?


USER: I would like the whole panel to discuss the “Sin Eater” job proposed by Wharton Business School Professor and noted AI expert, Ethan Mollick.

Ethan Mollick steampunk style.
Ethan Mollick in Steampunk style by Losey.

4o PANEL:

The “Sin Eater” role, as Mollick frames it, carries significant legal implications. It reflects the enduring legal doctrine that accountability cannot be delegated to a machine. In tort law, liability flows to the agent of record—currently, only humans or legal entities can occupy that role.

This aligns with precedents such as Mata v. Avianca Inc., where AI misuse led to sanctions because no human properly vetted its output. The sin eater, therefore, functions not just as a compliance actor but as a liability firewall, absorbing both reputational and legal risk.

But this framing raises concerns. Can one be a guarantor without actual authorship? If AI makes the decision and the sin eater merely signs off, we risk creating paper accountability. Regulatory models like the EU AI Act are moving toward traceability mandates, but as yet, U.S. law lacks statutory recognition of such roles.

In short, the sin eater concept is both legally plausible and ethically fraught—it acknowledges human limits while asking them to bear machine risk.

AI Ethics attorney
Sin Eaters will employ many lawyers. Image by Ralph Losey AI.

🧠 AI Engineer — Technical Risk ≠ Human Blame

From a systems view, the “Sin Eater” is a human checksum: the last human in the loop before AI action becomes irreversible. It’s a reactive model of risk management—and one that’s arguably too brittle.

We need better forward defenses, not just cleanup personnel. Sin eaters are useful when explainability fails or outputs are inherently opaque. But the real problem is epistemic opacity: even experts can’t always trace LLM decisions.

Instead, we should engineer auditable AI stacks, use differential logging, and isolate decision modules for forensic review. Relying on a human “approver” as a fix-all invites rubber-stamping, not real scrutiny.

And technically, sin eaters will need tooling—model trace maps, hallucination flags, metadata dashboards. Without this infrastructure, we’re just asking someone to trust what they can’t parse.
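The engineer's "auditable AI stack" idea can be sketched in a few lines. The wrapper, log format, and field names below are invented for this illustration and are not any real library's API: the point is only that every generation leaves a traceable record a human reviewer can later inspect, instead of asking the sin eater to vouch for output with no provenance.

```python
import hashlib
import json
import time

def audited_generate(model_fn, prompt, log):
    """Illustrative sketch of an auditable generation call: the model
    is invoked through a wrapper that records content hashes and a
    timestamp, so a later reviewer can trace what the model saw and
    said. model_fn and the record schema are assumptions, not a real
    vendor API."""
    output = model_fn(prompt)
    record = {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "flagged_for_review": None,  # to be filled in by the human approver
    }
    log.append(json.dumps(record))
    return output
```

Hashing rather than storing raw text is one design choice among many; a real system would also need retention policies and access controls, which is precisely the tooling gap the engineer describes.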

✨ Creative Storyteller — A Mythic Role, Made Literal

“Sin Eater” is such an evocative term because it draws from ritual and folklore. Historically, sin eaters consumed bread over a corpse, symbolically absorbing the sins of the dead. In this context, the sins are hallucinated citations, algorithmic bias, or wrongful denials.

That metaphor frames the sin eater as both guardian and scapegoat. They bear blame to protect the system’s perceived legitimacy. That’s powerful—but also dangerous. Mythic roles lack clear scope. Is the sin eater a validator? A fall guy? A moral compass?

We risk theatrical accountability: jobs that feel weighty but do little to prevent harm. The storytelling matters, because it shapes how we see responsibility. Are they heroes, martyrs, or cover-ups?

The role’s narrative weight is real, but it must be grounded in operational clarity.

Man prone, with eyes closed, two men eating above him, label "Sin eaters."
Combined steampunk and medieval manuscript styles of historical Sin Eaters by Losey.

🛰 Data Visionary — Institutionalizing the “Last Human Standing”

Mollick’s metaphor reveals a deeper systemic truth: as AI scales, we need roles that enforce last-mile responsibility. The sin eater becomes the institutional locus of that burden.

In distributed systems, there’s a similar construct: the circuit breaker. When cascading failures occur, this mechanism stops the chain. The sin eater is a social circuit breaker. But who designs the thresholds? Who funds the liability insurance?
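The circuit breaker the Data Visionary invokes is a real software pattern, and a minimal sketch shows why the analogy fits. The class below is illustrative only, not any particular library's implementation: after repeated failures it "opens" and blocks further calls until a human resets it, which is the sin eater's gatekeeping role in miniature.

```python
class CircuitBreaker:
    """Minimal sketch of the circuit-breaker pattern: after too many
    failures, stop calling the downstream system until a human (or a
    timeout, in fuller implementations) resets it. Names and thresholds
    here are illustrative assumptions."""

    def __init__(self, failure_threshold=3):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.open = False  # open circuit = calls blocked

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: human review required")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True  # trip the breaker
            raise
        self.failures = 0  # a success resets the count
        return result

    def human_reset(self):
        """The 'sin eater' step: a person inspects and closes the circuit."""
        self.failures = 0
        self.open = False
```

The hard questions the panelist raises live in the parameters: who sets `failure_threshold`, and who is authorized to call `human_reset`.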

Scaling this requires:

  • Certification bodies to credential sin eaters
  • Policy frameworks (e.g., “AI final-signoff registries”)
  • Liability shields akin to whistleblower or auditor protections

Done well, this role could professionalize AI accountability, akin to CPAs or actuaries. But without those structures, it’s a fig leaf—blaming humans for systemic opacity.

Man pulling electrified lever
Last Man Standing role shown in steampunk style by Losey. Only brave humans need apply for this job.

⚔️ The Contrarian

This entire discussion sanitizes an ugly reality: the sin eater is a liability sink, not a solution. It externalizes blame from the system designer to a human end-point. It masks architectural unaccountability.

And let’s check the ethical risk boxes:

  • Bias: Who becomes the sin eater? Likely junior, marginalized, or contractual workers—sacrificed for optics.
  • Regulatory: There’s no statute affirming this role. It’s an HR workaround, not a legal safeguard.
  • Hallucination: Will they detect model lies in high-stakes contexts (e.g., medical or legal reasoning)? Doubtful, unless they are domain experts.

In sum, the sin eater is not a visionary role—it’s institutional cowardice disguised as ethics.

Would you like to follow up, clarify, or pivot?

AI Contrarian lawyer
Nobody likes the contrarian lawyer. Steampunk image by Losey.

New Request to the Panel

USER: Each panelist should imagine and describe in detail an entirely new line of work for humans that will likely be created by AI. It should not come within the scope of the job types mentioned in the article.

4o PANEL: Let’s break new ground. Each panelist below proposes a novel, AI-enabled human job that was not covered in the NYT article. These roles are speculative yet grounded in emerging technological and social trajectories.

🎓 Ethical AI Lawyer — Probabilistic Liability Architect

As AI systems increasingly operate on statistical inference (rather than deterministic logic), courts and regulators will need new doctrines for “probabilistic harm.” Enter the Probabilistic Liability Architect.

This legal role designs frameworks to allocate risk and assign partial liability when AI systems cause harm but no single actor is clearly negligent. For example:

  • A predictive policing system misidentifies a suspect with 70% confidence.
  • An autonomous delivery drone injures a bystander during path optimization.

The architect would:

  • Quantify and model AI system confidence thresholds.
  • Define shared responsibility regimes across vendors, deployers, and auditors.
  • Draft model legislation and policy for AI tort reform.

This job combines legal theory, policy design, and statistical reasoning—making it a bridge between law and machine logic.
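One way to picture the architect's "shared responsibility regimes" is a pro-rata apportionment of damages across actors. The function and weights below are a hypothetical sketch, not legal doctrine: real allocation would turn on statutes, contracts, and fact-finding, but the arithmetic shows the kind of model such a role would formalize.

```python
def apportion_liability(harm_cost, contributions):
    """Hypothetical sketch: split damages across actors in proportion
    to each actor's modeled contribution to the failure. The weights
    and the simple pro-rata rule are illustrative assumptions."""
    total = sum(contributions.values())
    return {
        actor: round(harm_cost * weight / total, 2)
        for actor, weight in contributions.items()
    }
```

For example, if a vendor, deployer, and auditor are modeled as contributing in a 5:3:2 ratio to a $100,000 harm, the sketch assigns them $50,000, $30,000, and $20,000 respectively.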

Probabilistic liability architect
Steampunk image by Losey.

🧠 AI Engineer — Synthetic Interaction Architect

As LLMs evolve into multi-agent swarms, orchestrating their interactions will become a full-time role. The Synthetic Interaction Architect manages the interplay of dozens or hundreds of autonomous AI agents assigned to a complex task (e.g., planning an international supply chain or modeling a political campaign).

Key functions:

  • Configuring inter-agent protocols (who talks to whom, when, and how).
  • Monitoring emergent behaviors and pruning dysfunctional collaboration patterns.
  • Designing reward structures that simulate human incentive alignment (e.g., competition, consensus, escalation).

Think of this as managing a hive mind—an ensemble of AI intelligences operating like a virtual team. The architect ensures synergy, not chaos.

Man looking at what looks like an interactive communication map.
AI Interaction Architect conducting a swarm of interactive AIs. Shown on steampunk style by Losey.

✨ Creative Storyteller — Emotion Wrangler

As AI becomes increasingly conversational and emotionally fluent, it will sometimes simulate emotions better than it understands them. This creates the need for an Emotion Wrangler—a specialist who trains, tests, and fine-tunes the emotional accuracy and appropriateness of AI systems.

They would:

  • Script nuanced emotional arcs for AI in education, therapy, or customer service.
  • Conduct “empathy audits” to ensure AIs express concern, humility, or apology convincingly—but not manipulatively.
  • Design cultural calibration modules (e.g., how grief or politeness differs across geographies).

This job safeguards the emotional realism of AI while guarding against artificial affect that deceives or coerces.

Woman at computer, with "Emotion wrangler" label on her uniform.
Humans can feel emotions, AI can’t. Emotion Wrangler in steampunk style by Losey.

🛰 Data Visionary — Synthetic Ecology Steward

Generative AI is already creating simulated environments for science, entertainment, and climate modeling. The Synthetic Ecology Steward curates and manages these ecosystems as long-running simulations with economic, environmental, and ethical stakes.

Examples:

  • A virtual planet used for testing urban planning at planetary scale.
  • A simulated biosphere for biotech AI to evolve new proteins.
  • An economic microcosm for experimenting with new market policies or governance rules.

This steward oversees:

  • The ethical boundaries of simulation (e.g., are emergent sentient entities possible?).
  • Cross-disciplinary calibration (from physics to sociology).
  • Integration with real-world policy levers or educational systems.

The role marries design, governance, and philosophy at the frontier of synthetic reality.

Man looking at steampunk terrarium in the shape of a globe. Man at control panel.
Synthetic Ecology Steward image by Visual Muse & Ralph Losey.

⚔️ The Contrarian — Moral Exit Facilitator

In the coming wave of AI-induced identity dislocation, some people will choose to opt out—not from society, but from systems that make them feel obsolete, surveilled, or coerced. The Moral Exit Facilitator helps these individuals ethically detach.

It’s not just a career counselor. It’s a combination of:

  • Data minimization advocate (helping users erase or obscure their digital footprint).
  • Self-sovereignty designer (tools for autonomy outside AI ecosystems).
  • Philosophical guide (navigating moral disengagement from pervasive automation).

The role critiques the system while enabling graceful withdrawal—offering humans a structured way to say: No, thank you.

Man smiling at woman who looks overjoyed, in front of an exit sign.
Helping people get tech-free, escape AI addictions. Steampunk style by Losey.

Would you like these roles visualized in a chart or used to seed a scenario?


USER: “Seed a scenario?” Not sure what that means, so sure, go ahead and do that, seed a scenario.

4o PANEL: Great — “seeding a scenario” means I’ll take those new job ideas and embed them in a realistic future situation, crafting a brief scenario that explores how these roles might interact, conflict, or evolve. Think of it as a mini-future vignette for testing policy, tech, or narrative ideas.

🌍 Scenario Seed: “The Eden Loop Disruption”

📅 Year: 2031

🧭 Setting: A high-fidelity AI simulation called Eden Loop, used globally for climate-risk modeling, educational immersion, and synthetic biodiversity experimentation.

The Eden Loop Disruption, with four people around a computer, with a globe showing ocean currents or wind patterns.
Steampunk image by Losey using Visual Muse.

Something goes wrong in the Eden Loop, and each of the new human roles works with the others to deal with the problems. Playing out the hypothetical would be interesting in a role-play workshop, but it is beyond the scope of this essay.

Conclusion

History tells us that all technology breakthroughs disrupt existing society and, at first, put some people out of work, but overall they always create more new jobs than they destroy. There is no reason to think it will be any different this time with AI. And there are many reasons to think the new jobs created will be far more interesting, creative, and enjoyable than the jobs lost. AI may even change the negative meaning the word work now carries for most people. Work will involve far fewer boring, drudgery tasks because those will all be done by the robots. Work and play may become synonymous.

Eight happy artisans around a project, with a very large AI robot doing work on the project.
Image of AI doing the hard dirty work while humans do the fun jobs. By Ralph Losey with his AI doing the boring drawing parts.

In the article that the ChatGPT 4o version of the Panel of Experts found, Future of Jobs Report 2025 (WEF, 01/7/25), the World Economic Forum predicted a significant shift in how work will be divided between humans and machines. In 2025, employers reported that 47% of work tasks were performed by humans alone, 22% by machines and algorithms alone, and 30% by a hybrid combination. By 2030, employers expect these task-delivery proportions to be nearly evenly split: one-third human only, one-third machine only, and one-third hybrid.

The World Economic Forum report explains that this is only a task-delivery proportionality measurement; it does not take into account the amount of work getting done in each of the three task categories. The Future of Jobs Report 2025, at page 27, goes on to state:

The relevance of the third category approach, human-machine collaboration (or “augmentation”) should be highlighted: technology could be designed and developed in a way that complements and enhances, rather than displaces, human work; and, as discussed further in the next chapter (Box 3.1), talent development, reskilling and upskilling strategies may be designed and delivered in a way to enable and optimize human-machine collaboration. It is the investment decisions and policy choices made today that will shape these outcomes in the coming years.

Man happy about jobs report, robot reading, five people looking on.
Hybrid Man and Machine New Jobs Report is looking good. Image by Losey in Steampunk style

The global employers polled favored a hybrid approach, as did the World Economic Forum’s expert analysis. Generative AI requires human supervision now, and certainly will for the next five years. To quote the WEF Report again, at the previously mentioned Box 3.1, page 44:

Skills rooted in human interaction – including empathy and active listening, and sensory processing abilities – and manual dexterity, endurance and precision, currently show no substitution potential due to their physical and deeply human components. These findings underscore the practical limitations of current GenAI models, which lack the physicality to perform tasks that require hands-on interaction – although advances in robotics and the integration of GenAI into robotic systems could impact this in the future.

After that, as AI skills improve, there will, in our opinion and that of the WEF, still be a need for human collaboration. The need and the human skills required will change but will always remain. That is primarily because we are corporeal, living beings, with emotions, intuitions, and other unique human powers. Machines are not. See my essay, The Human Edge: How AI Can Assist But Never Replace (1/30/25). Even if AI were to someday become conscious, and have some silicon-based corporeality, perhaps even a kind of chemical-based feeling, it would need to work with us to gain our unique human feel, our vibes, and our evolutionary connection with life on Earth. Real life in time and space, not just a simulation.

Shiny factory floor with steam escaping,
Factory floor with cool AI features in steampunk style. Click here to see video by Losey.

PODCAST

As usual, we give the last words to the Gemini AI podcasters, who chat between themselves about the article. It is part of our hybrid multimodal approach. They can be pretty funny at times and have some good insights, so you should find it worth your time to listen. Echoes of AI: Panel of Experts for Everyone About Anything – Part Three: Demo of 4o as Panel Driver. Hear two fake podcasters talk about this article for 18 minutes. They wrote the podcast, not me. For the first time we also offer a Spanish version here.

Echoes of AI, "Panel of Experts for Everyone About Anything - Part 3" directed and verified by Ralph Losey.
Click here to listen to the English version of the podcast.

Ralph Losey Copyright 2025 – All Rights Reserved


Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.

Author

  • Ralph Losey headshot

    Ralph Losey is a writer and practicing attorney specializing in providing services in Artificial Intelligence. Ralph also serves as a certified AAA Arbitrator. Finally, he's the CEO of Losey AI, LLC, providing non-legal services, primarily educational services pertaining to AI and creation of custom GPTs. Ralph has long been a leader among the world's tech lawyers. He has presented at hundreds of legal conferences and CLEs around the world and written over two million words on AI, e-discovery, and tech-law subjects, including seven books. Ralph has been involved with computers, software, legal hacking, and the law since 1980. Ralph has the highest peer AV rating as a lawyer and was selected as a Best Lawyer in America in four categories: E-Discovery and Information Management Law, Information Technology Law, Commercial Litigation, and Employment Law - Management. For his full resume and list of publications, see his e-Discovery Team blog. Ralph has been married to Molly Friedman Losey, a mental health counselor in Winter Park, since 1973 and is the proud father of two children.

    View all posts