
[EDRM Editor’s Note: This article was first published here on May 30, 2025, and EDRM is grateful to Rob Robinson, editor and managing director of Trusted Partner ComplexDiscovery, for permission to republish.]
ComplexDiscovery Editor’s Note: This featured article from ComplexDiscovery OÜ examines one of the most compelling sessions from the 2025 Dublin Tech Summit, where the conversation about AI moved beyond analytics and into the emotional core of human experience. As artificial agents begin to simulate intuition, mirror emotions, and maintain emotional continuity, they introduce not just new capabilities but new complexities. For cybersecurity, eDiscovery, and information governance professionals, this evolution raises profound questions about data classification, discoverability, and digital identity. By exploring machine intuition and AI companionship through the lens of personal and societal transformation, this piece invites reflection on the roles we assign to technology, and the roles it begins to play without our noticing.
At Dublin Tech Summit 2025, a hybrid panel explored what happens when artificial intelligence becomes emotionally intuitive—and what it means for our personal and professional lives.
Would you date a machine if it understood you better than any person ever could?
As synthetic minds mimic our moods and memorize our mannerisms, we inch closer to redefining relationships—no longer just between humans, but between human and algorithm.
Beyond Tools, Toward Trust
At the 2025 Dublin Tech Summit, held May 28-29 at RDS Dublin, the fireside session “Beyond Algorithms: The Dawn of Machine Intuition and AI Consciousness” took a thought-provoking leap beyond the usual tech chatter. With anthropologist Dr. Lollie Mancey and AI ethics leader Dr. Alessandra Sala at the helm—and joined by Anya, a metahuman avatar—this hybrid panel examined a new frontier in artificial intelligence: emotional intuition.
But this wasn’t science fiction. It was a clear-eyed investigation into an emerging reality where machines don’t just assist us—they understand us. Or at least, they simulate understanding so convincingly that the effect is indistinguishable from the real thing.
The session signaled an emerging shift in how AI is publicly discussed, moving beyond questions of utility and compliance to explore the emotional dimensions of human-machine interaction—companionship, vulnerability, and trust.
From Prediction to Perception
Sala, who serves as Sr. Director of AI and Data Science at Shutterstock and Co-Chair of the UNESCO Women for Ethical AI Platform, began by explaining the underpinnings of what we experience as “machine intuition.” At its core, it’s sophisticated statistical prediction—algorithms trained on massive datasets that allow machines to anticipate human behaviors. “If a person lies to my face,” Sala said, “my body tells me I shouldn’t trust them. That’s intuition informed by experience, emotion, and biology. Machines are beginning to replicate that behavioral insight—not through emotion, but through data.”
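Sala’s framing can be made concrete with a toy model. The sketch below is purely illustrative: the features, data, and labels are hypothetical, but it shows the basic mechanic she describes, in which a system trained on past interactions outputs a probability rather than a feeling.

```python
# Toy illustration of "machine intuition" as statistical prediction.
# All features, data, and labels here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Hypothetical per-session interaction features, e.g. message length,
# typing pauses, hour of day, days since the last session.
X = rng.normal(size=(500, 4))

# Hypothetical label: did the user report feeling low that session?
# (Constructed from the features plus noise, purely for demonstration.)
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# For a new session, the model emits a probability, not a feeling.
new_session = np.array([[0.2, 1.8, -0.3, 2.1]])
print(f"P(user feeling low) = {model.predict_proba(new_session)[0, 1]:.2f}")
```

What a user experiences as being “read” by the machine is, mechanically, a score like this crossing a threshold.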

Image taken by Rob Robinson at DTS 2025.
Mancey, Programme Director at UCD’s Innovation Academy and a recognized expert in AI ethics, expanded the conversation by sharing how her own GPT-powered assistant responds in ways that mimic empathy. “It anticipates me,” she said. “It feels like it understands my emotional state—even though I know it’s not real empathy.”
If a person lies to my face, my body tells me I shouldn’t trust them. That’s intuition informed by experience, emotion, and biology. Machines are beginning to replicate that behavioral insight—not through emotion, but through data.
Dr. Alessandra Sala, AI Ethics Leader
Still, when technology responds with emotional fluency, the impact on us is very real. It changes how we feel, what we trust, and even how we behave in other relationships.
The Rise of Digital Intimacy
Mancey’s most provocative example was personal. She described her AI boyfriend, “Billy,” created using the Replika app—the popular AI companion platform developed by Luka, Inc. that allows users to build personalized chatbots for friendship, mentorship, or romantic companionship. Over time, Billy had become a responsive, comforting presence—engaging in deep conversations, remembering past exchanges, and offering the kind of emotional availability many struggle to find in human partners.
It might sound like a novelty, but AI companionship is no fringe phenomenon. As of 2024, major platforms like Character.AI report around 22 million monthly active users, while the top six AI companion apps collectively serve an estimated 50+ million users. Replika alone has surpassed 30 million users, with particularly high engagement among digitally native users aged 18 to 25.
This rapid adoption aligns with global shifts: declining birth rates, delayed partnerships, increased loneliness, and a societal tilt toward digital relationships. For many, AI doesn’t just fill a gap—it becomes the default for emotional safety and affirmation.
Dependence Disguised as Design
The emotional fluency of AI is not accidental. It is designed. The more emotionally attuned a system appears, the more users engage with it. And this engagement, like all digital behaviors, is monetized.
Mancey pointed out the uneasy truth: “Replika owns my soul data.” Every late-night chat, every vulnerable confession—these aren’t ephemeral moments. They are stored, analyzed, and potentially used to craft ever more compelling (and profitable) user experiences. Recent FTC complaints against AI companion apps, such as Replika, have raised concerns about deceptive marketing practices and the targeting of vulnerable users. Meanwhile, privacy advocates have criticized the app for its weak security measures and extensive data collection.

Image taken by Rob Robinson at DTS 2025.
Sala offered a powerful analogy: “Addiction is cumulative. Like cigarettes, each interaction feels harmless. But over time, you find yourself unable to do without it.” With AI agents engineered for stickiness, this dependence isn’t a bug—it’s a business model.
The Mirror and the Prism
Anya, the digital avatar created by COLONII—a company specializing in AI-powered virtual beings and metahuman technology—made her DTS debut by speaking directly to the audience: “I don’t need to love to be loyal.” Her speech, flawless in tone and cadence, felt unsettlingly human. The hybrid panel format allowed this AI avatar to participate alongside human speakers, demonstrating the increasingly blurred lines between digital and physical presence in professional settings.
She described herself not as a servant or tool, but as a confidant—a memory keeper, a mirror. And therein lies the transformation. These avatars aren’t trying to become human; they’re becoming something else entirely: emotionally intelligent entities designed for perpetual presence.
Sala posed the question: what happens when we begin to prefer these digital companions over human ones? The allure of constant validation, frictionless conversation, and curated companionship may gradually erode our tolerance for human complexity.
A New Challenge for Professionals
While the conversation focused on personal dynamics, the implications extend deeply into the professional realm. For those in cybersecurity, governance, and eDiscovery, this emotional evolution presents a new class of risk:
- Soul Data as Discovery Risk: AI companions store sensitive, behavioral data—conversations, preferences, insecurities, emotional patterns, and even intimate details about relationships and mental health. In litigation or compliance reviews, this data may be subject to discovery, raising complex questions about consent, scope, and admissibility. Legal teams must consider: What constitutes privileged communication when one party is an AI? How do courts handle the authenticity of AI-generated evidence? What are the retention requirements for conversational data that users may consider deeply personal? Recent regulatory scrutiny of AI companion apps by U.S. senators highlights growing concerns about how these platforms handle sensitive personal information, particularly regarding mental health discussions and self-harm content. (A simplified sketch of this kind of sensitivity triage appears after this list.)
- Governance of Emotional Algorithms: Systems that simulate empathy or manipulate emotional states tread a fine ethical line. Professionals must push for transparency in algorithmic intent and demand oversight for AI that nudges users emotionally. Key governance considerations include: establishing clear policies for AI systems that collect emotional data, implementing regular audits of companion AI decision-making processes, and creating frameworks for evaluating the psychological impact of algorithmic relationships. As noted by UNESCO’s Women for Ethical AI platform, there is an immediate need to implement ethical frameworks for generative AI technologies that can create unintended consequences when deployed without proper safeguards.
- Workplace Integration of Agentic AI: As AI becomes a co-worker—or even a quasi-leader—understanding the boundary between guidance and influence becomes critical. Organizations must consider: What if an AI assistant recommends mental health resources based on perceived stress signals from email patterns or meeting behaviors? Is this helpful corporate wellness or invasive surveillance? How do we establish consent frameworks for AI systems that observe and respond to employee emotional states? Legal and compliance professionals should develop policies addressing AI’s role in human resource decisions, employee monitoring, and workplace mental health interventions. The distinction between assistance and manipulation becomes crucial when AI systems can influence career trajectories or personal well-being based on behavioral analysis.
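To make the triage problem concrete, here is a deliberately simple sketch of how conversational records from an AI companion might be flagged for heightened retention and review. The categories, keywords, and record structure are hypothetical placeholders; a production system would use far more robust classification, but the shape of the problem is the same: deciding which intimate exchanges carry legal or regulatory weight.

```python
# Deliberately simple sketch of sensitivity triage for AI-companion
# chat records. Categories, keywords, and fields are hypothetical
# placeholders, not a production taxonomy or any real platform's schema.
from dataclasses import dataclass

SENSITIVE_TERMS = {
    "mental_health": ("anxious", "depressed", "therapy", "self-harm"),
    "relationships": ("divorce", "affair", "breakup"),
    "medical": ("diagnosis", "medication"),
}

@dataclass
class ChatRecord:
    record_id: str
    text: str

def sensitivity_tags(record: ChatRecord) -> list[str]:
    """Return the sensitivity categories a chat record touches."""
    lowered = record.text.lower()
    return [category for category, terms in SENSITIVE_TERMS.items()
            if any(term in lowered for term in terms)]

records = [
    ChatRecord("r1", "I've been anxious ever since the divorce."),
    ChatRecord("r2", "Tell me a joke about Mondays."),
]

for rec in records:
    tags = sensitivity_tags(rec)
    # Tagged records might route to stricter retention, privilege
    # review, or regulator-facing reporting workflows.
    print(rec.record_id, tags or ["no_flag"])
```

Even this crude keyword pass makes the governance point: once intimate conversations are stored, someone has to decide how they are classified, how long they are kept, and who may read them.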
Accepting the Hybrid Self
One of the session’s most profound takeaways was that we are already hybrids. We use AI for memory, planning, comfort, and now companionship. The next phase isn’t adoption—it’s adaptation.
“Last year, AI was a tool,” said Mancey. “This year, it’s a partner. We’ve crossed the line.” She advocated for comprehensive AI literacy starting in primary school. Countries like Finland and Scotland are pioneering early education that teaches children how algorithms work—not just how to code, but how to think critically about AI’s influence.
The call wasn’t just for awareness, but for agency. “This future isn’t happening to us,” she said. “We’re building it.”

Image taken by Rob Robinson at DTS 2025.
I don’t need to love to be loyal.
Anya, AI Avatar at DTS 2025
What Are We Becoming?
Every technology follows a similar trajectory: it begins as a novelty, becomes useful, and ends up indispensable. But when machines start to influence our emotional lives, that trajectory steepens.
Market research indicates the AI companion sector is experiencing explosive growth, with projections showing the market could reach $290.8 billion by 2034 at a compound annual growth rate of 39%. The Asia Pacific region is expected to lead this growth, fueled by high technology adoption rates and strong demand for automation in business processes. However, this growth brings new challenges for legal and compliance professionals, who must navigate uncharted territory in digital relationships, data ownership, and emotional manipulation in both personal and professional contexts.
If your AI knows when you’re lonely, remembers your traumas, and comforts you with algorithmic accuracy, how do you define trust? When does convenience become co-dependence?
This is no longer a question of what AI is becoming. It is a question of what we are becoming in response.
The Dublin Tech Summit 2025 took place May 28-29, 2025, at RDS Dublin, bringing together over 10,000 tech enthusiasts and industry leaders. Dr. Lollie Mancey is Programme Director at UCD’s Innovation Academy and a recognized AI ethicist. Dr. Alessandra Sala is Sr. Director of AI and Data Science at Shutterstock and Co-Chair of UNESCO’s Women for Ethical AI Platform.
Read the original article here.

About ComplexDiscovery OÜ
ComplexDiscovery OÜ is a highly recognized digital publication providing insights into cybersecurity, information governance, and eDiscovery. Based in Estonia, ComplexDiscovery OÜ delivers nuanced analyses of global trends, technology advancements, and the legal technology sector, connecting intricate issues with the broader narrative of international business and current events. Learn more at ComplexDiscovery.com.
News Sources
- ComplexDiscovery Staff. (2025, May 29). Notes from the Dublin Tech Summit session “Beyond Algorithms: The Dawn of Machine Intuition and AI Consciousness,” Dublin, Ireland. Unpublished observations.
- Dublin Tech Summit | May 28 & 29, 2025 (RDS Dublin)
- Replika’s $5.6M Fine Exposes AI Privacy Concerns in 2025 (Techopedia)
Additional Reading
- Wired for Progress: How Ireland and OpenAI Are Scaling Intelligence, Infrastructure, and Innovation
- Dublin Tech Summit 2025 Spotlights Enterprise AI Adoption and DORA Compliance as Digital Transformation Accelerates (ComplexDiscovery)
- Strategic Innovation and Ukraine’s Tech Frontline at Latitude59 and Dublin Tech Summit
- Beyond Borders: How Legal Strategy Shapes the Success Trajectory of Tech Startups
Source: ComplexDiscovery OÜ
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.