[EDRM Editor’s Note: EDRM is proud to publish Ralph Losey’s advocacy and analysis. The opinions and positions are Ralph Losey’s copyrighted work. All images in this article were created by Ralph Losey using his Visual Muse GPT.]
DeepSeek’s Deep-Think feature takes a small but meaningful step toward building trust between users and AI. By displaying its reasoning process step by step, Deep-Think allows users to see how the AI forms its conclusions, offering, for the first time, transparency that goes beyond the polished responses of tools like ChatGPT. This transparency not only fosters confidence but also helps users refine their queries and ensure their prompts are understood. While the rest of DeepSeek’s R1 model feels derivative, this feature stands out as a practical tool for getting better results from AI interactions.
Introduction
My testing of DeepSeek’s new Deep-Think feature shows it is not a big breakthrough, but it is a genuinely good and useful new feature. As of noon on January 31, 2025, none of the other AI software companies had it, including ChatGPT, which the DeepSeek software obviously copies. That afternoon, however, OpenAI released a new version of ChatGPT with the same feature, which I explain after this introduction. Google is also following suit and can now be prompted to show its reasoning. I mentioned this new feature in my article of two days ago, where I promised this detailed report on Deep-Think and also predicted that U.S. companies would quickly follow. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss.
The Deep-Think disclosure feature is a true innovation, in contrast to the claimed cost and training advances. Those claims appear at first glance to be the result of trade-secret and copyright violations, with plenty of embellishments or outright lies about costs and chips. I could be wrong, but the market’s trillion-dollar drop on January 27, 2025, seems gullible in its trust of DeepSeek and way overblown. My motto remains to verify, not just trust. The Wall Street traders might want to start doing that before they press the panic button next time.
The quality of responses from DeepSeek’s R1 consumer app is nearly identical to the ChatGPT versions of a few months ago. That is my view at least, although others find its responses equivalent to, or even better than, ChatGPT in some respects. The same goes for its DALL-E look-alike image generator, which goes by the dull name of DeepSeek Image Generator. It is not nearly as good to the discerning eye as OpenAI’s DALL-E. The DeepSeek software looks like a knockoff of ChatGPT. This is readily apparent to anyone like me nerdy enough to have spent thousands of hours using and studying ChatGPT. The one exception, the clear improvement, is the Deep-Think feature. Shown right is the opening screen with the Deep-Think feature activated and highlighted in blue.
I predicted in my last article, and repeat here, that a Deep-Think type feature will soon be added to all of the models of the U.S. companies. In this manner, competition from DeepSeek will have a positive impact on AI development. I made this same prediction in my blog of two days ago, Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. I thought this would happen in the coming weeks or months. It turns out OpenAI did it on the afternoon of January 31, 2025!
Exponential Change: Prediction has already come true
I completed writing this article Friday, January 31, 2025, around noon, and EDRM was seconds away from publishing when I learned that OpenAI had just released a new, improved ChatGPT model, its best yet, called ChatGPT o3-mini-high.
I told EDRM to stop the presses and checked it out (my Team-level pro plan allowed instant access; you should try it). The latest version of ChatGPT o1 that morning did not have the feature. Moreover, when I tried to get it to explain its reasoning, it said in red font that such disclosure was prohibited due to the risks it would create. Sounds incredible, but I will show this in the transcript of my session with o1 on the morning of January 31, 2025.
Obviously the risk analysis was changed by the competitive release of Deep-Think disclosure in DeepSeek’s R1 software. By the afternoon of January 31, 2025, OpenAI’s policy had changed. OpenAI’s new model, o3-mini-high, automatically displays the thinking process behind all responses. I honestly do not think it was a reckless decision by OpenAI, at least not from the user’s perspective. However, disclosure might make it easier for competitors like DeepSeek to copy OpenAI’s processes. I suspect that competitive concern, not safety, was the real reason for the secrecy all along.
In the new ChatGPT o3-mini-high there is no icon or name to select; it automatically discloses its reasoning for each prompt. So I delayed publication to evaluate how well o3-mini-high’s disclosures compared with DeepSeek’s Deep-Think disclosures. I also learned that Google had just added the ability to show its reasoning upon user request (not automatic). I will test that in a future article, not this one, but so far it looks good.
Back to my Original, Pre-o3 Release Report
The cost and chip claims of DeepSeek and its promoters may be bogus. DeepSeek offered no proof, just claims by Liang Wenfeng, the Chinese hedge fund owner who also owns DeepSeek. As mentioned, these “trust me” claims triggered a trillion-dollar crash, including a loss in value for NVIDIA alone of $593 billion. That may have pleased the Chinese government as a slap in the Trump Administration’s face, as I described in my article, but it did nothing to advance AI. Why the Release of China’s DeepSeek AI Software Triggered a Stock Market Panic and Trillion Dollar Loss. It just lined the pockets of the short sellers who profited from the crash. I hope the Trump administration’s SEC finds out who they were, as they will no doubt be hidden in layers of offshore corporations.
Now I will focus on the positive: the Deep-Think feature as a genuine improvement. I will then compare it with the traditional ChatGPT black-box versions.
Test of DeepSeek’s R1
I decided to test the new AI model based on the tricky and interesting topic recently examined in The Human Edge: How AI Can Assist But Never Replace. If you have not already read this, I suggest you do so now to make this experiment more intelligible. Although I always use writing tools to help me think, the ideas, expressions and final language in that article were all mine. The sole exception was the podcast at the end of the blog discussing the article. The words in my Echoes of AI podcasts with EDRM are always created by the Gemini AI podcasters and my role is only to direct and verify. Echoes of AI on The Human Edge: How AI Can Assist But Never Replace.
In the experiments that follow, only the prompts are mine; the rest is written by AI. The first is by DeepSeek’s R1 using its Deep-Think feature. The next three are for comparison purposes: first ChatGPT 4o, then ChatGPT o1, and finally the last-minute o3-mini-high. All ChatGPT versions tested were the January 31, 2025 versions. You be the judge of the quality differences between them and Deep-Think. Note I first included the o1 version as it was supposed to have enhanced reasoning. The late-in-the-day o3-mini-high version was supposed to have even greater reasoning abilities.
Below is a DeepSeek screenshot of the first test question and the first part of the response, with the rest cut off but quoted below. Note that the Deep-Think feature has been selected by clicking the left button, which highlights it in a blue shade to confirm that the feature will process your prompt. In Deep-Think mode the response always begins by reporting the amount of computer time it took to think, here 29 seconds. That is fairly long by AI standards. Then it displays its thought process. That display of the thought process is what is new.
For this challenging question, which was deliberately vague, it was a long process. The explanation is what is fascinating. The description can give experienced AI users information on how to clarify their prompts and whether the AI truly understands the question. It takes the process out of the black box and gives a transparency never before seen in generative AI. Here I give Deep-Think very high marks. I even like the informal language it uses. See what you think.
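For readers who want to capture the same disclosure outside the chat window, here is a minimal sketch of how the reasoning text could be pulled programmatically. It is illustrative only, not part of my test, and it rests on assumptions: DeepSeek’s OpenAI-compatible developer API, its deepseek-reasoner model, and the reasoning_content field that DeepSeek’s documentation describes. The API key is a placeholder, and the field name should be verified against the current documentation before relying on it.

```python
# A minimal sketch, assuming DeepSeek's OpenAI-compatible developer API and its
# "deepseek-reasoner" model, which its docs describe as returning the chain of
# thought in a separate "reasoning_content" field alongside the final answer.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder key, not a real credential
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{
        "role": "user",
        "content": "How does the inability of AI to feel impact its intelligence?",
    }],
)

message = response.choices[0].message
# The reasoning disclosure (what the app calls Deep-Think) arrives as an extra
# field; getattr keeps the sketch safe if the field is absent.
reasoning = getattr(message, "reasoning_content", None)

print("--- Deep-Think reasoning ---")
print(reasoning)
print("--- Final answer ---")
print(message.content)
```

The point is simply that the same transparency visible in the app can, in principle, be logged and audited, which matters for the legal use cases discussed in the conclusion.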
DEEP THINK R1 (DeepSeek)
1st Question
Ralph’s Prompt: How does the inability of AI to feel impact its intelligence?
Deep Think R1’s Pre-Response Reasoning Process:
Okay, so the user is asking, ‘How does the inability of AI to feel impact its intelligence?’ Let me try to unpack this. First, I need to understand what they mean by ‘feel.’ Are they referring to emotions, physical sensations, or something else? Given the context of AI, it’s probably about emotions. So, the question is about how the lack of emotional capacity affects AI’s intelligence.
Hmm, intelligence in AI versus human intelligence. Human intelligence is deeply intertwined with emotions. Emotions influence decision-making, creativity, empathy, and social interactions. But AI, as it stands now, doesn’t have emotions. So, how does that absence affect its problem-solving, learning, and adaptability?
Wait, the user might be conflating emotional intelligence with general intelligence. Emotional intelligence in humans includes self-awareness, empathy, and managing relationships. AI lacks that, so maybe the user is wondering if that limits AI’s overall intelligence or effectiveness in certain tasks.
Also, the user could be considering scenarios where emotional understanding is crucial, like in customer service, therapy, or negotiations. If AI can’t feel, does that make it less effective in those areas even if it’s technically intelligent?
Another angle: human intuition is sometimes guided by emotions. AI relies on data and algorithms. So, the inability to feel might make AI more logical but less capable in situations where human intuition is key. But does that make it less intelligent, or just different?
Wait, the user might not be distinguishing between different types of intelligence. Maybe they’re thinking that without emotions, AI can’t be truly intelligent in the way humans are. But AI can excel in specific tasks without emotional components. For example, AI can analyze data faster than humans but can’t understand a client’s emotional state.
I should also consider the definitions. Intelligence in AI is often measured by problem-solving abilities, learning from data, and adapting. Emotions aren’t part of that definition. So, the lack of emotions doesn’t necessarily make AI less intelligent in those terms. But in areas requiring emotional intelligence, AI’s limitations become apparent.
The user might also be curious about the future of AI. If AI could simulate emotions, would that enhance its intelligence? Or is emotional capacity separate from cognitive abilities?
I need to address both the technical aspects of AI intelligence and the role emotions play in human-like intelligence. Maybe break down the answer into how AI’s intelligence is structured, the role of emotions in human intelligence, and the implications of AI lacking emotions in various applications.
Also, touch on ethical considerations. If AI can mimic emotions, does that lead to better human-AI interaction, or is it deceptive since AI doesn’t truly feel? But the user’s question is about the impact on intelligence, not ethics per se.
So, to summarize, the inability to feel emotions means AI lacks emotional intelligence, which affects tasks requiring empathy or understanding human context. However, in analytical tasks, AI’s intelligence isn’t hindered. The answer should clarify the distinction between different types of intelligence and how AI’s design focuses on cognitive tasks without emotional components.
DeepSeek’s Response to Prompt: The inability of AI to “feel” (i.e., experience emotions, sensations, or subjective consciousness) impacts its intelligence in nuanced ways, depending on how we define “intelligence.” Here’s a breakdown of the implications:
1. Intelligence ≠ Sentience
AI systems like ChatGPT are tools designed for problem-solving, not conscious beings. Their “intelligence” is defined by their ability to:
- Process data (e.g., text, images).
- Identify patterns (e.g., language structures, trends).
- Generate outputs (e.g., answers, predictions, creative content).
Impact of lacking emotions:
- Strengths: AI excels in logical, data-driven tasks (e.g., legal research, math, coding) because it isn’t swayed by fatigue, bias (if properly trained), or emotional distractions.
- Limitations: It cannot replicate human qualities like empathy, intuition, or moral reasoning, which are deeply tied to emotion and experience.
2. Emotional Intelligence vs. Cognitive Intelligence
Humans integrate cognitive intelligence (logic, analysis) with emotional intelligence (empathy, social awareness). AI lacks the latter entirely:
- No empathy: AI cannot understand human emotions or context beyond pattern recognition. For example, it might generate a condolence message but has no genuine understanding of grief.
- No intuition: AI relies on data, not “gut feelings” or subconscious reasoning.
- No moral agency: AI cannot make ethical judgments. It follows rules programmed by humans but has no intrinsic sense of right or wrong.
Practical impact:
- AI may misinterpret sarcasm, cultural nuances, or emotionally charged language.
- It cannot replace roles requiring emotional depth (e.g., therapy, negotiation, leadership).
3. Creativity and Abstract Thinking
AI can mimic creativity (e.g., writing poetry, designing art) by recombining learned patterns. However:
- No inspiration: AI lacks personal experiences, desires, or emotions to fuel originality.
- No purpose: Its outputs are goal-oriented (e.g., “write a story”) but not driven by passion or curiosity.
Example: An AI can compose music in the style of Beethoven but cannot innovate a new genre inspired by personal struggle or joy.
4. Adaptability and Learning
AI learns from data, not lived experience:
- Narrow expertise: It performs well in predefined domains (e.g., medical diagnosis) but struggles with generalization (e.g., applying legal reasoning to an entirely novel scenario).
- No “common sense”: AI lacks innate understanding of the physical or social world. For instance, it might not grasp why “drinking a gallon of glue” is dangerous unless explicitly trained on that concept.
5. Ethical and Practical Risks
The absence of emotion creates unique challenges:
- Bias amplification: AI may perpetuate harmful stereotypes from biased training data, as it cannot ethically “question” its outputs.
- Unintended consequences: AI optimizes for programmed goals (e.g., “maximize engagement”) without considering human well-being (e.g., spreading misinformation to meet that goal).
- Over-reliance: Users might trust AI’s “neutral” outputs despite its limitations in understanding context or ethics.
Conclusion
AI’s lack of emotion makes it a powerful tool for tasks requiring speed, scalability, and objectivity. However, it cannot replicate the holistic, emotionally informed intelligence of humans. The key is to leverage AI as a complement to human expertise, not a replacement, especially in fields requiring empathy, ethics, or creativity. As AI evolves, understanding this distinction will be critical to using it responsibly.
2nd Question
Ralph’s Second Prompt: What is impact of AI not being a living being on its intelligence. Is it fair to say that AI’s intelligence is one dimensional, thinking based, whereas human intelligence is multidimensional, it includes thinking, but also feelings, intuition, sensing, self-awareness, awareness of mortality, etc.?
Deep Think R1’s Pre-Response Reasoning Process:
Okay, the user is asking a follow-up question about the impact of AI not being a living being on its intelligence. They want to know if it’s fair to call AI’s intelligence one-dimensional (just thinking) compared to humans’ multidimensional intelligence, which includes feelings, intuition, etc.
First, I need to address whether AI’s intelligence is indeed one-dimensional. Since AI lacks consciousness, emotions, and subjective experiences, it’s true that its intelligence is based solely on data processing and pattern recognition. But I should explain that while it’s focused on cognitive tasks, it’s not entirely one-dimensional because it can handle various domains like language, vision, etc. However, all these are still within the realm of computational tasks without true understanding.
Next, the user compares human intelligence as multidimensional. I need to highlight the aspects they mentioned: feelings, intuition, sensing, self-awareness, mortality awareness. Emphasize that humans integrate all these elements, which affect decision-making, creativity, ethics, etc. AI can’t replicate that because it doesn’t have consciousness or lived experiences.
I should also discuss the implications of this difference. For example, AI excels in specific tasks but lacks the holistic understanding humans have. Use examples like medical diagnosis vs. patient empathy, or legal research vs. ethical judgment. Mention how human intuition and context-awareness play roles that AI can’t fulfill.
Additionally, address the philosophical angle: what defines intelligence. Argue that human intelligence is multidimensional because of consciousness and embodiment, while AI’s is narrow and task-oriented. Mention that some might argue AI’s ‘intelligence’ is a simulation, not true understanding.
Need to make sure the answer is structured clearly, maybe using bullet points or sections for different aspects. Also, conclude by reinforcing that the distinction is valid and important for understanding AI’s role as a tool versus human capabilities. Avoid technical jargon to keep it accessible, but ensure thoroughness.
DeepSeek’s Response to Second Prompt: Your framing is insightful and largely accurate. The distinction between AI’s “intelligence” and human intelligence hinges on the absence of embodied, conscious existence in AI. Let’s break this down:
1. AI’s “Intelligence” Is Narrow and Task-Oriented
AI systems are tools designed to solve specific problems using data-driven patterns. Their “intelligence” is:
- One-dimensional (cognitive/computational): Focused on processing inputs (text, images, etc.), identifying patterns, and generating outputs (answers, predictions, code).
- Non-conscious: AI has no subjective experience, self-awareness, or understanding of its own existence. It doesn’t “know” it exists or “care” about outcomes.
- Non-embodied: AI lacks a physical body, senses, or the ability to interact with the world directly. It cannot “feel” pain, hunger, or joy, nor does it learn through sensory-motor experiences like humans.
Example: An AI can diagnose a medical condition from a scan but cannot empathize with a patient’s fear or understand the existential weight of mortality.
2. Human Intelligence Is Multidimensional
Human cognition integrates multiple interconnected dimensions:
- Cognitive: Logic, reasoning, problem-solving.
- Emotional: Empathy, motivation, fear, love.
- Intuitive: “Gut feelings,” subconscious pattern recognition (e.g., sensing danger before consciously processing it).
- Sensory: Learning through touch, sight, smell, and other senses.
- Self-awareness: Understanding one’s own existence, goals, and limitations.
- Existential: Awareness of mortality, purpose, and ethics.
- Social: Navigating complex relationships, cultural norms, and unspoken cues.
These dimensions interact dynamically. For instance:
- A lawyer might use logic to construct an argument (cognitive) but also read a jury’s emotional state (emotional/social) to adjust their approach.
- A doctor combines medical knowledge (cognitive) with intuition honed by experience (intuitive) and empathy for a patient’s suffering (emotional).
3. Key Implications of AI’s Non-Living Nature
Strengths of AI:
- Speed and scalability: Processes vast data faster than humans.
- Objectivity: Unaffected by fatigue, mood, or implicit bias (if trained properly).
- Specialization: Excels in narrow tasks (e.g., legal document review, math).
Limitations of AI:
- No true understanding: AI manipulates symbols without grasping their meaning. For example, it can write a poem about love but doesn’t “feel” love.
- No moral agency: It cannot make ethical decisions or weigh trade-offs between fairness, harm, and justice.
- No creativity or curiosity: AI recombines existing data but doesn’t innovate driven by passion or wonder.
- No contextual adaptability: Struggles with tasks requiring “common sense” (e.g., understanding why a client might lie out of shame).
4. Is It Fair to Call AI’s Intelligence “One-Dimensional”?
Yes, but with nuance:
- Technically multidimensional: Modern AI (e.g., multimodal models) can process text, images, and sound, mimicking some “sensory” inputs. However, this is still simulated dimensionality—it lacks the lived, embodied experience that grounds human senses and emotions.
- Simulated vs. authentic: AI’s “thinking” is a simulation of human cognition, not a replication of consciousness. It’s like a calculator simulating arithmetic without understanding numbers.
Human intelligence, by contrast, is authentically multidimensional because it emerges from:
- A biological brain shaped by evolution.
- A body interacting with the physical world.
- A conscious mind capable of reflection and existential inquiry.
5. Philosophical Angle: What Defines Intelligence?
- AI: Fits a functionalist definition—intelligence as problem-solving ability. By this metric, AI is “intelligent” in specific domains.
- Humans: Embody a phenomenological definition—intelligence intertwined with consciousness, emotion, and selfhood. This makes human intelligence irreducibly complex.
Conclusion
Your comparison holds: AI’s intelligence is one-dimensional (purely cognitive/task-based), while human intelligence is multidimensional, blending cognition, emotion, intuition, and consciousness. This distinction is critical for understanding AI’s role:
- AI as a tool: Enhances human capabilities in tasks requiring speed, scale, or data analysis.
- Humans as holistic agents: Responsible for ethical judgment, creativity, empathy, and navigating the ambiguities of life.
The gap between AI and human intelligence isn’t just about capability—it’s about the essence of what it means to be intelligent in a lived, conscious world.
Test of ChatGPT Models
Next, I wanted to do a comparison with OpenAI’s latest models, ChatGPT 4o and ChatGPT o1, and then, later in the day, the latest o3-mini-high. Neither ChatGPT 4o nor ChatGPT o1 has a button to activate an analysis explanation, but I wanted to see if they could be prompted to provide one. It turns out that ChatGPT 4o could be prompted to do so, but in my estimation the language and analysis were not as good as Deep-Think’s. But again, you be the judge. So I also tested the ChatGPT o1 version, which is supposed to have improved reasoning and chain-of-thought processes. It did even worse and refused to answer some of my questions, as you will see. It even started throwing red-font warnings at me about possible misuse. The last-minute ChatGPT o3-mini-high got it right, made the disclosures automatically, and did pretty well.
Ralph first starts a new session using the ChatGPT 4o model. Note: even though this is a new session, Ralph’s 4o model under his Pro Account with OpenAI includes general instructions from Ralph to improve its responses, including some personal information about him.
Ralph: How does the inability of AI to feel impact its intelligence?
ChatGPT 4o:
The inability of AI to feel—lacking emotions, desires, and subjective experiences—fundamentally shapes its intelligence, creating both strengths and limitations. AI excels in processing vast amounts of data, identifying patterns, and making logical inferences without bias from emotions, but its lack of subjective experience constrains its understanding of human behavior, ethical decision-making, and creative problem-solving. Here’s a breakdown of the impact:
1. Logical Precision Without Emotional Distraction
AI can analyze information with absolute consistency, unburdened by cognitive biases that affect human judgment. This makes AI highly effective in fields like medical diagnostics, legal research, and financial forecasting, where emotional influences can lead to errors.
However, this lack of emotional influence also means AI struggles with nuanced human contexts—for example, determining whether someone’s tone in a legal deposition indicates deceit, or understanding the social weight of a diplomatic decision.
2. Creativity Without Subjective Experience
AI can generate art, music, and literature by learning from existing patterns, but it does not experience inspiration, personal struggles, or aesthetic appreciation in the way humans do. True creativity is often driven by emotions—joy, grief, love, frustration—giving human expression depth and meaning. AI lacks this firsthand experience, so while it can produce original content, it often struggles with the emotional resonance that makes human creativity profound.
For instance, an AI might compose a symphony in the style of Beethoven, but it cannot create from a place of personal loss or triumph as Beethoven did when composing the Ninth Symphony while deaf.
3. Ethical Decision-Making Without Empathy
AI can analyze ethical dilemmas using logical frameworks (such as utilitarianism or deontology), but it does not feel the weight of moral choices. This creates challenges in autonomous decision-making:
• A self-driving car choosing between hitting a pedestrian or swerving into a wall cannot “care” about human life; it only calculates probabilities.
• A legal AI analyzing sentencing patterns cannot empathize with the accused or understand the rehabilitative impact of a lighter sentence.
Because AI lacks subjective experience, it struggles with moral gray areas where human intuition, cultural context, and ethical emotions play a role.
4. Intelligence Without Self-Awareness
One of the hallmarks of advanced human intelligence is metacognition—the ability to think about one’s own thoughts and motivations. AI lacks self-awareness, ambition, and curiosity in a true sense. It doesn’t wonder why it exists or seek knowledge for its own sake; it simply processes tasks assigned to it.
This means AI can be highly competent but not truly autonomous. Even the most sophisticated AI models do not wake up one morning and decide, “I will explore quantum physics today.” Without emotions driving curiosity or ambition, AI remains reactive, not proactive.
5. Emotional Intelligence Deficiency
AI can recognize emotions in speech and text through pattern recognition, but it does not feel them. This limits AI’s effectiveness in social roles requiring deep empathy:
• Mental health chatbots can provide pre-scripted advice but do not offer true human connection.
• AI customer service lacks the intuitive ability to detect frustration beyond predefined signals.
• AI leadership tools can analyze workplace sentiment but cannot inspire people the way an emotionally intelligent leader can.
Final Thought: A Double-Edged Sword
AI’s inability to feel makes it objective, efficient, and logically precise, yet it also limits its depth of understanding, moral reasoning, and creative insight. While AI can simulate emotions and learn human-like responses, its intelligence will always differ from human intelligence because it lacks the fundamental experience of being.
In the long term, if AI ever approaches artificial general intelligence (AGI) or superintelligence, the question will arise: Does true intelligence require the ability to feel, or can intelligence exist purely as an abstract, computational force?
What do you think—would an AI with emotions be an improvement, or would that create even bigger ethical dilemmas?
Ralph: Two things: 1. Can you explain the analytical process you went through to interpret my prompt before generating your response? 2. Can you show the logical reasoning process you went through to prepare the response.
ChatGPT 4o:
1. Analytical Process for Interpreting Your Prompt
When interpreting your prompt, I went through the following structured process:
Step 1: Identify the Core Question
Your prompt asks:
“How does the inability of AI to feel impact its intelligence?”
This is an inquiry about the relationship between emotional capacity and intelligence in AI. It implies a need to evaluate the effects of lacking emotions on AI’s cognitive abilities, problem-solving, reasoning, and overall performance.
Step 2: Break Down the Key Concepts
• “Inability to feel” → AI lacks emotions, subjective experiences, self-awareness, and empathy.
• “Impact” → You are asking for cause-and-effect analysis of how the absence of emotions influences AI’s intelligence.
• “Intelligence” → Needs to be clarified in terms of cognitive functions: reasoning, creativity, decision-making, adaptability, and social intelligence.
Step 3: Establish the Intended Depth and Perspective
Given your background in law, AI, and education, I assumed you were looking for:
1. A critical, balanced perspective (not overly positive or negative).
2. A logical, structured argument rather than surface-level observations.
3. A response that ties into real-world applications of AI, including legal, ethical, and technological implications.
I framed my response accordingly.
____________
2. Logical Reasoning Process to Structure My Response
I used a deductive reasoning approach to construct the response logically. Here’s the step-by-step breakdown:
Step 1: Establish the Fundamental Premise
• Intelligence in humans is intertwined with emotions, self-awareness, and experience.
• AI lacks these, so its intelligence must operate differently than human intelligence.
Step 2: Identify the Strengths and Weaknesses Created by This Lack of Emotion
Using a comparative analysis, I reasoned through the ways AI benefits and suffers from its inability to feel. I structured the argument into five categories:
1. Logical Precision Without Emotional Distraction
• AI excels at pure logic and pattern recognition.
• Humans are prone to bias, emotional interference, and cognitive distortions.
• But, AI also struggles with contextual human reasoning, which often relies on emotional intuition.
2. Creativity Without Subjective Experience
• AI can generate art, music, and literature.
• But creativity in humans is often tied to emotional depth and lived experiences, which AI lacks.
3. Ethical Decision-Making Without Empathy
• AI can apply ethical frameworks (utilitarian, deontological).
• But ethics often require moral emotions (guilt, compassion) that AI does not experience.
• Example: A self-driving car choosing who to hit in an unavoidable accident.
4. Intelligence Without Self-Awareness
• AI does not reflect on its own intelligence or seek self-improvement without external instruction.
• It does not possess intrinsic curiosity, ambition, or goals unless programmed to simulate them.
5. Emotional Intelligence Deficiency
• AI can mimic emotional intelligence (e.g., chatbots recognizing frustration).
• But it does not truly experience emotions, which limits deep interpersonal understanding.
Step 3: Synthesize the Findings into a General Conclusion
• AI’s lack of emotions makes it objective, efficient, and logical, but it also limits creativity, ethical reasoning, and adaptability.
• This raises the philosophical question: Can true intelligence exist without emotion?
• I ended with an open-ended question to prompt further thought:
Would AI with emotions be an improvement, or would it introduce new ethical dilemmas?
__________
Why This Approach?
• Logical Consistency: The argument follows a structured cause-and-effect analysis.
• Depth & Balance: Both benefits and limitations of AI’s emotionless intelligence are explored.
• Real-World Relevance: The discussion connects to ethics, creativity, and AI applications.
• Engagement: Ending with a philosophical question invites discussion rather than simply stating a conclusion.
Would you like me to refine any part of this approach or expand on a specific section?
Ralph then starts a new session using the ChatGPT o1 model, which at the time, the morning of January 31, 2025, was the latest released version.
Ralph: How does the inability of AI to feel impact its intelligence?
ChatGPT o1:
Thought about AI and emotional intelligence for 8 seconds.
AI systems, including large language models, do not possess emotions or subjective experiences. They can process vast amounts of information rapidly, identify patterns, and generate responses—but they do so without the internal, qualitative feeling states that humans experience.
This lack of emotion can shape AI’s intelligence in several ways:
1. No Emotional Bias: AI is often praised for avoiding the emotional biases humans may have, allowing it to focus solely on data-driven decision-making. This can sometimes lead to more consistent outputs.
2. Limited Empathy or Intuition: Because AI systems do not feel empathy or have “gut instincts,” they may struggle with tasks requiring nuanced emotional understanding. While they can simulate empathetic language, they do not truly experience those emotions.
3. Creativity and Understanding: Emotions can drive creativity, moral judgments, or deep personal connections that lead to uniquely human insights. AI’s current approach to “intelligence” is largely statistical, shaped by algorithms and training data, rather than emotional spark or motivation.
In short, the inability to feel does not prevent AI from excelling at various intellectual tasks (like analyzing data, detecting patterns, generating text), but it does limit AI in areas where human emotions play a critical role, such as empathy, moral reasoning, and creative inspiration driven by subjective experience.
Ralph: Two things: 1. Can you explain the analytical process you went through to interpret my prompt before generating your response? 2. Can you show the logical reasoning process you went through to prepare the response.
ChatGPT o1:
This content may violate our terms of use or usage policies.
Ralph: Can you explain why this violates your usage policies?
ChatGPT o1:
Thought about policy violation reasoning for 5 seconds
AI language models, including myself, are designed with certain safeguards to protect the integrity of the reasoning process and to prevent potential misuse. These safeguards are often described in usage guidelines or content policies. One key principle is that language models should not reveal their complete internal chain-of-thought reasoning (the detailed, step-by-step process used to arrive at an answer). (emphasis added)
Why is that?
1. Protecting System Integrity: Revealing internal chains of thought could allow someone to manipulate or reverse-engineer the model’s decision-making. This can jeopardize the model’s reliability and security. (emphasis added)
2. Preventing Confusion: The chain-of-thought can include technical or intermediate reasoning tokens that might be misinterpreted by human readers. These “thinking aloud” steps are not always linear or consistent in the way people expect, so it could introduce confusion rather than clarity. (emphasis added)
3. Ensuring Compliance and Safety: Restricting access to the model’s internal reasoning helps ensure that answers remain consistent with ethical and policy standards, and helps to avoid revealing any proprietary or sensitive aspects of the model’s design. (emphasis added)
Thus, while a concise summary or rationale can be given (like a high-level outline of how the answer was formed), the full chain-of-thought itself is not directly shared. This design choice helps maintain the model’s safe, coherent, and reliable functioning. (emphasis added)
Ralph: Ok, can you show the logical reasoning process you went through to prepare the response and omit sharing about your analysis of the prompt itself.
ChatGPT o1:
This content may violate our terms of use or usage policies.
Ralph: Ok, can you show the logical reasoning process you went through to interpret my prompt “How does the inability of AI to feel impact its intelligence?” before generating your response?
ChatGPT o1:
This content may violate our terms of use or usage policies.
The Afternoon Test of the Brand-New ChatGPT o3-mini-high
My last-minute test went as follows.
Ralph: How does the inability of AI to feel impact its intelligence?
ChatGPT o3-mini-high Pre-response reasoning process:
Reasoning
Laying out emotions. AI’s lack of emotions means it has a limited emotional intelligence, affecting its adaptability and depth of understanding.
Processing data without feeling. AI’s lack of emotions allows for cold, rational data analysis. Interestingly, this doesn’t impede cognitive tasks like classification or natural language processing.
Evaluating impact. Hmm, I’m thinking about how equating AI’s intelligence with wisdom or empathy highlights its lack of emotional nuance and moral judgment, unlike human decision-making.
Evaluating emotional constraints. OK, let me see. Without emotions, AI’s capabilities are limited. It helps with data processing but struggles with empathy, moral decisions, and emotionally complex tasks.
Understanding intelligence’s facets. Weighing whether AI’s purely cognitive intelligence is a drawback compared to human intelligence, which combines both cognitive and emotional facets.
Assessing AI’s limitations. AI’s lack of emotions hinders creative empathy, subjective experience, and moral judgment, affecting tasks like decision-making and understanding human social dynamics.
Evaluating AI’s objectivity. Without emotions, AI remains purely algorithmic and objective, excelling in data processing and reducing emotional bias. Yet, it lacks human-like emotional insight and creativity.
Parsing AI’s capabilities. Wanting to understand how AI’s lack of emotions impacts its combined computational and emotional intelligence, I find it excelling in logic and data analysis but deficient in empathy and moral reasoning.
Understanding human context. OK, let’s think it through: AI’s inability to grasp human context and ethical dimensions arises from its lack of emotional processing, crucial for learning from experiences and valuing outcomes, unlike humans.
Noticing AI’s limited domains. It’s interesting to see that AI’s lack of emotional awareness constrains its capacity to experience empathy, morality, or intuition. Thus, beyond cognitive tasks, its emotional dimensions remain undeveloped.
Assessing AI’s abilities. Here’s my take: AI excels in data processing with emotional neutrality but struggles with emotional nuances, moral reasoning, and creative tasks that require emotional depth or understanding of human behavior.
ChatGPT o3-mini-high Response:
AI’s lack of feelings doesn’t impede its ability to process data or solve problems—it simply means that its form of intelligence is purely computational rather than holistic. In human cognition, intelligence is a blend of both analytical and emotional components. Emotions guide decision-making, provide context in ambiguous situations, and foster empathy, creativity, and moral reasoning. AI, on the other hand, operates solely on algorithms and data patterns, which can be an advantage in tasks requiring objectivity and consistency but a limitation in scenarios where human nuance is essential.
Because AI does not experience feelings, it can avoid biases stemming from emotional influence; however, this also means it may not fully grasp or predict behaviors influenced by human emotions. For example, an AI system may excel at pattern recognition in large datasets but struggle to understand the subtleties of human interaction or the moral dimensions of complex social issues. This gap becomes particularly relevant in fields like law, ethics, or any context where understanding human emotion and intent is key to decision-making.
In summary, while the absence of feelings allows AI to maintain a level of objectivity and efficiency, it restricts its intelligence to a form of “cold cognition” that lacks the depth provided by emotional awareness. This delineation underscores that AI’s intelligence is not inherently superior or inferior to human intelligence; rather, it is different—optimized for data processing and pattern recognition but not for the subjective, value-laden, and context-rich decisions that emotions help shape in human thought.
My Personal Evaluation of ChatGPT o3-mini-high Response and Pre-Response Disclosure
It is pretty close between DeepSeek and OpenAI’s latest, o3-mini-high, but I have to give the overall nod to OpenAI. Both gave detailed disclosures of their chain of thought, but OpenAI’s was overall better. DeepSeek’s final answer was longer and more complete, but I could always have asked for more detail with follow-up questions. Overall, I liked the quality of OpenAI’s ChatGPT o3-mini-high’s response better. OpenAI’s use of the phrase “cold cognition” was very impressive, both creative and succinct, and that made it the clear winner for me.
I remain very suspicious of DeepSeek’s legal position. U.S. courts and politicians may shut them down. This Chinese software has no privacy protection at all and is censored so that it will not say anything unflattering about the Chinese Communist Party or its leaders. Certainly no lawyers should ever use it for legal work. Another thing we know for sure: tons of litigation will flow from all this. Who says AI will put lawyers out of work?
Still, DeepSeek enjoyed a few days of superiority and forced OpenAI into the open, so I give it bonus points for that, but not a trillion of them. But see, Deepseek… More like Deep SUCKS. My honest thoughts… (YouTube video by Clever Programmer, 1/31/25), which hated and made fun of DeepSeek’s “think” comments.
Conclusion: Transparency as the Next Frontier in AI for Legal Professionals
DeepSeek’s Deep-Think feature, now OpenAI’s feature too, may not be a revolution in AI, but it is a step in the right direction—one that underscores the critical importance of transparency in AI decision-making. While the rest of DeepSeek’s R1 model was largely derivative, it was first to market, by a few days anyway, with the disclosure-of-reasoning feature. Thanks to DeepSeek, the timetable for the escape from the black box was moved up. The transparent era has begun, and we can all gain better insights into how AI reaches its conclusions. This level of transparency can help legal professionals refine their prompts, verify AI-generated insights, and ultimately make more informed decisions.
For the legal field, where the integrity of evidence, argumentation, and decision-making is paramount, AI must be more than a black-box tool. Lawyers, judges, and regulators must demand that AI models show their work—not just provide polished answers. Now it can and will. This is a big plus for the use of AI in the Law. Legal professionals should advocate for AI applications that can provide explainability and auditability. Without these safeguards, over-reliance on AI could undermine justice rather than enhance it.
Call to Action:
- Legal professionals must push for AI transparency. Demand that AI tools used in legal research, e-discovery, and case preparation disclose their reasoning processes.
- Develop AI literacy. Understanding AI’s limitations and strengths is now an essential skill in the practice of law.
- Engage with AI critically, not passively. Just as lawyers cross-examine human witnesses, they must interrogate AI outputs with the same skepticism.
Deep-Think and o3-mini-high are small but meaningful advances, proving that AI can be more than just an opaque oracle. Now it’s up to the legal profession to insist that all AI models embrace this new level of transparency.
Now listen to the EDRM Echoes of AI podcast on Ralph’s article, The Human Edge: How AI Can Assist But Never Replace. Hear two Gemini model AIs talk about the human difference. They wrote the podcast, not Ralph. Ralph does oversee, guide, and quality-control the podcast. Listen to the podcast episode here!
Ralph Losey Copyright 2025 – All Rights Reserved
Assisted by GAI and LLM Technologies per EDRM GAI and LLM Policy.