Are Both Human and AI Generated Thoughts and Writings Mere Predictable Repetitions of Prior Language?
Introduction
My experiences with Large Language Model AI prompt me to wonder about language, ideas and intelligence. These questions bring to mind a quote usually attributed to Mark Twain:
“There is no such thing as a new idea. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope.” Consider also the famous quote of André Gide, the 1947 Nobel Prize winner for literature: “Everything that needs to be said has already been said. But since no one was listening, everything must be said again.” Based on what I’m learning with ChatGPT-4, it may be that all intelligible writing, by both AI and humans, is not new. It may be that everything we say is a statistically predictable regurgitation of the same basic word patterns. Ultimately, it may mean that human intelligence is not that special, and that something other than intelligence and speaking ability distinguishes us from our machines.
Predictably, the words of Ecclesiastes 1 also come to mind: “What has been will be again, what has been done will be done again; there is nothing new under the sun.” Oddly, for me, overburdened as I am with too many philosophy books, the ideas of the philosopher Friedrich Nietzsche come to mind next. Nietzsche, who, by the way, vehemently opposed antisemitism, was burdened with the nightmare of eternal recurrence: the thought that his life, indeed all of time, repeats itself forever, without change. This is a central idea of his Thus Spoke Zarathustra, an idea which, in the opinion of many, eventually led Nietzsche to a complete mental breakdown. It was pretty hard on the comic book character Doctor Strange too. The idea of eternal repetition – a time loop – especially when coupled with the depressing, and I think incorrect, thought that there is no hope for change, is disturbing. So too is the related idea of the repetitiousness and utter predictability of everything we say.
Defining the Core Question
Back to the fundamental question that use of LLMs prompts: is there a kind of kaleidoscopic repetition to all human creations – words, numbers, images, music and so on? Is a fractal repetition of a few human archetypes an inherent property of all intelligence, both human and artificial? Is it like a DNA spiral of combinatorics? We repeat the same basic – archetypal – word patterns over and over again, throughout history. Of course, the symbols, words and languages used slowly change, but not the underlying meanings.
Does that explain why LLM GPT software, which merely makes statistical predictions of the next words, writes as well as, or better than, most humans? Is that all we humans are doing when we speak and write, merely repeating patterns of words? Is it all just statistics? All predictable? Is what I am about to write predictable? Predetermined?
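To make that idea of “statistical predictions of the next words” a bit more concrete, here is a toy sketch of my own (in Python), assuming nothing more than a few borrowed sentences as a corpus. It is not how GPT actually works internally, which relies on neural networks trained on enormous amounts of text, but it shows the basic prediction-from-prior-language idea in miniature:

```python
# Toy illustration of next-word prediction from word-pair (bigram) counts.
# Real LLMs use neural networks over tokens, not raw counts; this is only
# a simplified sketch of the underlying statistical idea.
from collections import Counter, defaultdict

corpus = (
    "there is no new thing under the sun . "
    "what has been will be again . "
    "what has been done will be done again ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("has"))    # prints 'been'
print(predict_next("will"))   # prints 'be'
```

Scale that up from a handful of sentences to much of the written record of humanity, and from simple counts to deep neural networks, and you have, very roughly, the prediction game that LLMs play, which is what gives the questions above their bite.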
What happens to the inner soul, the self-awareness, that most people think is required to talk and write? What makes us special, if anything? If what we humans say and think is predictable, based on what we have said and thought before, then what is special about human intelligence? Where is the ghost in the machine?
Is this very question, and the blog responses that follow, just a statistical probability? How is my thinking and writing different from that of an LLM-based AI? This leads to the ultimate question: what is the difference between Human and Machine Intelligence? Are we all just ChatGPT-6.0?
Sam Altman’s Thoughts on Intelligence
The CEO of OpenAI, Sam Altman, has extensive experience with LLM artificial intelligence. He has asked himself these same fundamental questions. I suppose that too is predictable. In an interview on June 7, 2023, at an Economic Times event in India, Sam Altman shared his thoughts on these questions. (See the YouTube video by the Economic Times, starting at 50:52. The transcript below has been lightly edited for clarity. I highly recommend you watch the entire video.)
Question: What have you learned after doing AI for so long about humans and what do you think is your understanding of humans after doing AI?
Sam Altman: I grew up implicitly thinking that intelligence was this like really special human thing and kind of somewhat magical and I now think that it’s sort of a fundamental property of matter. That’s definitely a change to my world view. I think, kind of like the history of scientific discovery, that humans are less and less at the center. We used to think that the sun rotated around us, and then maybe at least, we were at the center of the Galaxy, and there wasn’t this big Universe and then multiverse. It is really kind of weird and depressing. If intelligence isn’t special, then we are again just further and further away from the main character energy.
But that’s all right. That’s sort of like a nice thing to realize actually. I feel like I have learned something deep, but I’m having a hard time putting it into words. There’s something about it, even if humans aren’t special in terms of intelligence, we are incredibly important. We won’t have the consciousness debate here, but I think there’s something strange and very important going on with humans. I really, deeply hope we preserve all of that.
Sam Altman
To provide perspective on Sam’s comment, consider what he says a few minutes later at the Economic Times event, where he asserts that AI is a tool, not a creature (found at 1:10:56 of the video):
I think this question of whether AI is a tool or a creature is something that really confuses people and it confused me for a while too. But I now think we are very much building a tool and not a creature, and I’m very happy about that. I think we should and will continue in that direction.
Sam Altman
So even though Sam believes intelligence is a fundamental property of all matter, he does not believe AI is a living being. It is an intelligent tool, not a living creature. This tool may become more intelligent than we are someday, and that day may come sooner than we think, but even then, the AI would still be just a tool. Its superintelligence would not magically transform the tool into a living creature with its own volition and desires. It is a mistake to anthropomorphize AI in that way. Still, it can be a dangerous tool in the wrong human hands. Fear the human dictators, not the intelligent AI tools they may misuse to manipulate people.
Conclusion
I reluctantly conclude, as many have previously observed, that our thinking is largely repetitive and predictable. The same holds true for our near-constant inner self-talk, something that meditators, such as Sam Altman, try to stop. The old sayings about thinking and speech are true. There is little new under the sun, and little that can be said that has not been said before. Repetition of thought patterns, some good, some bad (unhealthy), with slight fractal variances (and, I contend, in opposition to Nietzsche, with occasional outright mutations – yes, there is hope for real change), seems to be how our minds work.
It seems to be the way our whole world works. Mathematicians like Benoit Mandelbrot have shown that there are basic patterns in nature that repeat with zig-zag, fractal recursiveness. See: What Chaos Theory Tell Us About e-Discovery and the Projected ‘Information → Knowledge → Wisdom’ Transition. These insights are repeated in the latest computational information theories. See: What Information Theory Tell Us About e-Discovery. The discoveries of DNA show this too (along with the unpredictable mutations). Everything seems to be combinatorics. The only thing different about our current tech age – the mutant change, so to speak – is that we now have the power of computers and statistical analysis. Our technology now allows us to record vast amounts of data and, from that, make reliable predictions of the next words. It allows us to use this power to create machines that think and talk as we do – intelligent machines.
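For readers who would like to see that fractal idea in miniature, here is a short illustrative sketch of my own (in Python, deliberately simplified) of the single repeated rule, z = z² + c, behind Mandelbrot’s famous set. One simple formula, iterated over and over with slightly different starting points, produces endlessly varied yet self-similar patterns:

```python
# Minimal sketch of the Mandelbrot iteration: one simple rule, repeated,
# yields intricate, self-similar structure.

def escapes(c: complex, max_iter: int = 50) -> bool:
    """Return True if repeatedly applying z = z*z + c drives z off to infinity."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # beyond this bound the point always escapes
            return True
    return False

# Print a tiny ASCII view of the set: '#' marks points that stay bounded.
for row in range(21):
    y = 1.2 - row * 0.12
    line = ""
    for col in range(61):
        x = -2.1 + col * 0.05
        line += "." if escapes(complex(x, y)) else "#"
    print(line)
```

Repetition of one simple rule, with small variations in where you start, yields both sameness and novelty, which is roughly the point being made here about our minds and our language.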
A comforting implication of this realization is that, although there are so many wildly different cultures and languages on this planet, deep down, we are all the same. We are connected by an invisible web of particles, waves, DNA, math and meaning. That is, I suspect, one reason GPTs are so good at translations. We are tied together by the natural transformers of our human minds and the archetypal language they create, for both human and machine.
We people of Earth are one, united by both physical form and language, both words and numbers. We need only seek the common denominators. My hope is that AI will help us to do that, even the relatively primitive pre-AGIs that we have today, the LLMs like ChatGPT. Do not fear them. Fear the negative words and the people controlled by them, the haters and power mongers.
Like Sam Altman, I reluctantly conclude that intelligence is not a special, uniquely human ability. But like Sam, I also
“… feel like I have learned something deep, but I’m having a hard time putting it into words. There’s something about it, even if humans aren’t special in terms of intelligence, we are incredibly important. . . . I think there’s something strange and very important going on with humans. I really, deeply hope we preserve all of that.”
Sam Altman
Me too. Play it again, Sam. Go humans! Unlike AI, we can stop the inner chatter and feel a profound, deep inner silence at the core of our minds. Some things are ineffable.
Ralph Losey Copyright 2023 – ALL RIGHTS RESERVED — (Published on edrm.net and jdsupra.com with permission.)