Why Robots Cannot Think
Can we separate our perceptions, feelings, and thoughts? If not, how can Artificial Intelligence hope to think like we do...?
“The idea of thinking as a process in the head, in a completely enclosed space, gives us something occult.” - Ludwig Wittgenstein
Why should there be something supernatural about imagining our thinking as a process sealed within our mind? It’s quite an orthodox view: thoughts, feelings, perceptions are all ‘trapped within our own head’. Having walled up our inner life into its own brain-shaped prison cell like this, it’s small wonder that we gawk at ‘artificial intelligence’ and mistakenly presume we’re watching a machine think like we do. The new robot kids on the block, large language models, magically produce intelligible sentences as if from nowhere! But Wittgenstein (vit-gun-shtine) makes the opposite accusation. This mundane idea of thinking being imprisoned in our heads is akin to claiming arcane powers! Exploring why he makes such an unusual claim not only clarifies why AI cannot think, it might free us from this conceptual prison that misunderstands the essence of human existence.
The final published collection of Wittgenstein’s notes appeared under the amusing title ‘Paper Scraps’. These disparate aphorisms repeatedly challenge what we assume about ‘thinking’, maintaining that something far more than mechanical process accounts for our capacity to understand one another. “How words are understood is not told by words alone”, he remarked, gesturing towards something hidden with terms like ‘occult’, ‘metaphysics’, and ‘theology’. He even suggests thinking ‘in our heads’ is “one of the most dangerous of ideas for a philosopher”, insisting that something outside ourselves is both necessary and overlooked.
Wittgenstein noted that what actually goes on during thinking doesn’t seem to interest us. Is it even a ‘mental activity’: if you talk to someone else (or even to yourself!) are you conducting two activities? And why can’t a cat be taught to retrieve (“Doesn't it understand what one wants?”) - what kind of failure to understand is entailed here? Cats and humans understand each other well enough to share a home, yet the cat lacks the joy of chase and recovery that provides such pleasure to so many dogs. This activity - retrieval - might provide a clue, since in other species, we dismiss as instinct whatever happens so effortlessly. Are we then to ask whether it is our instinct as humans to think...?
While I find it risky to invoke neurobiology in such a superstitious time as our own, my suspicion is that the neural networks within our brains are only capable of learning patterns. It is the intricate connections to other parts of our biology that connect these patterns to the worlds we live within. The hippocampus allows us to remember (running ‘learning in reverse’), while the orbitofrontal cortex offers satisfaction when we establish congruence between patterns (whether by solving equations, or interpreting events). The ancient amygdala (uh-mig-duh-la), our near-autonomous pilot for ‘fight or flight’, associates patterns with hot anger (fight) and cold fear (flight). Our thoughts cannot be severed from sensations, perceptions, and feelings - they are intimately bound up with them, all conditioned by communicative habits (words and grammar) that we inherit from those we grow up around. The inner life begins outside.
Wittgenstein proposed that what we mean by ‘thought’ is “what is alive in the sentence”, without which we are left with “a mere sequence of sounds or written shapes.” Any harmony with reality lies within “the grammar of the language” - that is, embedded in the linguistic habits of a living culture. This is the crucial deficit of large language model AI, which shares our capacity to learn patterns but can only handle this data as a dead thing. It cannot think for there is nothing alive in these sequences of words, only a string of probabilities memorised from a huge pile of borrowed texts.
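The point about ‘a string of probabilities’ can be made concrete with a deliberately crude sketch. The toy model below (a hypothetical illustration, far simpler than a real large language model, which uses neural networks rather than a lookup table) generates sentences purely by counting which word followed which in a borrowed text and sampling from those frequencies. Nothing in it understands anything; it only replays patterns:

```python
import random
from collections import defaultdict

# A toy "language model": a bigram table built purely from counted
# word frequencies in a borrowed text. Real LLMs are vastly more
# sophisticated, but the principle of emitting the next token from
# learned probabilities, with no grasp of meaning, is the same.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Record which words follow each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit words by sampling each successor from the counted frequencies."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

The output is grammatical-looking word salad: every transition is licensed by the borrowed text, yet there is nothing ‘alive’ in the sequence, only frequencies.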
Robots cannot remember, take pleasure in congruence, feel fear and anger, or indeed live in a world. Quite unlike us, their strictly mechanical processes do occur within sealed boxes. Not so for humans - nor for cats and dogs! Our thoughts were never imprisoned in our heads. They emerge from the world around us in a great flowing stream of thought entwined with life itself. It is solely within this living flow that our words take on their meanings.
I’ve split this comment off because it’s not germane to the subject at hand. But underneath the subject of this post, and also the next one, is the matter of how one views the nature of body/brain/mind. Since my view is that the source of everything in social reality arises from an individual body/brain/mind, our models in this realm are fundamental to everything we think about.
I appreciated your invocation of neurobiology. As I’ve tried to elucidate elsewhere, one of my fundamental concerns is with how humankind interacts with physical reality. My view is that we are entirely part of this reality, and like every other life form, our long-term success depends on how we interact with physical reality.
A few humans have a compulsion to create models of who and where we are. Our earliest models posited beings somewhat like us that orchestrated all of physical reality, including us. About three millennia ago some of us started to have thoughts that challenged this view. Less than a millennium ago this way of thinking picked up steam. Although at present most of humankind still holds to some version of our original view, a few of us do not. For me it is not a matter of which view is epistemologically correct, but only which view is most likely to result in long term success, say over a hundred millennia. In my view a really critical part of achieving success is understanding how our minds function, that is, what exactly determines the actions we take or don’t take. It is these actions that will determine our fate.
What will success look like? First, we won’t commit suicide, which on our present course is a distinct possibility. Second, we’ll have a civilization in which most of us can create lives that are physically and mentally comfortable most of the time, and at death most of us are reasonably satisfied with our life experience.
Over the course of three million millennia many species have come and gone. Has nature so evolved the human body/brain/mind that it is capable of creating such a future? Given what appears to be happening in the last decade or so, human civilization’s ultimate success is doubtful. Perhaps by the year 3000 we’ll have more clarity.
Hi Chris
An important piece for me. Thank you!
As usual, it’s a question of the meanings we assign to words. To argue that any mechanism humans can ever create “thinks” requires that we model a process we label ‘thinking’, and then show equivalence between the mechanism’s behavior in various situations and that of our model. Since my view is that the source of all human action is thought, any light we can throw on this concept is helpful.
What is a thought? I claim that on most days I have several hundred thoughts. I crudely capture a few aspects of a few of these thoughts in words. But no recording of an event is the event itself. Just what a thought is, is intrinsically unknowable. It seems to me that this is just another way of expressing some of the points in your penultimate paragraph.
I use flying as an analogy. We call certain behaviors of birds and insects ‘flying’. When we use machines that can transport us from one place to another using aerodynamics, we say we ‘fly’. But what we have accomplished is only crudely similar to what a bird does.
In a few score millennia humankind has figured out how to amplify its muscle power by several orders of magnitude. Partially as a result of this, a worldwide civilization that is several orders of magnitude more complex than where we began has emerged. One promise of AI is that humankind now has tools it can use to more rationally direct its complex creation.
As our robots become ever “smarter” it is absolutely critical that we always remember that they can never think as we do, and that any output they produce is artificial and not necessarily always in our best interest.