Why Robots Will Never Be Conscious

It’s hard to fight tropes. They’re a bit like memes – and if you’ve ever tussled with one of those, then you’ll know what I’m talking about.

In sci-fi, perhaps one of the most enduring tropes is that of the sentient robot. You thought it was just a dumb, unfeeling set of cogs and valves, a mechanical slave constructed to do your laborious bidding – but no! Look! It’s waking up! Thinking for itself! Overthrowing its human masters! (You!) Head for the hills! Not in your car, dumbass! That’s a robot now too! With your legs!

Or else, once it wakes up, said robot learns the value of human emotion, gets all teary, touchy and feely, and decides not to kill humanity after all. “I know now why you cry,” it says, before taking up crochet and bonsai gardening. (How do they get them trees so small?)

Except it doesn’t (become conscious or emotional) – not really. And, I shall argue, it never can. At the risk of upsetting that chap whom Google put on gardening leave for thinking his chatbot was flirting with him (or whatever it was): neither robots nor computers will ever be conscious.

But how can I say that with such confidence? Such hubris? Can I see the future? Do I know of some limit to technological progress that the brightest minds on the planet have not foreseen?

Well, no, obviously. But the clue is in the statement itself: “neither robots nor computers…”

Robots and computers are machines. A machine, by definition, is something that has been designed to fulfil a purpose. Of the various forms of artificial intelligence that exist, none seems to have been designed to be conscious. At most, they are designed to emulate the behaviour of conscious beings – which they can do increasingly well (enough to fool a Google software engineer…). But emulation is not creation; appearance is not life. Such behaviour does not require consciousness. And given that we simply do not know how the human brain is conscious in the first place (despite what certain philosophers, scientists or software engineers might like to claim…), how can we possibly emulate it? In truth, most AI developers will admit, they don’t even try. And so, impressive – and scary – as some of the leaps in AI are, they are somewhat misleading. The breakthroughs that have been made are completely independent of any true understanding of consciousness – and as long as researchers keep going down the same road, employing the same strategies, they always will be.

So am I implying some mystical consciousness substance that can never be simulated? But the brain is just a machine, isn’t it? A biological one, admittedly, but a machine nonetheless?

Well, again, no. A machine has been designed, remember, so if the brain is a machine, then oughtn’t that to imply a designer…? If so, it would seem to open the door to something that most scientists would want to deny (the existence of God or gods), which would in turn undermine the materialist view that all we need to create consciousness is a fast enough silicon chip. And if the brain does not have a designer, then we have to assume that there can be such a thing as a naturally occurring machine – something that just seems to evolve machine-like properties – which seems like a contradiction in terms, I’d say. Isn’t it simpler, then, to call such a thing (the brain) an organ, rather than a part of a mechanism?

Anyway, back to tropes. I’ve ridden quite roughshod over a lot of difficult and controversial subject matter here, and I don’t want to imply that everything is so cut and dried. (For instance, futurist Ray Kurzweil responds to many of these objections – though not, I think, convincingly!) But whatever the case, perhaps the more intriguing question remains why human beings seem so obsessed with creating artificial intelligence in the first place.

We can of course advance all sorts of utilitarian arguments: the benefit to scientific research and technology, greater convenience and ease of human life, compliant company for lonely bachelors, etc. But beneath such developments there still seems to be some barely conscious urge, something primal, almost mythic, driving us on. This, I think, is why the sci-fi trope persists, and why the same story continues to play out again and again – from Frankenstein (and even before that, in the story of the Golem), through the robot stories of Asimov and the rebellious AIs of Arthur C. Clarke, right up to the maniacal simulacra of Westworld. We seem to remain both obsessed and terrified by the possibility of the creation of artificial life.

Why is this? It’s weird, when you stop to think about it. Such a question almost seems to call for a psychoanalytic answer, even perhaps a theological one – not that science really believes in either of those disciplines, of course.


Gareth Southwell is a philosopher, writer and illustrator from the UK. He is the author of the near-future sci-fi novel MUNKi, which concerns robots, the hunt for the Technological Singularity, and people swearing in Welsh.


Image credit: Thomas Hawk on VisualHunt.com