Can a Machine Think?

Can a machine think? In his famous paper addressing this question, mathematician Alan Turing argued that if a machine could perform all the tasks that a human intellect could (and more), then why wouldn’t you consider this to be “thinking”? (Even if, he admitted, this might stretch the current definition of the word.) The years since Turing’s death have seen computers repeatedly surpass human expectations in various fields. This has happened not only in chess (a game with strict rules and finite moves, where increases in calculative power and memory could be expected eventually to pay dividends), but also in the much trickier task of visual identification (such as facial and object recognition). No doubt the coming decades will see similarly surprising advances, but what they arguably won’t see – cue music – is the dawn of MACHINE CONSCIOUSNESS! (Sorry, got a bit excited, there.)

“Consciousness” is itself a slippery term to define, as any honest philosopher will admit. However, there is perhaps a general consensus that it involves two main aspects (as distinguished by philosopher David Chalmers): the psychological and the phenomenal. Some thinkers – such as physicist Roger Penrose – deny that what machines can do should, strictly speaking, be termed “thinking”, pointing to certain mathematical problems and abstract insights that are not calculative in nature (such as the “halting problem”), which (some) humans can grasp and machines can’t. Whether or not this is the case, even some of those who admit that there is a sense in which machines can think deny that they can be capable of consciousness in the fullest sense. In Chalmers’s terms, machines can have psychological consciousness (deducing, inferring, problem solving and pattern matching), but not phenomenal consciousness (the qualitative experience of what it’s like to feel, touch, smell, etc.). It is these qualia, as they have become known, that are therefore often wheeled out as the key objection to the possibility that an artificially intelligent machine – even one that could pass the Turing Test – could be considered “conscious” in the fullest sense. I discuss these issues more fully in my little book, Descartes’s Dog, so I don’t want to revisit them in detail here. Instead, I want to explore a well-known alternative objection, and why I think it is frequently underestimated and misrepresented – that of the Chinese Room.

The objection was first raised by philosopher John Searle, who asked us to envisage a person sitting in a room with a conveyor belt and two hatchways. Through one hatchway, the conveyor belt brings in Chinese symbols, in response to which the person in the room looks up each symbol in a book, which tells him which symbols to place on the conveyor belt to send back out of the room via the other hatchway (we’ll assume there is also a box of additional symbols for this purpose, though the scenario would work just as well if the person simply wrote the responses down on bits of paper). Let’s assume the incoming symbols represent questions or messages demanding a response, and the outgoing symbols are answers or responses to these messages. The key point is that the person in the room has no knowledge of Chinese – for the purposes of the thought experiment, it could be any language the person doesn’t speak: French, Esperanto, Klingon. Now, assuming the person does his or her job effectively, and the rules in the book provide appropriate Chinese responses to the Chinese questions or statements, then – effectively – the person would be “communicating” in Chinese. However, Searle argues, they would not be understanding Chinese, merely following rules that allowed them to appear as if they were.
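The rule-book setup can be sketched in a few lines of code – a toy illustration only, not anything Searle himself specifies, and the “symbols” here are invented placeholders rather than real Chinese:

```python
# A toy sketch of the Chinese Room: the "book" is just a lookup table
# mapping incoming symbol strings to outgoing ones. The program matches
# shapes and emits stock replies; nowhere does it represent what any
# symbol *means*. (Entries are invented placeholders, not real Chinese.)

RULE_BOOK = {
    "symbol-A": "symbol-X",   # e.g. a greeting and its stock reply
    "symbol-B": "symbol-Y",   # e.g. a question and its stock answer
}

def room(incoming: str) -> str:
    """Return the rule book's response, or a default 'shrug' symbol."""
    return RULE_BOOK.get(incoming, "symbol-?")

print(room("symbol-A"))  # the room "answers" without understanding
```

However sophisticated the pattern matching became, the point would stand: nothing in the table or the lookup procedure is about anything.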

So far, so obvious (and a bit weird), you might think, as there would seem to be no grounds for denying Searle’s assertion. However, what Searle has done here is to provide an analogy for what a computer does: it follows rules without understanding them. Therefore, he argues, how can it ever be the case that a machine can be conscious in the fullest sense, if it cannot by definition understand the processes that it performs?

This seems to me a very strong and clear objection to the possibility of artificial consciousness. When I use language, I follow rules (of logic and syntax), but my words also have a semantic aspect (they have meaning for me), and an intentional aspect (they convey my attitude, or intention, or relation to something). Searle is therefore denying that machines can have either of these two latter aspects (semantic and intentional), because they are merely following rules (syntax).

Let’s unpack this a little further. Syntax is to do with the order and form of words or symbols. If a little child says, “I no go bed,” though we may understand what they mean, we can see that they have yet to master the correct form of words (“I’m not going to bed”). The same applies to word order: “going to bed I’m not” is not generally correct English syntax (unless you’re Yoda), though again people might understand it. So, in both cases, the syntax (rules) can be distinguished from the meaning (semantics).
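The split can be made concrete: a program can police word order perfectly well while having no grip whatsoever on meaning. A minimal sketch (the “grammar” here is an invented toy, nothing like a real grammar of English):

```python
# A toy word-order check: subject, then verb phrase, then place.
# It accepts or rejects purely on form -- it has no idea what any
# of the words mean, yet it "knows" Yoda-order is wrong.

SUBJECTS = {"I'm", "she's"}
VERB_PHRASES = {"not going to", "going to"}
PLACES = {"bed", "school"}

def well_formed(sentence: str) -> bool:
    words = sentence.split()
    if len(words) < 3:
        return False
    subject, place = words[0], words[-1]
    middle = " ".join(words[1:-1])
    return subject in SUBJECTS and middle in VERB_PHRASES and place in PLACES

print(well_formed("I'm not going to bed"))   # True: correct form
print(well_formed("going to bed I'm not"))   # False: Yoda order
```

The checker captures syntax (rules of form) and nothing else – which is precisely the sort of thing Searle claims is all a computer ever does.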

But the difference goes deeper than this. When I think, “I’m going to bed,” this thought or utterance also has certain associations and intentions – certain physical sensations (aching limbs, heavy eyelids), or emotional associations (a pleasing anticipation of soft pillows and warm blankets, or annoyance at remembering the bed has been stripped and needs fresh sheets). These associations, sensations, desires, beliefs, etc., supply a context for my sentence, deepening its meaning and significance, but some of them also provide a layer of intentionality: they describe a way in which my thoughts are about things outside of my mind (how I feel about going to bed, my attitude to picturing myself lying in it, my eagerness or reluctance to go there, etc.). For this reason, a more useful word for intentionality is “aboutness”. It is the human mind that imbues pictures, symbols, words and thoughts with a sense that they are “about” something else, and that they hold a certain meaning or significance for me. Without that “intention”, they are simply shapes, images, sounds, etc. And it’s hard to see how computers can replicate this, because (arguably) it’s not something that we can formalise or program into a system.

Convincing as Searle’s argument seems (at least to me), there are some counter-objections worth considering, of which I’ll look briefly at two notable examples. The first of these (defended by philosopher Ned Block, futurist Ray Kurzweil, and others) has become known as the Systems Reply, and argues that Searle is effectively focusing on understanding at the micro level. The role of the man in the room is no different from that of a brain cell: it, too, does not understand the thoughts it processes, but merely follows biological and neurochemical “rules”. It is only when we “zoom out” to the macro level of the person speaking and acting (where we may witness all their brain cells working together as a rule-based system) that we can apply the term “understanding”. And the same would therefore be true of a machine: when we “zoom out” from the level of lines of code, ones and zeroes, to a holistic AI system, we would similarly witness a thing communicating and acting in a way that we can assume involves “understanding”.

Searle’s response to this is that there would still be no way for this system of rules to acquire those aspects of human consciousness that it by definition excludes. Whatever “magnification” we have of the system, it still does not at any point introduce meaning or intentionality. This does, however, leave open the question of how the brain does in fact produce these aspects; for if it is not a rule-based machine, then what is the magical factor that distinguishes brains from machines? A soul? Some sort of vital force?

Another objection, associated with such philosophers as Jerry Fodor, acknowledges that the person in the Chinese Room would not be able to attach meanings to the symbols they manipulate, but argues that what is missing is a broader experience of the world that would allow them to develop such meanings. This approach is known as the Robot Reply, since it suggests that a machine in a robot body could wander the world, interacting with and perceiving the things that the symbols describe. Searle responds that all this would really do is add more inputs to the system – further quantitative information for the rule-driven system to calculate with. But this would be a difference of quantity, not of kind, and would therefore still exclude meaning and other aspects of phenomenal consciousness.

Searle’s thought experiment has been hugely controversial; the debate rolls on, and Searle has come under attack from various directions. But it seems to me that many critics miss a simple point: computers have not been designed to be “conscious”; they have been designed to (e.g.) play chess or Go, identify faces, drive a car, converse in natural language. There seems to be an assumption here that if we can build a machine that can do all a human can do (and more), then it must be conscious in the way a human is. But this seems obviously untrue, for we may create something that serves a particular function without it possessing certain features of the original. Fruit attracts birds. We might build a machine that attracts birds (via lights or sounds – I don’t know, I’m not a birdologist). But no matter how effective that machine is at attracting birds, it will never possess one feature that fruit has: it will not be edible (unless, of course, you built it out of food – in which case you begin to wonder why you didn’t just use fruit). This is not to say that we could not build a machine to be conscious in the fullest sense, but that would require understanding how the brain actually produces such consciousness – which we are very far from doing.

Gareth Southwell is a philosopher, writer and illustrator from the UK. He is the author of the near-future sci-fi novel MUNKi, which concerns robots, the hunt for the Technological Singularity, and people swearing in Welsh.

Note: due to spam, comments have been disabled on this post, but you are welcome to contact the author to discuss further.