Sci-fi has long agreed that the one thing missing from Artificial Intelligence is emotion. From Star Trek’s Data to Arnie’s Terminator, the capacity to feel anger, sadness, joy, etc., has been taken as a milestone past which robots and computers could rightly claim the same rights as their biological equivalents. As Yonck acknowledges, to his credit, we are nowhere near that point – and may never be – but emotion is nonetheless destined to play an increasingly vital role in technological development for decades to come. So, while the question of true artificial sentience is interesting – and Yonck does dedicate some of the book to discussion of that more far-fetched possibility – the really exciting developments in machine emotional intelligence are already beginning to take place.
Machine emotional intelligence – or “affective computing”, as Yonck terms it – involves both a computer’s ability to read human emotion, and the related ability to simulate it. The Siris and Alexas of the future will not only know when you’re feeling a bit down or pissed off – adjusting your house lights and putting on some appropriate music – but will ultimately develop a degree of emotional behaviour and empathy that will allow them to act as “our assistants, our friends and companions, and yes, possibly even our lovers.” Westworld here we come! Well, perhaps not – or at least not yet.
The majority of the book takes the form of a survey of the various fields in which affective computing is being – and will be – developed. Each chapter is prefaced by a “scenario” – some past, current or future playlet – that explores how affective computing will impact (for instance) education, shopping, war, medicine, therapy, and of course sex (without which no book on robots would be complete). This is all, by and large, pretty interesting, and nicely put together. In this regard, the book acts as an extremely valuable primer on technological trends for any techno-fans or budding sci-fi writers (ahem!), and I can heartily (pun intended) recommend it on that score.
Yonck is closer to the techno-evangelist camp than neo-Luddites such as myself, but he does a fair job of cooling his futurist ardour with various caveats and healthy doses of scepticism. The closing chapters grow more philosophical – at which I pricked up my ears and sharpened my talons – but his overview of the philosophical terrain is both fair and balanced, even if it is not in-depth (e.g. there is no mention of intentionality), and there were points I would disagree with (I don’t think we need a third category in addition to “access” and “phenomenal” consciousness, really – but that’s by the by…). Overall, though, I found myself agreeing with his conclusion: machines are highly unlikely ever to develop the capacity for phenomenal consciousness (feelings, emotions, etc.) – making full artificial equivalents of humans improbable – but human-like artificial beings that can simulate human thought and behaviour are a far likelier prospect. Beyond that there is also the worry that AI will develop its own non-human intelligence (though without true sentience, it’s hard to see how this could be anything more than a sort of rampant runaway set of maths equations with thermonuclear warheads attached – not a settling thought, though, I agree).
There are a few points where I would question Yonck’s theoretical commitments, though these are also shared by other, less balanced commentators in the field. He parrots the common observation that technology is “neutral”, and that it’s up to us to decide its ethical import (which rather misses the point – one I credit to Marx, via Marshall McLuhan – that all technologies – “media” – come pre-embedded with values and norms: their own “message”). He flirts with the idea that, since we won’t be able to tell whether future AI has in fact evolved sentience, machines should be accorded rights regardless. Here he offers up the usual comparisons with liberal progress – racism, sexism – suggesting that future generations will have to be cured of their inherent robotism (as I guess you might call it). The justification for this equality seems simply to be that such machines would be “black boxes”, so complicated that we simply don’t know what’s going on inside them; and because we don’t know, we should err on the side of caution (which is a not very appealing form of the argument from ignorance – to me, anyway). He also seems very keen on the idea of “homo hybridus”, or cyborgism, and that some future blending of human and machine almost represents an inevitability (“It’s only a short step from a wooden leg to a bionic foot!” – not a direct quote). So while at times he does topple from his balanced pedestal into something more partisan – “The future’s a-comin’, y’all!” (again, not a direct quote) – Yonck’s book is on the whole sober and measured.
In summary, then: worth reading, if only for its accessible, well-researched and (generally) well-balanced overview of what tech has in store for us – perhaps.
Gareth Southwell is a philosopher, writer and illustrator from the UK. He is the author of the near-future sci-fi novel MUNKi, which concerns robots, the hunt for the Technological Singularity, and people swearing in Welsh.