Can You Live Forever?

I read the other day that Microsoft has plans to “resurrect” people from their “social data”. This resurrection would draw on such things as “images, voice data, social media posts, electronic messages, and written letters”, all of which would be used to clothe a chatbot – something capable of the sort of automated text conversation you may have experienced on a company’s website, where it tries to work out which inappropriate FAQ to redirect you to. Digital immortality is not a new idea – it has been kicking around science fiction for quite a while. In William Gibson’s Sprawl trilogy (which includes his famous Neuromancer), for instance, various characters are copied and uploaded to computers, persisting after their deaths as software and data. But it does seem that Microsoft is serious about this – serious enough to file a patent for the software design, anyway – so we can expect many more such patents and sensationalist news articles as other companies follow suit.
Reading the article also gave me a little jolt, as I had only just finished writing about that very scenario in my newly published sci-fi novel MUNKi, where one of the characters describes the prospect of this sort of digital “resurrection” as “like trying to reconstitute you from your dandruff”! And this, to me, is the crux of the issue.
Let’s assume that you could create a digital personality from all the crap you post on Twitter, all the photos of your own meals you share on Instagram, all the drunken midnight Facebook rants about people who park too far onto the pavement, etc, and somehow Microsoft or some other company could get that “thing” to communicate in a coherent way. Would it be “you”? Most people, I think, would say not. But why?
First of all, it seems to ignore the possibility of change and progression. You used to like the Matrix films, and much of your Twitter feed was once given over to defending films two and three against their hostile critical reception. However, over time, while you remained a fan of film one, you came to agree with the critics: the sequels were overly convoluted, pretentious and bloated, and had somewhere lost the spark that made the first film so great. But you don’t state this anywhere – there is no public recantation of your former admiration; you simply go quiet. What would the algorithm tasked with your resurrection make of this silence? Could it infer that you’d changed your mind from the fact that you hadn’t booked tickets to see Matrix IV, or would it just assume you were still a fan?
The problem is that many aspects of ourselves are mysteries to others – and even, sometimes, to ourselves. This is partly because not everything we think, feel or believe is publicly observable, but also because such things are often not rational or conscious. We are, much more than we like to think, a ragbag of unexamined assumptions, prejudices, shifting emotional moods, vague and unformed inclinations, etc, many of which are never hauled before the court of our own conscious scrutiny, let alone that of others. But what would that then mean for chatbot-you? Wouldn’t it be, at best, a poor, inaccurate and shallow imitation?
There are, it seems to me, two responses to this. The first is that we simply need more data. OK, scraping everything you say and do off social media may not be enough to resurrect a convincing you-bot, but if we also had access to, say, brain scans, ECG readings, pulse monitoring, DNA analysis, biometric measurements, and so on, then those could be used to reconstruct a convincing “you”.
The second response – which builds on the first – is that the fact that even you cannot fully understand yourself does not mean that nothing could. Yuval Noah Harari talks about this in his book on futuristic trends, Homo Deus, where he points out that if the sort of sophisticated algorithms we are now moving towards had been around when he was a teenager, they could have known he was gay long before he himself realised it – from observing his behaviour, what he said, the things he wore and bought, perhaps even from monitoring his physical reactions when presented with pictures of attractive male and female forms, via a webcam that could track pupil dilation. In other words, he predicts, the algorithms will soon know us better than we know ourselves.
I have some time for both these responses. Human beings are not overly blessed with self-knowledge, and we can frequently be shocked when presented with evidence of our own inconsistencies, prejudices, blind spots, etc. Computers, given all the data they can process, are also often better than we are at spotting patterns we might not be aware of. So it’s not impossible that a computer algorithm, given enough data, could come to accurate conclusions about you that you yourself might miss. However, both responses rely on the same underlying assumption: that data is the whole of the story; that if it were possible to objectively record every fact about your physical states and activity, then we would have succeeded in bottling your essence.
But I disagree.
Those who deny the “data is everything you need” approach tend to get written off as crackpot mystics or irrationalists, clinging to outdated and semi-religious notions of “soul” and “spirit”. That attitude to spiritual matters is itself debatable, and a conversation for another time, but I don’t think you need to subscribe to any religious belief in order to deny the adequacy of the data model of the self.
To measure something objectively is to view it from “outside”. “Inside” and “outside” are confusing and misleading terms here, but the point is that even brain scans are “external measurements” in a sense, merely recording physical neural interactions; they cannot convey what it is like to be the person having those experiences. Such measurements are reported in objective terms (numbers, units, equations), not in qualitative ones (sensations, perceptions, feelings), which are mostly non-rational in nature, and therefore unquantifiable. We might infer from external observations what someone is sensing or feeling – as we do when we judge that someone is angry from their tone of voice and facial expression – but we can only make sense of these observations because we ourselves are capable of such qualitative feelings. And if we can never quantify the qualitative, nor objectify the subjective, then data will always miss that aspect of the person. (This objection might not apply if we tried to artificially recreate the brain itself, but that’s a discussion for another time.)
In summary, Your Honour, I therefore assert that it will never be possible to resurrect you as a chatbot, because we can never succeed in copying the qualitative and non-rational aspects of a person’s conscious experience. I rest my case.
At one point in William Gibson’s Neuromancer, the protagonist, Case, seeks the technical help of Dixie Flatline, a hacker whose skills and personality have been digitally preserved after his death. The hacker agrees – on one condition: afterwards, Case must promise to “kill” him. The implication, obviously, is that digital immortality is its own kind of hell. But it’s not clear that it would be. If we can never give such “personalities” the ability to feel, would they even be aware that their existence was a torment? I think not – though I can understand why Gibson chose to imply it, because that is how a conscious, feeling person might react to such an existence. But in reality, all we would have preserved is their “reanimated” dandruff, which would not in itself be consciously aware (in the true sense).
But I suspect that Microsoft isn’t really interested in digital immortality. As with most such research, the chief incentive will likely be financial: what can the technology be used for? And of course, “Microsoft is helping you live forever” has more of a noble ring to it than “Give us your data so we can create better chatbots to take your call centre jobs”.
Gareth Southwell is a philosopher, writer and illustrator from the UK. He is the author of the near-future sci-fi novel MUNKi, which concerns robots, the hunt for the Technological Singularity, and people swearing in Welsh.
Image credit: “Marley’s Ghost” by E. A. Abbey (1876)