by Calum Chace
‘And competition with the AI is just one of the scenarios which don’t work out well for humanity. Even if the AI is well-disposed towards us it could inadvertently enfeeble us simply by demonstrating vividly that we have become an inferior species. Science fiction writer Arthur C Clarke’s third law famously states that any sufficiently advanced technology is indistinguishable from magic. A later variant of that law says that any sufficiently advanced benevolence may be indistinguishable from malevolence.’
As Professor Montaubon drew breath, Ross took the opportunity to introduce a change of voice. ‘How about you, Professor Christensen? Are you any more optimistic?’
‘Optimism and pessimism are both forms of bias, and I try to avoid bias.’
Ross smiled uncertainly at this remark, not sure whether it was a joke. It was not, and Christensen pressed on regardless. ‘Certainly I do not dismiss Professor Montaubon’s concerns as fantasy, or as scare-mongering. We had better make sure that the first super-intelligence we create is a safe one, as we may not get a second chance.’
‘And how do we go about making sure it is safe?’ asked Ross.
‘It’s not easy,’ Christensen replied. ‘It will be very hard to programme safety in. The most famous attempt to do so is the three laws of robotics in Isaac Asimov’s stories. Do not harm humans; obey the instructions of humans; do not allow yourself to come to harm, with each law subservient to the preceding ones. But the whole point of those stories was that the three laws didn’t work very well, creating a series of paradoxes and impossible or difficult choices. This was the mainspring of Asimov’s prolific and successful writing career. To programme safety into a computer we would have to give it a comprehensive ethical rulebook. Well, philosophers have been debating ethics for millennia and there is still heated disagreement over the most basic issues. And I agree with Professor Montaubon that a super-intelligence would probably be able to re-write its own rules anyway.’
‘So we’re doomed?’ asked Ross, playing to the gallery.
‘No, I think we can find solutions. We need to do a great deal more work on what type of goals we should programme into the AIs that are on the way to becoming human-level. There is also the idea of an Oracle AI.’
‘Like the Oracle of Delphi?’ asked Ross. He didn’t notice Matt and David exchanging significant glances.
‘Yes, in a sense. An Oracle AI has access to all the information it needs, but it is sealed off from the outside world: it has no means of affecting the universe – including the digital universe – outside its own substrate. If you like, it can see, but it cannot touch. If we can design such a machine, it could help us work out more sophisticated approaches which could later enable us to relax the constraints. My department has done some work on this approach, but a great deal remains to be done.’
‘So the race is on to create a super-intelligence, but at the same time there is also a race to work out how to make it safe?’ asked Ross.
‘Exactly,’ agreed Christensen.
‘I’m sorry, but I just don’t buy it,’ interrupted Montaubon, shaking his head impatiently. ‘A super-intelligence will be able to escape any cage we could construct for it. And that may not even be the most fundamental way in which the arrival of super-intelligence will be bad news for us. We are going to absolutely hate being surpassed. Just think how demoralising it would be for people to realise that however clever we are, however hard we work, nothing we do can be remotely as good as what the AI could do.’
‘So you think we’ll collapse into a bovine state like the people on the spaceship in Wall-E?’ joked Ross.
Montaubon arched his eyebrows and with a grim smile, nodded slowly to indicate that while Ross’s comment had been intended as a joke, he himself took it very seriously. ‘Yes I do. Or worse: many people will collapse into despair, but others will resist, and try to destroy the AI and those people who support it. I foresee major wars over this later this century. The AI will win, of course, but the casualties will be enormous. We will see the world’s first gigadeath conflicts, by which I mean wars with the death count in the billions.’ He raised his hands as if to apologise for bringing bad news. ‘I’m sorry, but I think the arrival of the first AI will signal the end of humans. The best we can hope for is that individual people may survive by uploading themselves into computers. But very quickly they will no longer be human. Some kind of post-human, perhaps.’
Ross felt it was time to lighten the tone. He smiled at Montaubon to thank him for his contribution.
‘So it’s widespread death and destruction, but not necessarily the end of the road for everyone. Well you’ve introduced the subject of mind uploading, which I want to cover next, but before we do that, I just wanted to ask you something, David and Matt. Professor Montaubon referred earlier to the fact that Dr Damiano is connected with the US military, and you have told us that you are considering working with that group. May I ask, how comfortable are you with the idea of the military – not just the US military, but any military – being the first organisation to create and own a super-intelligence?’
‘Well,’ David replied, ‘for one thing, as I said earlier, I have not decided what I am going to do next. For another thing, Dr Damiano is not part of the military; his company has a joint venture with DARPA. It is true that DARPA is part of the US military establishment, but it is really more of a pure technology research organisation. After all, as I’m sure you know, DARPA is responsible for the creation of the internet, and you’d have to be pretty paranoid to think that the internet is primarily an instrument of the US military.’
There was some murmuring from the invited audience. Clearly not everyone was convinced by David’s argument.
One of David and Matt’s conditions for participating in the programme had been that Ross would not probe this area beyond one initial question. Nevertheless, they had prepared themselves for debate about it. After a quick exchange of glances with his father, Matt decided to see if he could talk round some of the sceptics in the audience.
‘I will say this for Dr Damiano’s group,’ he said. ‘The US military is going to research AI whatever Victor Damiano does, whatever anyone else does. So are other military forces. Don’t you think the people who run China’s People’s Liberation Army are thinking the same thing right now? And Russia’s? Israel’s? Maybe even North Korea’s? But the US military is special, because of the colossal scale of the funds at its disposal. I for one am pleased that it is bound into a JV with a leading civilian group rather than operating solo.’
It was hard to be sure, but Matt had the impression that the murmuring became less prickly, a little warmer. To his father’s relief, Ross stuck to the agreement, and resisted the temptation to probe further.
TWENTY-SIX
Malcolm Ross smiled at the cameras and the studio guests. Confidently into the home straight, he gathered up his audience in preparation for the last leg of their intellectual journey together.
‘Let’s turn to the fascinating idea of uploading the human mind into a computer,’ he said. ‘Professor Montaubon, you said just now that uploading is our best hope of surviving the arrival of a super-intelligence. The key questions that follow from that seem to me to be: is it possible to upload a mind, both technologically and philosophically, and would it be a good thing? Reverend Cuthman: we haven’t heard from you yet. Perhaps this is an issue you might like to comment on?’
The reverend placed the tips of his fingers together and pressed them to his lips. Then he pointed them down again and looked up at Ross.
‘Thank you, Malcolm. Well I confess to feeling somewhat alienated from much of this conversation. That is partly because I’m not as au fait with the latest technology as your other guests. But more importantly, I think, I start from a very different set of premises. You see I believe that humans are distinguished from brute animals by our possession of an immortal soul, which was placed inside us by almighty God. So as far as I’m concerned, whatever technological marvels may or may not come down the road during this century and the next, we won’t be uploading ourselves into any computers because you can’t upload a soul into a computer. And a body or even a mind without a soul is not a human being.’
‘Yes, I can see that presents some difficulty,’ Ross said. ‘So if Dr Metcalfe here and his peers were to succeed in uploading a human mind into a computer, and it passed the Turing test, persuading all comers that it was the same person as had previously been running around inside a human body, you would simply deny that it was the same person?’
‘Yes, I would. Partly because it wouldn’t have a soul. At least, I assume that Dr Metcalfe isn’t going to claim that he and his peers are about to become gods, complete with the ability to create souls?’
David smiled and shook his head.
‘But even putting that to one side,’ the reverend continued, ‘this uploading idea doesn’t seem to preserve the individual. It makes a copy. A clone. Everybody has heard of Dolly, the cloned sheep, which was born in 1996. And many people know that the first animal, a frog, was cloned way back in the 1960s. But no-one is claiming that cloning preserves the individual. Uploading is the same. It just makes a copy.’
‘Yes,’ Ross said, thoughtfully. ‘This is an important problem, isn’t it, Professor Christensen? Uploading doesn’t perpetuate the individual: it destroys the individual and creates a copy.’
‘That is an important objection, I agree,’ Christensen said. ‘But not a fatal one, I think. If you could upload me into a computer and then give the newly created being a body exactly like mine, but leave me still alive, I might well deny that the new entity was me. That process has been called “sideloading” rather than “uploading”.
‘But,’ he held up an index finger, ‘imagine a different thought experiment. Imagine that you are suffering from a serious brain disease, and the only way to cure you is to replace some of your neurons. Only we don’t know which of your neurons we have to replace, so we decide to replace them all, in batches of, say, a million at a time. Because you are a very important TV personality we have the budget to do this,’ he smiled. ‘After each batch has been replaced we check to see whether the disease has gone, and also to check that you are still you. We replace each batch with silicon instead of carbon, either inside your skull, or perhaps on a computer outside your brain, maybe in your home, or maybe in the cloud. The silicon batches preserve the pattern of neural connections inside your brain precisely.
‘We find that the disease persists and persists despite the replacements, but happily, when we replace the very last batch of a million neurons we suddenly find that we have cured you. Now, at each of the checkpoints you have confirmed that you are still Malcolm, and your family and friends have agreed. There was no tipping point at which you suddenly stopped being Malcolm. But when we have replaced the very last neuron we ask the reverend here to confirm that you are still Malcolm. He says no, so we go to court and ask a judge or a jury to decide whether your wife still has a husband, your children still have a father, and whether you may continue to enjoy your property and your life in general. I imagine that you would argue strenuously that you should.’
‘Well, yes indeed. And I hope that my wife and kids would do the same!’ Ross laughed. ‘So issues of personal identity are going to cause some trouble in the post-AI world?’
‘Yes indeed,’ Christensen agreed. ‘I think the concept of personal identity will come under immense strain, and will be stretched in all sorts of directions. It’s all very well to say that copying a mind does not preserve it, but from the point of view of the copy, things may look very different. Imagine a situation where we carry out the process I described before, but instead of making just one version of you we create two.’
‘Two for the price of one?’ joked Ross.
‘Probably two for the price of three,’ laughed Christensen, ‘but terribly good value just the same. And imagine that the day after the operation both versions of you turn up at your house. Both versions are equally convinced that they are you, and none of your family and friends can tell the difference. What then?’
‘Could be tricky,’ Ross agreed. ‘Actually, I suppose that having a doppelganger could be handy at times.’
‘Indeed,’ Christensen agreed. ‘Some people might think that having their state of mind persisting in the form of a backup is sufficient to constitute survival. And here is another thought experiment. A man feels cheated by a business rival, or a rival in love. The man has himself backed up, and then shoots the rival and also himself. The backup is brought online, and claims immunity from prosecution on the basis that he is a different person. Would we let him get away with that?’
‘Hmm, it could become complex,’ said Ross. He looked at the other members of the panel, inviting them to contribute. Matt accepted the challenge.
‘Some people think that the human mind is actually a composite of different sub-minds. One of the early pioneers of AI, Marvin Minsky, wrote a book about this, called The Society of Mind. And now we are adding new bits. For many of us, our smartphone is like an externalised part of our mind. So is Wikipedia: it’s like an externalised memory. Also, people who are close to us are in some way a part of us. I’m sorry if this sounds cheesy, but when we thought my dad had died, it was as if a part of me had died.’
David reached across and placed his hand on top of Matt’s. Matt smiled and there was a warm murmur in the audience.
‘So,’ Matt continued, ‘if we do manage to upload human minds, perhaps their components will start to separate a little, and re-combine in different ways. After all, they will probably be hosted at least partially in the cloud for safety reasons. Perhaps if we do manage to upload, then the destination will be some kind of hive mind.’
‘It sounds as though you’ve been inspired by your experiences to read around the subject, Matt,’ Ross teased him. ‘I’m sure your tutors will be impressed.’
Matt laughed. ‘They’d probably be more impressed if I stuck to maths – at least until I’ve finished my degree.’
‘I’m sure they’ll cut you a bit of slack, given what you’ve been through.’
Ross paused to smile at the audience, contemplating the ratings that he was confident Matt was providing. Then he turned back to the panel.
‘We’re reaching the end of the programme. It’s been a fascinating discussion, and I’d like to finish with a couple of questions. The first one is this. If we – or our children – do live to see this amazing future, a future of uploaded minds living potentially forever: will we like it? I mean, won’t we get bored? And if everybody is going to live forever, how will we all fit on this finite planet? Professor Christensen?’
‘I don’t think the problem will be one of boredom,’ Christensen replied, ‘but there is a dystopian scenario in which uploaded minds work out – and it wouldn’t be hard – how to stimulate their pleasure centres directly, and simply sit around pleasuring themselves all day.’
‘You mean like those rats in laboratory experiments which starved themselves by choosing continually to press a neural stimulation button rather than the button that delivered food?’ asked Ross.
‘Yes, exactly that,’ Christensen nodded. ‘And it could be a little more sophisticated than that in the human case. In the novel Permutation City by Greg Egan, an uploaded man chooses to spend his time on pointless hobbies like carving many thousands of identical table legs, but he programmes himself to experience not just physical pleasure, but also profound intellectual and emotional fulfilment through these simple tasks.’
‘That sounds like the end of civilisation as we know it,’ joked Ross.
‘Yes, it does. But I very much doubt that things would collapse that way. Just as we humans are capable of enormously more complex, subtle and dare I say fulfilling experiences than chickens and chimpanzees, so I am confident that a super-intelligent uploaded human would be capable of enjoying more subtle and more profound experiences than we are. The more we find out about the universe, the more we discover it to be a fascinatingly challenging and weird place. The more we know, the more we know we don’t know. So I don’t believe that our descendants will run out of things to explore. In fact you may be interested to know that there is a nascent branch of philosophy – a sub-branch of the Theory of Mind, you might say – called the Theory of Fun, which addresses these concerns.’
‘As for over-population,’ Montaubon chipped in, ‘there is a very big universe to explore out there, and we now know that planets are positively commonplace. It won’t be explored by flesh-and-blood humans as shown in Star Trek and Star Wars: that idea is absurd. It will be explored by intelligence spreading out in light beams, building material environments on distant planets using advanced 3-D printing techniques. But actually, I suspect that the future for intelligence is extreme miniaturisation, so there is definitely no need to worry about running out of space.’
‘Well, that’s a relief, then,’ said Ross, teasing slightly. He turned to address his audience. ‘We’ve travelled a long way in this consideration of the prospects opened up by the search for artificial intelligence, and we’ve heard some outlandish ideas. Let’s finish by coming back to the near term, and what could become a pressing matter. Public acceptability.’
He turned back to the panel. ‘You all acknowledge that creating an artificial super-intelligence carries significant risks. But what about the journey there? Some people may well object to what you are trying to achieve, either from fear of some of the consequences that you have yourselves described, or from a belief that what you are doing is blasphemous. Others may fear that the benefits of artificial intelligence and particularly of uploading will be available only to the rich. There could be very serious public opposition once enough people become aware of what is being proposed, and take it seriously. The transition to the brave new world that you are aiming for could be bumpy. There will be vigorous debates, protests, perhaps even violent ones. Reverend, would you like to comment on that?’