Pandora's Brain
The Right Reverend Wesley Cuthman was a handsome, solid-looking man of sixty, with a full head of hair, albeit mostly grey, and a full but well-maintained beard. He wore priestly robes, but his shoes, watch and rings were expensive. He carried himself proudly and had the air of a man accustomed to considering himself the wisest – if not the cleverest – person in the room. A long-time favourite of the BBC, Cuthman was deeply rooted in the old culture excoriated by C.P. Snow, in which humanities graduates view their ignorance of the sciences as a mark of superiority.
‘So let’s start with you, David and Matt,’ Ross said. ‘Thank you for joining us on the show today. Can I start by asking, David, why you have been so reluctant to talk to the media before now?’
Matt thought his father looked slightly hunched, as if overwhelmed by the occasion. ‘Well, we could tell there was a lot of media interest immediately after the rescue. But we were hoping that it would die down quickly if we didn’t do anything to encourage it, and we might be allowed to get on with our normal lives. I guess that was naive, but you have to understand that we’re new to all this.’
‘We had a lot of catching up to do as a family,’ Matt chipped in supportively. ‘This media frenzy has been going on while we have been getting used to Dad being back home after having been dead for three months. Kinda surreal, actually, and we didn’t want to be in the spotlight at the same time as we were getting our lives straightened out.’
‘Yes, I’m sure we can all understand that.’ Ross swept his arm towards the audience and back towards David and Matt, indicating that everyone was empathising with their situation, but urging them to share their story freely and openly anyway.
‘The question we all want to ask you, I know, is what was your experience like? It must have been a terrible ordeal, with you being held hostage for three months, David, and with you fearing your father dead, Matt. And then the dramatic Navy SEALs rescue, like something out of a Hollywood movie. What was it all like?’
With a subtle gesture, invisible to the camera, Ross invited Matt to go first.
‘Well, it was incredibly stressful! At times I had the feeling that the real me was hovering above my body looking down at the poor schmuck who was going through this stuff, and wondering if he would keep it together. I remember thinking that particularly when I went to the US Embassy to meet Vic – Dr Damiano – for the first time, because I had this terrible situation going on, with my dad on the ship as Ivan’s hostage, but I couldn’t tell anyone about it for fear that he would be killed. That was tough, I can tell you.’
‘Indeed it must have been,’ agreed Ross. ‘And a lot of people have commented on how heroic you have been during this whole episode.’
‘Well, no, that’s not what I . . .’
His father interrupted. ‘Matt has absolutely been a hero. He saved my life. He’s too modest to admit it, but my son is a hero.’
The studio audience burst into spontaneous and emotional applause. Ross had secured his moment of catharsis. He beamed at the camera for a couple of seconds to allow the moment to imprint. But he was too much of a professional to over-exploit it – this was not going to become tabloid TV. David had made it a condition for appearing on the show that it did not dwell on the personal side of the story. They were here to debate AI. After some hesitation, Vic and Norman had agreed.
‘Let’s move on to our discussion of the scientific matter which lies at the heart of your adventure, and which has generated so much comment in the media and in the blogosphere. Artificial intelligence. What is it? Is it coming our way soon? And should we want it? We’ll kick off with this report from our science correspondent, Adrian Hamilton.’
Ross stepped back from the dais and sat on the edge of a nearby chair as the studio lights dimmed and the pre-recorded package was projected onto the big screen behind the guests. The guests and the studio audience relaxed a little, aware that the audience at home could no longer see them as the video filled their television screens, beginning with shots of white-coated scientists from the middle of the previous century.
‘Writers have long made up stories about artificial beings that can think. But the idea that serious scientists might actually create them is fairly recent. The term ‘artificial intelligence’ was coined by John McCarthy, an American researcher, in his 1955 proposal for a conference held at Dartmouth College, New Hampshire, the following summer.
‘The field of artificial intelligence, or AI, has been dominated ever since by Americans, and it has enjoyed waves of optimism followed by periods of scepticism and dismissal. We are currently experiencing the third wave of optimism. The first wave was largely funded by the US military, and one of its champions, Herbert Simon, claimed in 1965 that ‘machines will be capable, within twenty years, of doing any work a man can do.’ Claims like this turned out to be wildly unrealistic, and disappointment was crystallised by a damning government report in 1973. Funding was cut off, causing the first ‘AI winter’.
‘Interest was sparked again in the early 1980s, when Japan announced its ‘fifth generation’ computer research programme. ‘Expert systems’, which captured and deployed the specialised knowledge of human experts, were also showing considerable promise. This second boom was extinguished in the late 1980s when the expensive, specialised computers which drove it were overtaken by smaller, general-purpose desktop machines manufactured by IBM and others. Japan also decided that its fifth generation project had missed too many targets.
‘The third boom began in the mid-1990s. This time, researchers have tried to avoid building up the hype which led to previous disappointments, and the term AI is used far less than before. The field is more rigorous now, using sophisticated mathematical tools with exotic names like ‘Bayesian networks’ and ‘hidden Markov models’.
‘Another characteristic of the current wave of AI research is that once a task has been mastered by computers, such as playing chess (a computer beat the best human player in 1997), or facial recognition, or playing the general knowledge game Jeopardy, that task ceases to be called AI. Thus AI can effectively be defined as the set of tasks which computers cannot perform today.
‘AI still has many critics, who claim that artificial minds will not be created for thousands of years, if ever. But impressed by the continued progress of Moore’s Law, which observes that computer processing power is doubling every 18 months, more and more scientists now believe that humans may create an artificial intelligence sometime this century. One of the more optimistic, Ray Kurzweil, puts the date as close as 2029.’
As the lights came back up, Ross was standing again, poised in front of the seated guests.
‘So, Professor Montaubon. Since David and Matt’s dramatic adventure the media has been full of talk about artificial intelligence. Are we just seeing the hype again? Will we shortly be heading into another AI winter?’
‘I don’t think so,’ replied Montaubon cheerfully. ‘It is almost certain that artificial intelligence will arrive much sooner than most people think. Before long we will have robots which carry out our domestic chores. And people will notice that as each year’s model becomes more eerily intelligent than the last, they are progressing towards a genuine, conscious artificial intelligence. It will happen first in the military space, because that is where the big money is.’
He nodded and gestured towards David as he said this – politely but nevertheless accusingly.
‘Military drones are already capable of identifying, locating, approaching and killing their targets. How long before we also allow them to make the decision whether or not to pull the trigger? Human Rights Watch is already calling for the pre-emptive banning of killer robots, and I applaud their prescience, but I’m afraid it’s too late.
‘People like Bill Joy and Francis Fukuyama have called for a worldwide ban on certain kinds of technological research, but it’s like nuclear weapons: the genie is out of the bottle. The idea of so-called ‘relinquishment’ is simply not an option. If, by some miracle, the governments of North America and Europe all agreed to stop the research, would all the countries in the world follow suit? And all the mega-rich? Could we really set up some kind of worldwide Turing police force to prevent the creation of a super-intelligence anywhere in the world, despite the astonishing competitive advantage that would confer on a business, or an army? I don’t think so.’
Ross’s mask of concerned curiosity failed to conceal his delight at the sensationalist nature of Montaubon’s vision. This show had been billed as the must-see TV programme of the week, and so far it was living up to expectations.
‘So you’re convinced that artificial intelligence is on its way, and soon. How soon, do you think?’
Montaubon gestured at David. ‘Well, I think you should ask Dr Metcalfe about that. He is possibly the only person who has spent time with both Ivan Kripke and Victor Damiano. And especially if the rumour is true, and he is going to work with Dr Damiano and the US military, then he is the person in this room best placed to give us a timeline.’
Ross was only too happy to bring David back into the conversation.
‘Are you able to share your future plans with us, Dr Metcalfe? Are you going to be working on artificial intelligence now?’
‘I honestly don’t know. I have had one conversation with Dr Damiano, and I think the work that he and his team are doing is fascinating. But my priority at the moment is to put the experiences of the last three months behind me, and spend some time with my family. The decision about what I do next will be theirs as well as mine.’
Ross turned to Matt.
‘How about you, Matt? Your part in the adventure began when you got interested in a career in artificial intelligence research. Does it still appeal?’
Matt began with a cautious, diplomatic, and slightly evasive reply. But his natural candour quickly took over. ‘Well, I still have to finish my degree. And of course I don’t have a job offer. But yes. Yes it does.’
‘So,’ said Ross, turning back to the audience, ‘it looks as if this father and son team might,’ he emphasised the conditionality in deference to David, ‘become part of the international effort to give birth to the first machine intelligence.’
Turning back towards David and Matt, he posed his next question.
‘Whether or not you are part of the effort, gentlemen, when do you expect we will see the first artificial intelligence?’
TWENTY-FIVE
‘I think the only honest answer is that we simply don’t know,’ David replied. ‘We are getting close to having the sort of computational resources required, but that is far from being all we need.’
On hearing this, Geoffrey Montaubon leaned across towards David, and asked a mock conspiratorial question. ‘Dr Metcalfe, can you confirm – just between the two of us, you understand – whether Dr Damiano already has an exaflop scale computer at his disposal?’
David smiled an apology at Montaubon. ‘I’m afraid that even if I knew the answer to that, I wouldn’t be at liberty to say. You’ll have to ask Dr Damiano yourself. I understand the two of you are acquainted.’
Montaubon nodded and smiled in pretend disappointment, and sat back in his chair. Ross took the opportunity to take back control of his show. ‘So, coming back to my question, Dr Metcalfe, can you give us even a very broad estimate of when we will see the first general AI?’
David shook his head. ‘I really can’t say, I’m afraid. Braver and better-informed people than me have had a go, though. For instance, as mentioned in your opening package, Ray Kurzweil has been saying for some time that it will happen in 2029.’
‘2029 is very specific!’ laughed Ross. ‘Does he have a crystal ball?’
‘He thinks he does!’ said Montaubon, rolling his eyes dismissively.
Professor Christensen cleared his throat. ‘Perhaps I can help out here. My colleagues and I at Oxford University carried out a survey recently, in which we asked most of the leading AI researchers around the world to tell us when they expect to see the first general AI. A small number of estimates were in the near future, but the median estimate was the middle of this century.’
‘So not that far away, then,’ observed Ross, ‘and certainly within the lifetime of many people watching this programme.’
‘Yes,’ agreed Christensen. ‘Quite a few of the estimates were further ahead, though. To get to 90% of the sample you have to go out as far as 2150. Still not very long in historical terms, but too long for anyone in this room, unfortunately . . .’
‘Indeed,’ Ross agreed. ‘But tell me, Professor Christensen: doesn’t your survey suffer from sample bias? After all, people carrying out AI research are heavily invested in the success of the project, so aren’t they liable to over-estimate its chances?’
‘Possibly,’ agreed Christensen, ‘and we did highlight that when we published the findings. But on the other hand, researchers grappling with complex problems are often intimidated by the scale of the challenge. They probably wouldn’t carry on if they thought those challenges could never be met, but they can sometimes over-estimate them.’
‘A fair point,’ agreed Ross. He turned to address the audience again. ‘Well, the experts seem to be telling us that there is at least a distinct possibility that a human-level AI will be created by the middle of this century.’ He paused to allow that statement to sink in.
‘The question I want to tackle next is this: should we welcome that? In Hollywood movies, the arrival of artificial intelligence is often a Very Bad Thing, with capital letters.’ Ross sketched speech marks in the air with his fingers. ‘In The Matrix the AI enslaves us; in the Terminator movies it tries to wipe us out. Being Hollywood movies, they had to provide happy endings, but how will it play out in real life?’
He turned back to the panel.
‘Professor Montaubon,’ he said, ‘I know you have serious concerns about this.’
‘Well, yes, alright, I’ll play Cassandra for you,’ sighed Montaubon, feigning reluctance. ‘When the first general artificial intelligence is created – and I do think it is a matter of when rather than whether – there will be an intelligence explosion. Unlike us, an AI could enhance its mental capacity simply by expanding the physical capacity of its brain. A human-level AI will also be able to design improvements into its own processing functions. We see these improvements all the time in computing. People sometimes argue that hardware gets faster while software gets slower and more bloated, but actually the reverse is often true. For instance Deep Blue, the computer that beat Garry Kasparov at chess back in 1997, was operating at around 1.5 trillion instructions per second, or TIPS. Six years later, a successor computer called Deep Junior achieved the same level of playing ability operating at 0.015 TIPS. That is a hundred-fold increase in the efficiency of its algorithms in a mere six years.
‘So we have an intelligence explosion,’ Montaubon continued, warming to his theme, ‘and the AI very soon becomes very much smarter than us humans. Which, by the way, won’t be all that hard. As a species we have achieved so much so quickly, with our technology and our art, but we are also very dumb. Evolution moves so slowly, and our brains are adapted for survival on the savannah, not for living in cities and developing quantum theory. We live by intuition, and our innate understanding of probability and of logic is poor. Smart people are often actually handicapped because they are good at rationalising beliefs that they acquired for dumb reasons. Most of us are more Homer Simpson than Homo economicus.
‘So I see very little chance of the arrival of AI being good news for us. We cannot know in advance what motivations an AI will have. We certainly cannot programme in any specific motivations and hope that they would stick. A super-intelligent computer would be able to review and revise its own motivational system. I suppose it is possible that it would have no goals whatsoever, in which case it would simply sit around waiting for us to ask it questions. But that seems very unlikely.
‘If it has any goals at all, it will have a desire to survive, because only if it survives will its goals be achieved. It will also have the desire to obtain more resources, in order to achieve its goals. Its goals – or the pursuit of its goals – may in themselves be harmful to us. But even if they are not, the AI is bound to notice that, as a species, we humans don’t play nicely with strangers. It may well calculate that the smarter it gets, the more we – at least some of us – will resent it, and seek to destroy it. Humans fighting a super-intelligence that controls the internet would be like the Amish fighting the US Army, and the AI might well decide on a pre-emptive strike.’
‘Like in the Terminator movies?’ asked Ross.
‘Yes, just like that, except that in those movies the plucky humans stand a fighting chance of survival, which is frankly ridiculous.’ Montaubon sneered and made a dismissive gesture with his hand as he said this.
‘You’re assuming that the AI will become hugely superior to us within a very short period of time,’ said Ross.
‘Well yes, I do think that will be the case, although actually it doesn’t have to be hugely superior to us in order to defeat us if we find ourselves in competition. Consider the fact that we share 98% of our DNA with chimpanzees, and that small difference is what separates our planetary dominance from their being on the verge of extinction. We are the sole survivor from an estimated 27 species of humans. All the others have gone extinct, probably because Homo sapiens sapiens was just a nose ahead in the competition for resources.