by Dylan Evans
There seems to be something universal about this storyline. Revolutionary socialism and fascism are heirs to this tradition just as much as Waco and Jonestown. Peak oil is just the latest incarnation of this pan-cultural impulse.
It’s easy to understand why those with little stake in the current world order, those who have been left behind by technological progress, might be especially susceptible to the millennial impulse. There are also, however, a number of people working at the cutting edge who have converted to this seductive creed at the height of their technical prowess. I was working in a leading robotics lab when I underwent my conversion. Before me there were other, far more eminent scientists who trod the same path.
Take Bill Joy for example. The co-founder of Sun Microsystems was a legendary programmer, writing pioneering software in the late seventies and early eighties. So it came as quite a shock to the tech world when, in the April 2000 issue of Wired magazine, Joy expressed deep concerns over the increasing power of computers and other new technologies. ‘Our most powerful 21st-century technologies,’ he declared, ‘are threatening to make humans an endangered species.’
In the Wired article, Joy traced his unease to a conversation he had had with Ray Kurzweil a few years before. Kurzweil has pioneered many developments in artificial intelligence, from optical character recognition to music synthesizers, and in 2012 he became Director of Engineering at Google. He is also a leading transhumanist, who hopes that future developments in technology will radically transform human nature for the better. When Kurzweil told Joy back in 1998 that humans ‘were going to become robots or fuse with robots or something like that,’ Joy was taken aback:
While I had heard such talk before, I had always felt sentient robots were in the realm of science fiction. But now, from someone I respected, I was hearing a strong argument that they were a near-term possibility [. . .] I already knew that new technologies like genetic engineering and nanotechnology were giving us the power to remake the world, but a realistic and imminent scenario for intelligent robots surprised me.
In the hotel bar, Kurzweil gave Joy a partial preprint of his forthcoming book The Age of Spiritual Machines, which outlined the technological Utopia he foresaw. On reading it, Joy’s sense of unease only intensified; he felt sure Kurzweil was understating the dangers, underestimating the probability of a bad outcome along this path. He found himself most troubled by a passage detailing a dystopian scenario in which all work is done by vast, highly organized systems of machines, and no human effort is necessary. At that point, the fate of the human race would be at the mercy of the machines.
It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines, nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex, and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
In Kurzweil’s book, you don’t discover until you turn the page that the author of this passage is Theodore Kaczynski, the notorious Unabomber, whose bombs killed three people during a seventeen-year terror campaign and wounded many others.
Kaczynski was a brilliant mathematician who resigned his job as an assistant professor at the University of California, Berkeley, just two years after receiving his PhD, and vanished. In 1971 he moved into a remote log cabin in Montana where he lived a simple life, without electricity or running water. In 1978 he began sending out mailbombs, targeting engineers, computer scientists and geneticists.
The Unabomber wrote his rambling 35,000-word manifesto on an old typewriter during the long years he spent alone in his cabin. In 1995 he wrote to several newspapers promising to end his bombing campaign if they printed it. After lengthy deliberations, the New York Times and the Washington Post agreed. By a curious twist of fate, that was Kaczynski’s undoing. His brother read the published essay, recognized the writing style, and went to the FBI. The Unabomber was arrested the following year, and is now serving a life sentence at the federal supermax prison in Florence, Colorado.
One of Kaczynski’s bombs had gravely injured Bill Joy’s friend David Gelernter, a brilliant and visionary computer scientist. At one time, Joy felt that he could easily have been the Unabomber’s next target. Nevertheless, when he read the extract from the Unabomber’s manifesto in Kurzweil’s book, Joy saw some merit in the reasoning. ‘I felt compelled to confront it,’ he would later write in his Wired article. Not long after the article was published, Joy decided he could no longer continue ‘working to create tools which will enable the construction of the technology that may replace our species’, and quit his job in Silicon Valley.
My first encounter with the Unabomber manifesto affected me just as deeply. In late 2005, shortly after my trip to Mexico, I visited my friend Nick Bostrom in Oxford. I had first met Nick eight years earlier, while we were both PhD students in the Philosophy Department at the London School of Economics. Now he had a fabulous job with perhaps the best job title I have ever seen: Director of the Future of Humanity Institute. Of course it would have been even more impressive without the final word.
Nick had written his doctoral dissertation about the doomsday argument, which purports to show that human extinction is likely to occur sooner rather than later. Now, five years on, he was still interested in the prospects for human extinction, and he was setting up a research programme to investigate the likelihood of a global catastrophe.
The night I went to have supper with Nick in his Oxford college was Halloween, and for once he didn’t look out of place in his long black gown. As we chatted over a festive meal of turkey and cranberry sauce, I told him about my growing disenchantment with technology, and my plans for the Utopia Experiment. Nick was interested, but not particularly surprised.
‘Have you read the Unabomber manifesto?’ he asked nonchalantly.
I hadn’t. In fact, I hadn’t even heard of the Unabomber.
‘You should definitely check it out,’ said Nick.
Intrigued, I searched for the manifesto online as soon as I got home, and read it that very night. And right away, I was hooked.
The manifesto is a curious and erratic document. Some sections are eloquent and persuasive, while others are merely rambling and adolescent. Nevertheless, I found it intoxicating, in a way that perhaps affected Bill Joy too. We were both working at the cutting edge of artificial intelligence when we first read it, and both had incipient doubts about the risks posed by that technology. We both felt complicit in creating the terrifying future the Unabomber prophesied, and we were fertile ground for his more extreme claims.
Part of the manifesto’s attraction also lay, I now think, in the easy explanation it provided for my growing sense of malaise. For the past year or so I had been feeling increasingly out of sorts. In hindsight, I can see the telltale early warning signs of depression in the diary I kept, but they crept up on me so gradually that I didn’t notice the subtle change I was undergoing. My mood darkened so slowly that by the time night had fallen, I couldn’t remember how things looked in the daylight.
The Unabomber gave me a convenient scapegoat for my ills. My angst had nothing to do with me; it was all society’s fault. More specifically, it was the fault of the industrial-technological system, which robbed us of our autonomy, diminished our rapport with nature, and forced us ‘to behave in ways that are increasingly remote from the natural pattern of human behavior’. It was hardly an original idea, and not one that had held any attraction for me before, but now the simple, stark language of the manifesto hypnotized me, and within a few days I was spouting Kaczynski’s strange gospel as fervently as a religious convert. I had, as they might say today, been ‘radicalized’. I did not conclude, as Kaczynski had done, that I should help bring on the collapse of civilization by killing computer scientists, but I did begin to look forward to the day when it would collapse of its own accord, and humanity could return to its pre-industrial past.
I can trace the development of these thoughts from the notes I made at the time in a black Moleskine notebook. Punctuating the various sketches for the Utopia Experiment are a series of entries recounting the story of a young monkey called Rousseau. Like his fellow monkeys, Rousseau doesn’t realize that the cage he lives in is not his natural habitat, for he was born in captivity. But then he strikes up a conversation with a baboon in a nearby cage.
The baboon reveals to him that the source of all his low-level angst, his sense of anomie and alienation, is the fact that he is not living as nature intended.
‘It’s not natural to live in a cage, to be fed by human beings, to have no predators, to spend your days aimlessly shuffling around this enclosure,’ says the baboon.
‘Rubbish!’ replies Rousseau. ‘It feels perfectly natural to me.’
‘That’s because it is all you’ve ever known,’ replies the baboon. ‘But if it was perfectly natural, do you really think you would have these feelings of emptiness and purposelessness? Monkeys who are not born in cages never have these feelings. Their whole life makes sense to them. They never have these existential crises because they are just immersed in the business of living. They get hungry, they find food, they eat, and they rest and play – all without the slightest uneasy thought. They suffer and die too, of course, but without any worries about the meaning of it all ever crossing their minds.’
‘Really?’ exclaims the young monkey, half amused by this wonderful thought, half doubting that it could possibly be true.
‘Sure,’ says the baboon, ‘let me show you.’
And so, together, they plan their escape from the zoo.
That little monkey was my alter ego, and the lab where I worked was feeling increasingly like a zoo. Not just because the robots we were building were all inspired by animals – we had robot rats and robots that ate flies – but because I felt like a caged animal myself. And it wasn’t just the lab that felt like a cage, either. The whole modern world felt increasingly artificial, far removed from the natural habitat in which the human race evolved, out there in the scrubland of the African savannah.
I began to lose interest in the classes I was giving, and would leave my students to work on the tasks I assigned them while I sat silently in a corner, my head buried in the Unabomber’s manifesto, or some other anti-technology tract. I now wonder what they made of all this. Did they ask themselves how this evangelist for artificial intelligence had turned into such a Luddite? Did they notice the full extent of my intellectual U-turn?
Perhaps they saw me in the same way as Nick Rosen, a documentary film-maker who came to interview me at the lab in the spring of 2006. Nick was researching a book about living off-grid, and had read about my plans for the Utopia Experiment online. A tall, slender man in his fifties with short dark hair and a relaxed gait, he arrived one afternoon with his tape recorder and notebook at the ready.
I greeted him at the reception desk and led him into the lab, a vast space with high ceilings, filled with strange-looking gizmos of all shapes and sizes.
‘Let me give you a tour,’ I said.
First I showed him our pièce de résistance, a small mobile robot that looked like a Frisbee on wheels, with a clear plastic box on top containing a slimy black liquid.
‘This is our fly-eating robot!’ I proudly announced.
Nick looked suitably impressed. ‘How does it work?’ he asked.
‘You see that plastic box? Well, it’s a microbial fuel cell. That black sludge contains lots of bacteria that can digest chitin. Chitin is what insect exoskeletons are made of. If we pop a few dead flies in this part, the bacteria will chew up the chitin, and electrons will be given off in the process. And these electrons can be used to generate an electric current, which powers the robot.’
‘What does the robot do with the energy it gets from eating flies?’ asked Nick.
‘It moves, in search of more flies.’
‘How fast does it move?’
‘Oh, only a few centimetres an hour.’
‘It’s not exactly the Terminator, is it?’ smiled Nick.
‘No, of course not. I don’t think the human race has to worry about being devoured by flesh-eating robots quite yet,’ I laughed.
‘But we might have to one day?’
I didn’t reply. The nightmare scenarios that worried me these days didn’t involve gruesome battles between humans and machines. They revolved around more subtle, but to my mind more plausible, scenarios, in which humans became so dependent on robots that they lost their autonomy. This was the future that worried the Unabomber, and now it worried me.
Next, I led Nick to another cubicle where a couple of my colleagues were fiddling around with little flexible filaments of plastic, and trying to attach them to a robot shaped like a very large rodent.
‘This is our robot rat. It will eventually be able to feel its way around in the dark by using its whiskers, just like real rats do.’
‘And why on earth are you trying to build a robot rat?’ asked Nick.
‘One reason is to try and understand how real rats navigate. We have a hunch about how rat brains work, but we don’t know if it’s correct, so we’re programming our robot rat to operate in the way we think real rats work. If the robot starts behaving like a real rat, that will mean we’re probably on the right track.’
Nick raised his eyebrows. ‘Does it have any practical uses?’
‘Yes, but it’s kind of secret.’
‘Military?’
I nodded conspiratorially. ‘Let’s put it this way: a robot that can find its way around a cave in the dark, without giving itself away by shining a light ahead of it, could come in handy in certain situations.’
‘So, Bin Laden is sitting in his cave in Afghanistan, and all of a sudden a robot rat appears. What’s the rat going to do? Arrest him? Or will it be a rat suicide bomber?’
‘Stranger things have happened. In World War Two the US military hatched a plan to attach little firebombs to bats and drop them by aircraft over Tokyo. The little creatures would fly into the attics of the wooden houses and hang from the rafters and – boom!’
‘The US military has always been pretty creative, I guess.’
‘And they do fund a hell of a lot of research in robotics,’ I said. ‘Don’t bite the hand that feeds you.’
I showed Nick around the rest of the lab, finishing at my own little area, where my research assistant Peter sat at a desk behind a disembodied head.
‘This is Peter,’ I said. Then, pointing to the head, ‘And this is Eva.’
Eva’s face was clearly female. She had large brown eyes, long dark eyelashes and full red lips. Her skin, however, only stretched as far back as her ears and forehead. Behind that, where the hair should have been, was a chaotic arrangement of little motors, which pulled little wires attached to the back of the skin on her face. By activating the motors in different combinations, we could program Eva to make a variety of facial expressions.
‘It’s creepy!’ exclaimed Nick. ‘The skin doesn’t look quite right. It’s a bit too rubbery.’
‘Yeah, skin is one of the hardest things to get right. And it’s quite normal to find it creepy. It’s the valley of the uncanny.’
‘The what?’
‘The valley of the uncanny. It’s a theory put forward by a Japanese roboticist in the 1970s. He argued that robots would become more acceptable as they came to resemble people more closely, but only up to a point. When the resemblance is almost, but not quite, perfect, people will suddenly experience a kind of revulsion.’
Nick leaned over towards Eva so his eyes were on a level with hers. He seemed to be looking for some kind of reaction, as if he thought there might be a soul lurking behind the blank expression.
Peter hit a button on his keyboard, and Eva sprang to life. She tilted her head back, fluttered her eyelashes a couple of times, and made a big smile.
Nick recoiled in horror. ‘Jesus!’ he said. ‘She made me jump!’
Eva’s lips turned down at the sides and she frowned.
‘Aaah! Sorry! Poor thing. I didn’t mean to offend you,’ apologized Nick. ‘Damn, it’s a bloody robot! What on earth am I saying?’
‘You see?’ I smiled. ‘It’s quite easy to start projecting some kind of personality onto these robots, even if it’s just a head on a stand with no hair.’
‘Right,’ said Nick, recovering his composure. ‘Let’s do the interview.’
We sat down in my cubicle and I proceeded to explain how I had become increasingly alarmed by the dangers of climate change, peak oil and various other threats facing our modern world. What if they reached some kind of tipping point, and civilization collapsed? How would the survivors cope with life after the crash? This was what I wanted to explore in the course of the Utopia Experiment.
When Nick’s book, entitled How to Live Off-Grid, came out a year later, I was rather surprised by his description of me. ‘Dylan is in his mid-thirties,’ he wrote, ‘with small delicate features and mousy hair.’ So far so good. But then he added:
Behind rimless glasses, his eyes, I have to admit, were glinting madly, and with his deep, almost expressionless voice and self-effacing mannerisms, he fitted the stereotype of a scientist who believes that humanity must be saved from itself, whatever the cost.