Films from the Future


by Andrew Maynard


  Without a doubt, there’s a seductive lure to being able to play with technology without others telling you what you can and cannot do. And it’s a lure that has its roots in our innate curiosity, our desire to know, and understand, and create.

  As a lab scientist, I was driven by the urge to discover new things. I was deeply and sometimes blindly focused on designing experiments that worked, and that shed new light on the problems I was working on. Above all, I had little patience for seemingly petty barriers that stood in my way. I’d like to think that, through my research career, I was responsible. And through my work on protecting human health and safety, I was pretty tuned in to the dangers of irresponsible research. But I also remember the times when I pushed the bounds of what was probably sensible in order to get results.

  There was one particularly crazy all-nighter while I was working toward my PhD, where I risked damaging millions of dollars of equipment by bending the rules, because I needed data, and I didn’t have the patience to wait for someone who knew what they were doing to help me. Fortunately, my gamble paid off—it could have easily ended badly, though. Looking back, it’s shocking how quickly I sloughed off any sense of responsibility to get the data I needed. This was a pretty minor case of “permissionless innovation,” but I regularly see the same drive in other scientists, and especially in entrepreneurs—that all-consuming need to follow the path in front of you, to solve puzzles that nag at you, and to make something that works, at all costs.

  This, to me, is the lure of permissionless innovation. It’s something that’s so deeply engrained in some of us that it’s hard to resist. But it’s a lure that, if left unchecked, can too often lead to dark and dangerous places.

  By calling for checks and balances in AI development, Musk and others are attempting to govern the excesses of permissionless innovation. Yet I wonder how far this concern extends, especially in a world where a new type of entrepreneur is emerging who has substantial power and drive to change the face of technology innovation, much as Elon Musk and Jeff Bezos are changing the face of space flight.

  AI is still at too early a stage in its development for us to know what the dangers of permissionless innovation might be. Despite the hype, AI and AGI (Artificial General Intelligence) are still little more than algorithms that are smart within their constrained domains, but have little agency beyond this. Yet the pace of development, and the increasing synergies between cybernetic substrates, coding, robotics, and bio-based and bio-inspired systems, are such that the boundaries separating what is possible and what is not are shifting rapidly. And here, there is a deep concern that innovation with no thought to consequences could lead to irreversible and potentially catastrophic outcomes.

  In Ex Machina, Nathan echoes many other fictitious innovators in this book: John Hammond in Jurassic Park (chapter two), Lamar Burgess in Minority Report (chapter four), the creators of NZT in Limitless (chapter five), Will Caster in Transcendence (chapter nine), and others. Like these innovators, he considers himself above social constraints, and he has the resources to act on this. Money buys him the freedom to do what he wants. And what he wants is to create an AI like no one has ever seen before.

  As we discover, Nathan realizes there are risks involved in his enterprise, and he’s smart enough to put safety measures in place to manage them. It may not even be a coincidence that Ava comes into being hundreds of miles from civilization, surrounded by a natural barrier to prevent her escaping into the world of people. In the approaches he takes, Nathan’s actions help establish the idea that permissionless innovation isn’t necessarily reckless innovation. Rather, it’s innovation that’s conducted in a way that the person doing it thinks is responsible. It’s just that, in Nathan’s case, the person who decides what is responsible is clearly someone who hasn’t thought beyond the limit of his own ego.

  This in itself reveals a fundamental challenge with such unbounded technological experimentation. With the best will in the world, a single innovator cannot see the broader context within which they are operating. They are constrained by their understanding and mindset. They, like all of us, are trapped in their own version of Plato’s Cave, where what they believe is reality is merely their interpretation of shadows cast on the walls of their mind. But, unlike Plato’s prisoners, they have the ability to create technologies that can and will have an impact beyond this cave. And, to extend the metaphor further, they have the ability to create technologies that are able to see the cave for what it is, and use this to their advantage.

  This may all sound rather melodramatic, and maybe it is. Yet perhaps Nathan’s biggest downfall is that he had no translator between himself and a bigger reality. He had no enlightened philosopher to guide his thinking and reveal to him greater truths about his work and its potential impacts. To the contrary, in his hubris, he sees himself as the enlightened philosopher, and in doing so he becomes mesmerized and misled by shadow-ideas dancing across the wall of his intellect.

  This broader reality that Nathan misses is one where messy, complex people live together in a messy, complex society, with messy, complex relationships with the technologies they depend on. Nathan is tech-savvy, but socially ignorant. And, as it turns out, he is utterly naïve when it comes to the emergent social abilities of Ava. He succeeds in creating a being that occupies a world that he cannot understand, and as a result, cannot anticipate.

  Things might have turned out very differently if Nathan had worked with others, and if he’d surrounded himself with people who were adept at seeing the world as he could not. In this case, instead of succumbing to the lure of permissionless innovation, he might have accepted that sometimes, constraints and permissions are necessary. Of course, if he’d done this, Ex Machina wouldn’t have been the compelling movie it is. But as a story about the emergence of enlightened AI, Ex Machina is a salutary reminder that, sometimes, we need other people to help guide us along pathways toward responsible innovation.

  There is a glitch in this argument, however. And that’s the reality that, without a gung-ho attitude toward innovation like Nathan’s, the pace of innovation—and the potential good that it brings—would be much, much slower. And while I’m sure some would welcome this, many would be saddened to see a slowing down of the process of turning today’s dreams into tomorrow’s realities.

  Technologies of Hubris

  This tension, between going so fast that you don’t have time to think and taking the time to consider the consequences of what you’re doing, is part of the paradox of technological innovation. Too much blind speed, and you risk losing your way. But too much caution, and you risk achieving nothing. By its very nature, innovation occurs at the edges of what we know, and on the borderline between success and failure. It’s no accident that one of the rallying cries of many entrepreneurs is “fail fast, fail forward.”106

  Innovation is a calculated step in the dark; a willingness to take a chance because you can imagine a future where, if you succeed, great things can happen. It’s driven by imagination, vision, single-mindedness, self-belief, creativity, and a compelling desire to make something new and valuable. Innovation does not thrive in a culture of uninspired, risk-averse timidity, where every decision needs to go through a tortuous path of deliberation, debate, authorization, and doubt. Instead, seeking forgiveness rather than asking permission is sometimes the easiest way to push a technology forward.

  This innovation imperative is epitomized in the character of Nathan in Ex Machina. He’s managed to carve out an empire where he needs no permission to flex his innovation muscles. And because of this—or so we are led to believe—he has pushed the capabilities of AGI and autonomous robots far beyond what anyone else has achieved. In the world of Nathan, he’s a hero. Through his drive, vision, and brilliance, he’s created something unique, something that will transform the world. He’s full of hubris, of course, but then, I suspect that Nathan would see this as an asset. It’s what makes him who he is, and enables him to do what he does. And drawing on his hubris, what he’s achieved is, by any standard, incredible.

  Without a doubt, the technology in Ex Machina could, if developed responsibly, have had profound societal benefits. Ava is a remarkable piece of engineering. The way she combines advanced autonomous cognitive abilities with a versatile robotic body is truly astounding. This is a technology that could have laid the foundations for a new era in human-machine partnerships, and that could have improved quality of life for millions of people. Imagine, for instance, an AI workforce of millions designed to provide medical care in remote or deprived areas, or carry out search-and-rescue missions after natural disasters. Or imagine AI classroom assistants that allow every human teacher to have the support of two or three highly capable robotic support staff. Or expert AI-based care for the elderly and infirm that far surpasses the medical and emotional support an army of healthcare providers are able to give.

  This vision of a future based around human-machine partnerships can be extended even further, to a world where an autonomous AI workforce, when combined with a basic income for all, allows people to follow their dreams, rather than being tied to unfulfilling jobs. Or a world where the rate of socially beneficial innovation is massively accelerated, as AIs collaborate with humans in new ways, revealing approaches to addressing social challenges that have evaded our collective human minds for centuries.

  And this is just considering AGIs embedded in cybernetic bodies. As soon as you start thinking about the possibilities of novel robotics, cloud-based AIs, and deeply integrated AI-machine systems that are inspired by Nathan’s work, the potential applications begin to grow exponentially, to the extent that it becomes tempting to argue that it would be unethical not to develop this technology.

  This is part of the persuasive power of permissionless innovation. By removing constraints to achieving what we imagine the future could be like, it finds ways to overcome hurdles that seem insurmountable with more constrained approaches to technology development, and it radically pushes beyond the boundaries of what is considered possible.

  This flavor of permissionless innovation—while not being AI-specific—is being seen to some extent in current developments around private space flight. Elon Musk’s SpaceX, Jeff Bezos’ Blue Origin, and a handful of other private companies are achieving what was unimaginable just a few years ago, because they have the vision and resources to do this, and very few people telling them what they cannot do. And so, on September 29, 2017, Elon Musk announced his plans to send humans to Mars by 2024, using a radical reusable-rocket design that would have been inconceivable a year or so earlier.107

  Private space exploration isn’t quite permissionless innovation; there are plenty of hoops to jump through if you want permission to shoot rockets into space. But the sheer audacity of the emerging technologies and aspirations in what has become known as “NewSpace” is being driven by very loosely constrained innovation. The companies and the mega-entrepreneurs spearheading it aren’t answerable to social norms and expectations. They don’t have to have their ideas vetted by committees. They have enough money and vision to throw convention to the wind. In short, they have the resources and freedom to translate their dreams into reality, with very little permission required.108

  The parallels with Nathan in Ex Machina are clear. In both cases, we see entrepreneurs who are driven to turn their science-fiction-sounding dreams into science reality, and who have access to massive resources, as well as the smarts to work out how to combine these to create something truly astounding. It’s a combination that is world-changing, and one that we’ve seen at pivotal moments in the past where someone has had the audacity to buck the status quo and change the course of technological history.

  Of course, all technology geniuses stand on the shoulders of giants. But it’s often individual entrepreneurs operating at the edge of permission who hold the keys to opening the floodgates of history-changing technologies. And I must admit that I find this exhilarating. When I first saw Elon Musk talking about his plans for interplanetary travel, my mind was blown. My first reaction was that this could be this generation’s Sputnik moment, because the ideas being presented were so audacious, and the underlying engineering was so feasible. This is how transformative technology happens: not in slow, cautious steps, but in visionary leaps.

  But it also happens because of hubris—that excessive amount of self-confidence and pride in one’s abilities that allows someone to see beyond seemingly petty obstacles or ignore them altogether. And this is a problem, because, as exciting as technological jumps are, they often come with a massive risk of unintended consequences. And this is precisely what we see in Ex Machina. Nathan is brilliant. But his is a very one-dimensional brilliance. Because he is so confident in himself, he cannot see the broader implications of what he’s creating, and the ways in which things might go wrong. He can’t even see the deep flaws in his unshakable belief that he is the genius-master of a servant-creation.

  For all the seductiveness of permissionless innovation, this is why there need to be checks and balances around who gets to do what in technological innovation, especially where the consequences are potentially widespread and, once out, the genie cannot be put back in the bottle.

  In Ex Machina, it’s Nathan’s hubris that is ultimately his downfall. Yet many of his mistakes could have been avoided with a good dose of humility. If he’d been less of a fool and recognized his limitations, he might have been more willing to see where things could go wrong, or not go as he expected, and to seek additional help.

  Several hundred years ago and earlier, it was easier to get away with mistakes with the technologies we invented. If something went wrong, it was often possible to turn the clock back and start again—to find a pristine new piece of land, or a new village or town, and chalk the failure up to experience.109 From the Industrial Revolution on, though, things began to change. The impacts of automation and powerful new manufacturing technologies on society and the environment led to hard-to-reverse changes. If things went wrong, it became increasingly difficult to wipe the slate clean and start afresh. Instead, we became increasingly adept at staying one step ahead of unexpected consequences, finding new (if sometimes temporary) technological solutions with which to fix emerging problems.

  Then we hit the nuclear and digital age, along with globalization and global warming, and everything changed again. We now live in an age where our actions are so closely connected to the wider world we live in that unexpected consequences of innovation can potentially propagate through society faster than we can possibly contain them. These consequences increasingly include widespread poverty, hunger, job losses, injustice, disease, and death. And this is where permissionless innovation and technological hubris become ever more dangerous. For sure, they push the boundaries of what is possible and, in many cases, lead to technologies that could make the world a better place. But they are also playing with fire in a world made of kindling, just waiting for the right spark.

  This is why, in 2015, Musk, Hawking, Gates, and others were raising the alarm over the dangers of AI. They had the foresight to point out that there may be consequences of AI that lead to serious and irreversible impacts, and that, because of this, it may be expedient to think before we innovate. It was a rare display of humility in a technological world where hubris continues to rule. But it was a necessary one if we are to avoid creating technological monsters that eventually consume us.

  But humility alone isn’t enough. There also has to be some measure of plausibility around how we think about the future risks and benefits of new technologies. And this is where it’s frighteningly easy for things to go off the rails, even with the best of intentions.

  Superintelligence

  In January 2017, a group of experts from around the world got together to hash out guidelines for beneficial artificial intelligence research and development. The meeting was held at the Asilomar Conference Center in California, the same venue where, in 1975, a group of scientists famously established safety guidelines for recombinant DNA research. This time, though, the focus was on ensuring that research on increasingly powerful AI systems led to technologies that benefited society without creating undue risks.110 And one of those potential risks was a scenario espoused by University of Oxford philosopher Nick Bostrom: the emergence of “superintelligence.”

  Bostrom is director of the Future of Humanity Institute at the University of Oxford, and is someone who’s spent many years wrestling with existential risks, including the potential risks of AI. In 2014, he crystallized his thinking on artificial intelligence in the book Superintelligence: Paths, Dangers, Strategies,111 and in doing so, he changed the course of public debate around AI. I first met Nick in 2008, while visiting the James Martin School at the University of Oxford. At the time, we both had an interest in the potential impacts of nanotechnology, although Nick’s was more focused on the concept of self-replicating nanobots than on the nanoscale materials of my world. Back then, AI wasn’t even on my radar. To me, artificial intelligence conjured up images of AI pioneer Marvin Minsky and what was, at the time, less-than-inspiring work on neural networks. But Bostrom was prescient enough to see beyond the threadbare hype of the past, toward a new wave of AI breakthroughs. And this led to some serious philosophical thinking around what might happen if we let artificial intelligence, and in particular artificial general intelligence, get away from us.

  At the heart of Bostrom’s book is the idea that, if we can create a computer that is smarter than us, it should, in principle, be possible for it to create an even smarter version of itself. And this next iteration should in turn be able to build a computer that is smarter still, and so on, with each generation of intelligent machine being designed and built faster than the one before, until, in a frenzy of exponential acceleration, a machine emerges that’s so mind-bogglingly intelligent it realizes people aren’t worth the trouble, and does away with us.
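  The shape of this runaway feedback loop is easier to see with some toy numbers. The short Python sketch below is purely illustrative (the figures, and the assumption that each generation is fifty percent more capable than its designer and correspondingly faster at designing its successor, are hypothetical and appear nowhere in Bostrom’s book), but it captures the logic of the argument:

```python
# A toy model of recursive self-improvement. All numbers here are
# hypothetical illustrations, not estimates from Bostrom's book.

intelligence = 1.0   # capability of the first human-level machine
design_time = 12.0   # months that machine needs to design its successor
elapsed = 0.0        # total months since the first machine appeared

for generation in range(1, 11):
    elapsed += design_time
    intelligence *= 1.5   # assume each generation is 50% more capable...
    design_time /= 1.5    # ...and designs its successor in 2/3 the time
    print(f"generation {generation:2d}: capability x{intelligence:5.1f} "
          f"after {elapsed:4.1f} months")
```

  With these made-up numbers, capability grows without bound while the total elapsed time converges on a fixed limit (a geometric series summing to thirty-six months), so each generation arrives sooner than the last and there is no natural stopping point. Change the assumptions and the curve changes, but the feedback loop, and the difficulty of reasoning about its endpoint, stays the same.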

 
