Films from the Future


by Andrew Maynard


  Groups such as ELF and Earth First!, together with their underlying concerns over the potentially harmful impacts of technological innovation, clearly provide some of the inspiration for RIFT. Yet, beyond the activities of these two groups, which have been predominantly aimed at combating environmental harm rather than resisting technological change, it’s surprisingly hard to find examples of substantial and coordinated techno-terrorism. Today’s Luddites, it seems, are more comfortable breaking metaphorical machines from the safety of their academic ivory towers than wreaking havoc in the real world. Yet there are still a small number of individuals and groups who are motivated to harm others in their fight against emerging technologies and the risks they believe those technologies represent.

  On August 8, 2011, Armando Herrera Corral, a computer scientist at the Monterrey Institute of Technology and Higher Education in Mexico City, received an unusual package. Being slightly wary of it, he asked his colleague Alejandro Aceves López to help him open it.

  In opening the package, Aceves set off an enclosed pipe bomb, and metal shards ejected by the device pierced his chest. He survived, but had to be rushed to intensive care. Herrera got away with burns to his legs and two burst eardrums.

  The package was from a self-styled techno-terrorist group calling itself Individuals Tending Toward the Wild, or Individuals Tending Toward Savagery (ITS), depending on how the Spanish is translated.134 ITS had set its sights on combating advances in nanotechnology through direct and violent action, and was responsible for two previous bombing attempts, both in Mexico.135

  ITS justified its actions through a series of communiqués, the final one released in March 2014, following an article on the group’s activities published by the scholar Chris Toumey.136 The communiqué the group released the day after the August 8 bombing reveals a distorted vision of nanotechnology that, to its members, justified short-term violence to steer society away from imagined existential risks. At the heart of these concerns was a fear of nanotechnologies creating “nanomachines” that would end up destroying the Earth.

  ITS’ “nanomachines” are remarkably similar to the nanobots seen in Transcendence. Just to be clear, these do not present a plausible or rational risk, as we’ll get to shortly. Yet it’s easy to see how these activists twisted together the speculative musings of scientists, along with a fractured understanding of reality, to justify their deeply misguided actions.

  In articulating their concerns, ITS drew on a highly influential essay, published in Wired magazine in 2000, by Sun Microsystems co-founder Bill Joy. Joy’s article was published under the title “Why the future doesn’t need us,”137 and in it he explores his worries that the technological capabilities being developed at the time were on the cusp of getting seriously out of hand—including his concerns over a hypothetical “gray goo” of out-of-control nanobots first suggested by futurist and engineer Eric Drexler.

  Joy’s concerns clearly resonated with ITS, and somehow, in the minds of the activists, these concerns translated into an imperative to carry out direct action against nanotechnologists in an attempt to save future generations. This was somewhat ironic, given Joy’s clear abhorrence of violent action against technologists. Yet, despite this, Joy’s speculation over the specter of “gray goo” was part of the inspiration behind ITS’ actions.

  Beyond gray goo, though, there is another intriguing connection between Joy and ITS. In his essay, Joy cited a passage from Ray Kurzweil’s book The Age of Spiritual Machines that troubled him, and it’s worth reproducing part of that passage here:

  “First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

  “If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than manmade ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

  Kurzweil’s passage shifted Joy’s focus of concern onto artificial intelligence and intelligent machines. This was something that resonated deeply with him. But, to his consternation, he discovered that this passage was not, in fact, written by Kurzweil, but by the Unabomber, and was merely quoted by Kurzweil.

  Joy was conflicted. As he writes, “Kaczynski’s actions were murderous and, in my view, criminally insane. …But simply saying this does not dismiss his argument; as difficult as it is for me to acknowledge, I saw some merit in the reasoning in this single passage.”

  Joy worked through his concerns with reason and humility, carving out a message that innovation can be positively transformative, but only if we handle the power of emerging technologies with great respect and responsibility. Yet ITS took his words out of context, and saw his begrudging respect for Kaczynski’s arguments as validation of their own ideas.

  The passage above that was cited by Kurzweil, and then by Joy, comes from Kaczynski’s thirty-five-thousand-word manifesto,138 published in 1995 by the Washington Post and the New York Times. Since its publication, this manifesto has become an intriguing touchstone for action against perceived irresponsible (and permissionless) technology innovation. Some of its messages have resonated deeply with technologists like Kurzweil, Joy, and others, and have led to deep introspection around what socially responsible technology innovation means. Others—notably groups like ITS—have used it to justify more direct action to curb what they see as the spread of a technological blight on humanity. And a surprising number of scholars have tried to tease out socially relevant insights on technology and its place within society from the manifesto.

  The result is an essay that some people find easy to read selectively, cherry-picking the passages that confirm their own beliefs and ideas, while conveniently ignoring others. Yet, taken as a whole, Kaczynski’s manifesto is a poorly informed rant against what he refers to pejoratively as “leftists,” and a naïve justification for reverting to a more primitive society in which individuals had what he believed was greater agency over how they lived, even if this meant living with poverty and disease.

  Fortunately, despite Kaczynski, ITS, and fictitious groups like RIFT, violent anti-technology activism in the real world continues to be relatively rare. Yet the underlying concerns and ideologies are not. Here, Bill Joy’s article in Wired provides a sobering nexus between the futurist imaginings of Kurzweil and Drexler, Kaczynski’s anti-technology-motivated murders, and the bombings of ITS. These three are worlds apart in how they respond to new technologies. But the underlying visions, fears, and motivations are surprisingly similar.

  In today’s world, most activists working toward more measured and responsible approaches to technology innovation operate within social norms and through established institutions. Indeed, there is a large and growing community of scholars, entrepreneurs, advocates, and even policy makers who are sufficiently concerned about the future impacts of technological innovation that they are actively working within appropriate channels to bring about change. Included here are cross-cutting initiatives like the Future of Life Institute, which, as was discussed in chapter eight, worked with experts from around the world to formulate the 2017 set of principles for beneficial AI development. There are many other examples of respected groups—as well as more shadowy and anarchic ones, like the “hacktivist” organization Anonymous—that are asking tough questions about the line between what we can do and what we should be doing to ensure new technologies are developed safely and responsibly. Yet the divide between legitimate and illegitimate action is not always easy to discern, especially if the perceived future impacts of powerful technologies could plausibly lead to hundreds of millions of people being harmed or killed. At what point do the stakes around powerful technologies become so high that the ends justify violent means?

  Here, Transcendence treads an intriguing path, as it leads viewers on a journey from abhorrence of RIFT to begrudging acceptance. As cyber-Will’s powers grow, we’re sucked into RIFT’s perspective that the risk to humanity is so great that only violent and direct action can stop it. And so, Bree and her followers pivot over the course of the movie from antagonists to heroes.

  This is a seductive narrative. If, by allowing a specific technology to emerge, we would be condemning millions to die, and many more to be subjugated, how far would you go to stop it? I suspect that a surprising number of people would harbor ideas of carrying out seemingly unethical acts in the short term for the good of future generations (and indeed, this is a topic we’ll come back to in chapter eleven and the movie Inferno). But there’s a fatal flaw in this way of thinking, and that’s the assumption that we can predict with confidence what the future will bring.

  Exponential Extrapolation

  In 1965, Gordon Moore, one of Intel’s founders, observed that the number of transistors being squeezed into integrated circuits was doubling around every two years. He went on to predict—with some accuracy—that this trend would continue for the next decade.

  As it turned out, what came to be known as Moore’s Law continued way past the 1970s, and is still going strong (although there are indications that it may be beginning to falter). It was an early example of exponential extrapolation being used to predict how a technology would evolve, and it remains one of the most oft-cited cases of exponential growth in technology innovation.
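
  To get a feel for what sustained doubling does, it’s worth running the numbers. Here’s a minimal Python sketch, starting from the Intel 4004’s roughly 2,300 transistors in 1971 (an illustrative starting point, not a figure from Moore’s paper):

      # A back-of-the-envelope sketch of Moore's Law, assuming roughly
      # 2,300 transistors in 1971 (the Intel 4004) and a doubling every
      # two years. The figures are illustrative, not Moore's own.

      transistors = 2_300
      for year in range(1973, 2022, 2):
          transistors *= 2  # one doubling per two-year step

      print(f"Projected transistor count in 2021: ~{transistors:,}")
      # Roughly 77 billion, in the same ballpark as the largest chips
      # actually shipping around then.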

  In contrast to linear growth, where outputs and capabilities increase by a constant amount each year, exponential growth leads to them multiplying rapidly. For instance, if a company produced a constant one hundred widgets a year, after five years it would have produced five hundred widgets. But if it started at a hundred widgets and increased production a hundredfold each year, after five years it would have produced over ten billion widgets. In this way, exponential trends can lead to massive advances over short periods of time. But because they involve such large numbers, predictions of exponential growth are dangerously sensitive to the assumptions that underlie them. Yet they are extremely beguiling when it comes to predicting future technological breakthroughs.
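
  The comparison is easy to verify. Here’s a short Python sketch of the two widget factories (the numbers, of course, are purely illustrative):

      # Linear vs. exponential widget production. The linear factory makes
      # a constant 100 widgets a year; the exponential one multiplies its
      # annual output a hundredfold each year.

      linear_total = 0
      exp_total = 0
      exp_output = 100  # exponential factory's output in year one

      for year in range(1, 6):
          linear_total += 100
          exp_total += exp_output
          print(f"Year {year}: linear {linear_total:,} vs. "
                f"exponential {exp_total:,}")
          exp_output *= 100  # output multiplies a hundredfold each year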

  Moore’s Law, it has to be said, has weathered the test of time remarkably well, even when data that predates Moore is taken into account. In the supporting material for his book The Singularity Is Near, Ray Kurzweil plotted out the calculations per second per $1,000 of computing hardware—a convenient proxy for computer power—reaching back to some of the earliest (non-digital) computing engines of the early 1900s.139 Between 1900 and 1998, he showed a relatively consistent exponential increase in calculations per second per $1,000, representing a twenty-trillion-times increase in computing power over this period. Based on these data, Kurzweil projected that it will be only a short time before we are able to fully simulate the human brain using computers and create superintelligent computers that will far surpass humans in their capabilities. Yet these predictions are misleading, because they fall into the trap of assuming that past exponential growth predicts similar growth rates in the future.
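
  That twenty-trillion-times figure is straightforward to sanity-check. Here’s a short calculation, using only the growth factor quoted above, that shows the doubling rate it implies:

      import math

      # What doubling time does a twenty-trillion-fold increase between
      # 1900 and 1998 imply? The growth factor is from Kurzweil's data as
      # quoted above; the rest is arithmetic.

      growth_factor = 20e12
      years = 1998 - 1900

      doublings = math.log2(growth_factor)
      print(f"{doublings:.1f} doublings in {years} years, "
            f"one every {years / doublings:.2f} years")
      # About 44 doublings, one roughly every 2.2 years: close to the
      # Moore's Law cadence, which is why the plot looks so persuasive.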

  One major issue with extrapolating exponential growth into the future is that it massively amplifies uncertainties in the data. Because each step in the extrapolation multiplies the errors in the one before, it’s easy for predictions to be off by a factor of thousands or millions. These errors may look like small wobbles on plots like those produced by Kurzweil and others, but in real life, they can mean the difference between something happening in our lifetime or a thousand years from now.
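
  To see how quickly those uncertainties balloon, try projecting growth a century ahead under three barely distinguishable doubling times. The values below are hypothetical, chosen purely for illustration:

      # How sensitive exponential extrapolation is to its assumptions:
      # project growth a century ahead under slightly different doubling
      # times. All values are hypothetical, chosen for illustration.

      horizon = 100  # years

      for doubling_time in (1.8, 2.0, 2.2):  # years per doubling
          growth = 2 ** (horizon / doubling_time)
          print(f"Doubling every {doubling_time} years: "
                f"~{growth:.1e}x growth after {horizon} years")
      # Doubling times of 1.8 and 2.2 years look almost identical on a log
      # plot of historical data, yet their century-ahead projections differ
      # by a factor of roughly a thousand.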

  There is another, equally important risk in extrapolating exponential trends, and it’s the harsh reality that exponential relationships never go on forever. As compelling as they look on a computer screen or the page of a book, such trends always come to an end at some point, as some combination of factors interrupts them. If these factors lie somewhere in the future, it’s incredibly hard to work out when they will kick in, and what their effects will be.

  Of course, Moore’s Law seems to defy these limitations. It’s been going strong for decades, and even though people have been predicting for years that we’re about to reach its limit, it’s still holding true. But there is a problem with this perspective. Moore’s Law isn’t really a law, so much as a guide. Many years ago, the semiconductor industry got together and decided to develop an industry roadmap to guide the continuing growth of computing power. They used Moore’s Law for this roadmap, and committed themselves to investing in research and development that would keep progress on track with Moore’s predictions.

  What is impressive is that this strategy has worked. Moore’s Law has become a self-fulfilling prophecy. Yet for the past sixty-plus years, this progress has relied extensively on the same underlying transistor technology, with the biggest advances involving making smaller components and removing heat from them more efficiently. Unfortunately, you can only make transistors so small before you hit fundamental physical limits.

  Because of this, Moore’s Law is beginning to run into difficulties. What we don’t know is whether an alternative technology will emerge that keeps the current trend in increasing computing power going. But, at the moment, it looks like we may be about to take a bit of a breather from the past few decades’ growth. In other words, the exponential trend of the past probably won’t be great at predicting advances over the next decade or so.

  Not surprisingly, perhaps, there are those who believe that new technologies will keep the exponential growth in computing power going to the point that processing power alone matches that of the human brain. But exponential growth sadly never lasts. To illustrate this, imagine a simple thought experiment involving bacteria multiplying in a laboratory petri dish. Assume that, initially, these bacteria divide and multiply every twenty minutes. If we start with one bacterium, we’d have two after twenty minutes, four after forty minutes, eight after an hour, and so on. Based on this trend, if you asked someone to estimate how many bacteria you’d have after a week, there’s a chance they’d do the math and tell you you’d have five times ten to the power of 151 of them—that’s five with 151 zeroes after it. This, after all, is what the exponential growth predicts.
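
  The arithmetic behind that number is simple enough to check; here’s a minimal sketch, assuming exactly one doubling every twenty minutes:

      # Naive extrapolation of the petri-dish thought experiment: one
      # bacterium doubling every 20 minutes, for one week.

      doublings_per_hour = 3  # one doubling every 20 minutes
      doublings = doublings_per_hour * 24 * 7  # 504 doublings in a week

      population = 2 ** doublings  # Python handles big integers natively
      print(f"{doublings} doublings gives ~{population:.1e} bacteria")
      # 504 doublings gives ~5.2e+151 bacteria: the "five times ten to the
      # power of 151" in the text.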

  That’s a lot of bacteria. In fact, it’s an impossible amount; this many bacteria would weigh many, many times more than the mass of the entire universe. The prediction may be mathematically reasonable, but it’s practically nonsensical. Why? Because, in a system with limited resources and competing interests, something’s got to give at some point.
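
  The scale of the absurdity is easy to estimate. The figures below are commonly quoted ballparks (around 10^-15 kilograms for a single bacterium, and around 10^53 kilograms for the ordinary matter in the observable universe), not precise values:

      # A rough mass check on the week-long extrapolation, using ballpark
      # figures for the mass of a bacterium and of the observable universe.

      bacteria = 5e151
      mass_per_bacterium_kg = 1e-15
      universe_mass_kg = 1e53

      total_mass_kg = bacteria * mass_per_bacterium_kg
      print(f"~{total_mass_kg:.0e} kg of bacteria, or "
            f"~{total_mass_kg / universe_mass_kg:.0e} universes' worth")
      # Around 5e136 kg: roughly 5e83 times the mass of the observable
      # universe. The math is fine; the prediction is nonsense.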

  In the case of the bacteria, their growth is limited by the size of the dish they’re contained in, the amount of nutrients available, how a growing population changes the conditions for growth, and many other factors. The bacteria cannot outgrow their resources, and as they reach their limits, the growth rate slows or, in extreme cases, may even crash.
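
  Ecologists capture this pattern with the logistic growth curve, which tracks unchecked doubling at first and then flattens as the population approaches the dish’s carrying capacity. Here’s a minimal sketch; the carrying capacity is a hypothetical value, picked only to show the curve’s shape:

      import math

      # Logistic growth: the same 20-minute doubling early on, but capped
      # by a carrying capacity K. K is hypothetical; the point is the
      # shape of the curve, not the exact numbers.

      K = 1e9                # bacteria the dish can support
      r = math.log(2) / 20   # per-minute rate giving a 20-minute doubling
      n0 = 1.0               # starting population

      def population(t_minutes):
          """Closed-form logistic curve: K / (1 + (K/n0 - 1) * e^(-r*t))."""
          return K / (1 + (K / n0 - 1) * math.exp(-r * t_minutes))

      for hours in (1, 5, 10, 15, 40):
          print(f"after {hours:2d} h: {population(hours * 60):,.0f} bacteria")
      # Early on the curve tracks pure doubling almost exactly; as the
      # population nears K, growth stalls, and the week-long extrapolation
      # never materializes.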

  We find the same pattern of rapid growth followed by a tail-off (or crash) in pretty much any system that, at some point, seems to show exponential growth. The exponential bit is inevitably present for a limited period of time only. And while exponential growth may go on longer than expected, once you leave the realm of hard data, you really are living on the edge of reality.

  The upshot of this is that, while Kurzweil’s singularity may one day become a reality, there’s a high chance that unforeseen events are going to interfere with his exponential predictions, either scuppering the chances of something transformative happening, or pushing it back hundreds or even thousands of years.

  And this is the problem with the technologies we see emerging in Transcendence. It’s not that they are necessarily impossible (although some of them are, as they play fast and loose with what are, as far as we know, immutable laws of physics). It’s that they depend on exponential extrapolation that ignores the problems of error amplification and resource constraints. This is a mere inconvenience when it comes to science-fiction plot narratives—why let reality get in the way of a good story? But it becomes more serious when real-world decisions and actions are based on similar speculation.

  Make-Believe in the Age of the Singularity

  In 2003, Britain’s Prince Charles made headlines by expressing his concerns about the dangers of gray goo.140 Like Bill Joy, he’d become caught up in Eric Drexler’s idea of self-replicating nanobots that could end up destroying everything in their drive to replicate themselves. Prince Charles later backtracked, but not before his concerns had prompted the UK’s Royal Society and Royal Academy of Engineering to launch a far-reaching study on the implications of nanotechnology.141

 
