
Future Crimes


by Marc Goodman


  Though largely at the research-and-development stage today, nanoscale machines will make it possible to create nano-robots, further accelerating the already exponential changes under way in robotics and artificial intelligence and someday yielding robots a thousand times smaller than our own cells. These nano-bots will have huge implications for the field of robotics, able to build anything from rocket ships to injectable medical devices. Nanotechnology will also be immensely impactful in the world of computer processing, allowing us to build computers that are mind-blowingly powerful: a nano-computer the size of a sugar cube could have more processing power than exists in the entire world today.

  But small things can come with very large risks.

  Eric Drexler famously argued in his 1986 book, Engines of Creation, that if nanoscale machines (assemblers) could build materials molecule by molecule, then using billions of these assemblers one could build any material or object one could imagine. But to reach that scale, scientists would have to build the first few nano-assemblers in a lab and direct them to build other assemblers, which would in turn build more, growing exponentially with each generation. Drexler worried, however, that such a process could quickly grow out of control as assemblers began to convert all organic matter around them into the next generation of nanomachines, a possibility he famously called the “gray goo scenario,” in which the earth might be reduced to a lifeless mass overrun by nanomachines. How might such a doomsday scenario play out? Let’s say that in the future billions of nano-bots were released to clean up an oil spill in the ocean. Sounds great, except that a minor programming error might lead the nano-bots to consume all carbon-based objects (fish, plants, plankton, coral reefs) instead of just the hydrocarbons in the oil. The nano-bots might consume everything in their path, “turning the planet to dust.” To understand just how quickly this might happen, consider the example Drexler provides in his book:

  Imagine such a replicator floating in a bottle of chemicals, making copies of itself …[T]he first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight. At the end of ten hours, there are not thirty-six new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined—if the bottle of chemicals hadn’t run dry long before.
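  Drexler’s arithmetic is simple exponential doubling: one replication cycle every thousand seconds gives thirty-six doublings in ten hours. A minimal sketch of that calculation (the thousand-second cycle time is Drexler’s figure; everything else here is just the doubling math):

```python
# Exponential doubling of Drexler's hypothetical replicators.
DOUBLING_SECONDS = 1000  # one replication cycle, per Drexler's example


def replicators_after(seconds: int) -> int:
    """Population after the given elapsed time, starting from one replicator."""
    return 2 ** (seconds // DOUBLING_SECONDS)


ten_hours = 10 * 3600  # 36,000 seconds, i.e., 36 doubling cycles
print(replicators_after(ten_hours))  # 68719476736 — the "over 68 billion" in the quote
```

  The point of the example is that the count after ten hours is 2 to the 36th power, not 36: each generation doubles the population rather than adding to it.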

  While many have dismissed “gray goo” as highly improbable fantasy, others, including governments and NGOs, have given the scenario serious consideration in official reports, making it clear that there are some types of accidents that humanity simply cannot afford. Drexler himself eventually clarified his comments to downplay the gray goo scenario, calling it improbable. Whether or not an accidental release of biovorous self-replicating nano-bots ever takes place, the power of such technology will not go unnoticed by malicious actors, including terrorist organizations, who a decade or more in the future may explore these tools just as Aum Shinrikyo did with its chemical and biological weapons program in the 1990s.

  Another area of emerging science that holds the potential for tremendous transformation in the field of computing is quantum physics. Much work remains before quantum computing becomes mainstream, and in many tests of existing systems the reality has not quite matched the hype. Still, quantum computers hold the potential to perform calculations at speeds that may leave today’s machines in the dust: in one test carried out by Google and NASA, a developmental quantum computer processed several test algorithms thirty-five thousand times faster than classical methods running on off-the-shelf commercial servers. This could help answer some of the world’s most difficult problems, whether the hunt for new drug therapies or the creation of next-generation nanotechnology and artificial intelligence.

  Today’s computers are binary: they use only two possible values, a one or a zero, known as bits, to carry out their instructions. Quantum computers, on the other hand, leverage the idiosyncrasies of subatomic particles through quantum bits, or qubits, which can be a one, a zero, or a simultaneous mix of the two. In plain English, this allows quantum computers to test a huge number of possibilities at the very same time, and it brings far-reaching security implications. In particular, quantum computers hold the potential to completely nullify the computer security systems commonly in use today. Present computer security is based on cryptography, that is, using number theory, such as the multiplication of large prime numbers, to encode messages so that they are unreadable by unauthorized parties. For people to read your encrypted data, either they have to have the mathematical key, or they can “brute-force it” by doing the math over and over again, trying key after key or attempting to factor the large numbers involved until they hit on the correct solution. When we enter our passwords, encryption algorithms use them to derive the correct key that unlocks the message and provides authentication. Today, a brute-force attack is something most hackers never have to resort to. Instead, they rely on poorly implemented encryption protocols, computer malware, keystroke loggers, and human error to steal the cryptographic key required to read your credit card data or banking information.

  Absent the correct password, hackers would have to reverse engineer the encryption process, a computationally difficult and highly improbable feat using today’s computers. Even with a supercomputer, a brute-force attack would take billions of years to crack the 128-bit AES encryption that is today’s standard (the age of the universe is only 13.75 billion years). While classical computers can do only one calculation at a time, quantum computers can perform a huge number of calculations at once, leveraging the counterintuitive nature of quantum mechanics to arrive directly at the answers to very complex questions. In other words, a quantum computer could potentially bypass encryption protocols, allowing its owner to read everybody’s e-mail, transfer funds from bank accounts, control the financial markets, commandeer air traffic control systems, and manipulate critical infrastructures. Conversely, quantum technology might also be the breakthrough that allows for fully secure, uncrackable communications, since any observation or interception of a quantum encryption key in transit would change its content. Though you won’t pick up one of these in the Apple Store anytime soon, many governments around the world are working on building quantum computers capable of cracking today’s crypto technology and on developing their own quantum secure networks. Not surprisingly, the NSA has already appropriated nearly $100 million toward crafting a “cryptologically useful quantum computer” as part of its Penetrating Hard Targets project. To be clear, this is a massively difficult problem to solve, but whoever solves it first will wield tremendous concentrated power, something that person is unlikely to mention to all of those whose communications are being read and whose systems are being accessed.
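  The “billions of years” claim follows from the sheer size of the keyspace: a 128-bit key has 2 to the 128th power possible values, and even an implausibly fast classical search barely dents it. A back-of-the-envelope sketch (the guesses-per-second rate is an assumed figure chosen purely for illustration):

```python
# Why a classical brute-force search of a 128-bit keyspace is hopeless.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
AGE_OF_UNIVERSE_YEARS = 13.75e9  # figure cited in the text

keyspace = 2 ** 128            # number of possible 128-bit keys
guesses_per_second = 1e18      # assumed: a billion billion guesses per second

# On average a brute-force search finds the key after trying half the keyspace.
years_to_search_half = (keyspace / 2) / guesses_per_second / SECONDS_PER_YEAR

# Even at this absurd rate, the expected search time exceeds the
# age of the universe by a factor in the hundreds.
print(years_to_search_half / AGE_OF_UNIVERSE_YEARS)
```

  Raising the assumed guess rate by a factor of a thousand still leaves the search far beyond any practical horizon, which is why attackers steal keys rather than search for them.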

  Taken as a whole, the most powerful technologies of the twenty-first century, including robotics, synthetic biology, molecular manufacturing, and artificial intelligence, hold the power to create a world of unprecedented abundance and prosperity. From the creation of unlimited energy to the production of boundless food sources and monumental advances in medicine, exponential technologies can be an extraordinary force for good.

  But there is a flip side to these advances as well, as we have seen time and time again throughout this book. In the year 2000, Bill Joy, the former chief scientist at Sun Microsystems, provided a glimpse of how bad things could theoretically get in a seminal article published in Wired titled “Why the Future Doesn’t Need Us.” Joy bluntly warned that robotics, genetic engineering, and AI threaten to make human beings “an endangered species” as exponential technologies eventually grow beyond us and our control. Joy pointed out that all of our twenty-first-century technologies are being democratized, available to anybody with an Internet connection. There are robot-building clubs in high schools and synthetic biology competitions in colleges. AIs navigate our cars, and UAVs can be purchased at Costco. Unlike the nuclear threat, then, there is a stark mismatch between the potentially destructive power of these exponential technologies and their widespread availability to the common man today. This does not mean these technologies should be banned or locked away in government labs, given the vast potential for good they will bring, especially as they become democratized. Who knows what kid in Jaipur or what grandmother in Milwaukee, hacking away at synbio, will make the game-changing cancer-fighting breakthrough we’ve all been hoping for? But it is just as likely that among the masses will be those few bad actors who can use the same technologies to create a global pandemic. This should give us pause. We should be thinking more deeply and seriously about our use of exponential technologies, their downsides, and the potential for harm they may bring.

  Although space attacks, evil AI, and gray goo may be low on our list of personal priorities, far below the rush to pick up the kids at school, there is a host of threats that demand our immediate attention. The critical infrastructures that run the world, from our energy grids to the financial markets, are under persistent attack, leaving us with a global information grid that is readily susceptible to a systemic crash. At the same time, the volume of data we are producing about ourselves and the things around us is growing at an exponential rate, raising deep questions about our privacy and the ethical implications of what becomes possible with big data and an emerging surveillance society. These data can be hacked and projected onto the ever-growing number of screens in our lives to portray “realities” that are in fact falsehoods. This lack of trustworthy computing is further exacerbated by the ease with which black-box algorithms can distort our reality in ways barely perceptible, their secrets known only to those who program them behind closed doors, beyond the scrutiny of the masses.

  Mobile computing and an Internet that will grow from the metaphoric size of a golf ball to the size of the sun are just over the horizon, and soon every physical object may be connected online and assigned an IP address. But more things online means more things to hack, giving bad actors access to increasingly intimate parts of our lives, from our bedrooms to our own bodies, as biology becomes integrated with information technology. And at every step of the way, criminals, terrorists, and rogue governments stand ready to exploit our common technical insecurity through the sweeping flaws that persist in today’s software and hardware systems. These illicit knowledge workers of the twenty-first century are deeply innovative, adaptive, and ever learning, and they employ the latest business practices, from crowdsourcing to affiliate marketing, to subvert the technologies around us.

  Advances in computing and artificial intelligence mean that crime has now become scripted, run algorithmically, to much greater effect and with far fewer human beings required. Worse, the tools we have available to detect these threats are woefully inadequate. With 95 percent of new malware threats going undetected and the time to discovery of an intruder in our corporate networks hovering around 210 days, it is clear that any of our systems can be penetrated at will by those who have the time and inclination to do so. Indeed, not much time is required at all, as the Verizon–Secret Service study demonstrated: 75 percent of all computer systems can be penetrated in mere minutes, and only 15 percent require more than a few hours to hack.

  The impact of these threats will be felt more profoundly as cyber crime goes 3-D, with billions more objects connected to the Internet of Things, an emerging online world that is itself eminently hackable and may be even less secure than our existing laptops and smart phones. The risks of three-dimensional computing, embodied by the rise of robotics, mean that we are creating machines with the ability to outrun and overpower us, made all the more formidable by their capacity to act in unison, working as a swarm to accomplish their goals. This is a troubling development given the increasing physical prowess of the growing legions of armed flying, walking, or swimming military robots, most equipped with artificial intelligence systems to guide them and some imbued with the lethal autonomy to make “kill decisions” for us. The cyber threat is thus morphing from a purely virtual problem into a physical-world danger. The result, as we have seen throughout this book, is that science fiction is becoming science fact before our very eyes.

  With the advent of the Internet and the imminent arrival of the billions of additional connections afforded by the IoT and its sensors, our planet has developed an ever-expanding nervous system. It links our communications, our thoughts, and even our bodies to an online global brain of tremendous complexity controlled by a plethora of software systems and networking protocols, each of which can readily be exploited by those who wish to do us harm. Regrettably, the immune system protecting this global nervous system is weak and under persistent attack. The consequences of its failure cannot be overstated. As a result, it is time to start designing, engineering, and building much more robust systems of self-protection—safeguards that can grow and adapt as rapidly as new technological threats are emerging into our world. Though it’s easy to focus solely on the abundant benefits technology brings into our lives, we ignore the accompanying risks at our own peril.

  We are now living in an exponential age, and yet physiologically our brains are still those of Stone Age hunters, barely upgraded in the past fifty thousand years: it is not in our nature to grasp the inherent power of exponential technologies. But try we must. For just as the creatures living in the proverbial lily-pad-covered pond mentioned earlier were under threat from exponential change, so are we. For the students in France who were warned they had thirty days to act to save the pond, on day 25 there was scarcely anything to be concerned about, because the lily covered only 3 percent of the pond’s surface, so they let it grow. As we know, by day 29 the lily had miraculously grown to cover half the water, but by then there was precious little time left to save the pond, which was strangled by the lily the very next day. Today the totality of our technological insecurity may seem easy to ignore. Sure, a few million accounts may be hacked here, and a billion passwords stolen there, but we have time. Drones, pacemakers, air traffic control, cars, streetlights, navigation systems, MRI machines—all hacked. But we have time. Tens of billions of new objects to be added to the Internet, but we have time. Don’t we?
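  The lily-pond numbers follow directly from daily doubling: if the pond is fully covered on day 30, then coverage on day n is one half raised to the number of days remaining. A minimal sketch of the parable’s arithmetic:

```python
# Daily doubling: full coverage on the final day implies coverage 2**(day - final_day).
def coverage(day: int, final_day: int = 30) -> float:
    """Fraction of the pond covered on a given day, assuming daily doubling."""
    return 2.0 ** (day - final_day)


print(f"Day 25: {coverage(25):.1%}")  # 3.1% — easy to ignore
print(f"Day 29: {coverage(29):.0%}")  # 50% — one day from total coverage
```

  The deceptive part is that more than 96 percent of the growth happens in the last five days, which is exactly when it is too late to act.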

  The writing is on the wall. Technology is leaving us increasingly connected, dependent, and vulnerable. Though the myriad scientific breakthroughs enabled by exponential technology promise great and untold benefits for humanity, they must be guided and protected from those who would exploit them to harm others. We ignore the overwhelming evidence of technological risk around us at our own peril. Day 29 is rapidly approaching. What are we going to do about it?

  PART THREE

  Surviving Progress

  CHAPTER 17

  Surviving Progress

  For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring.

  CARL SAGAN

  It has been a rough ride. We’ve been asked to consider difficult and often uncomfortable questions about technology and the role of omnipresent machines in our lives, devices that we have unquestioningly welcomed into our homes, our offices, our cities, and even our own bodies. This journey has led us to take a piercing and critical look at the ever-growing number of computer screens proliferating in our world, screens we’ve turned around 180 degrees to show the other side of the story, the peril as well as the promise in our love affair with technology. Our growing interconnectedness and the ubiquity of inherently vulnerable computing systems mean that this gathering storm of technological insecurity can no longer be ignored.

  The problem of course is not that technology is bad but that so few understand it. As a result, the computer code that runs our planet can be subverted and used against us by those who do. Exponential times are leading to exponential crimes, ones in which lone individuals with ill intent can reach out and have a negative effect on tens of millions anywhere at any time. Indeed, the entire range of critical information infrastructures that run our society is at risk. These challenges will become greatly exacerbated as billions of new objects go online and networked computers in the form of robots begin moving about the physical space they will share with us, to say nothing of the risks from artificial intelligence and synthetic biology. It all seems daunting and overwhelming, but it is only by first understanding and acknowledging these threats that we can begin to make the changes required to bolster the foundations of our technological tomorrow.

  There are no easy fixes to the situation in which we currently find ourselves. No panacea or single “just add water and mix” solution that will make it all better. It took billions of individual steps to get us into this predicament, and it may take billions more to get us out. The asymmetric nature of the threat means attackers only need to find a single weakness, while defenders must guard against them all, a veritable impossibility. That said, all is not lost, nor are things hopeless. We don’t need, nor will we ever attain, “perfect security.” Such a thing does not exist. But the near-total absence of trustworthy computing in a world run by computers should serve as a flashing red warning light to us all.

  That science and technology have been a net positive for humanity there is no doubt. Yet, in order to thrive in the coming century, we must first survive the technological risks this progress inevitably brings. There are actions we must take today, important course corrections, to head off the dangerous future looming before us. In the pages that follow are a variety of technical, organizational, educational, and public policy recommendations, both strategic and tactical, meant to lessen the exponentially growing risks posed by technology. Of the myriad steps we must take to protect our technological future, I believe the following to be the most important. Technology is here to stay, and there is no turning back. The key question is how to harness these tools to achieve the maximum possible good while minimizing their downsides. Here’s how we might survive progress.

 
