Films from the Future


by Andrew Maynard


  Amazingly, we are already moving closer to some of the sensing technology that Kennedy envisions in 2051. In 2016, researchers at the University of California, Berkeley announced they had built a millimeter-sized wireless neural sensor that they dubbed “neural dust.” Small numbers of these, it was envisaged, could be implanted in someone’s head to provide wireless feedback on neural activity from specific parts of the brain. The idea of neural dust is still at a very early stage of development, but it’s not beyond the realm of reason that these sensors could one day be developed into sophisticated wireless brain interfaces.96 And so, while Kennedy’s sci-fi story stretches credulity, reality isn’t as far behind as we might think.

  There’s another side of Kennedy’s story that is relevant here, though. 2051 is set in a future where artificial intelligence and “nanobots” (which we’ll reencounter in chapter nine) have become a major threat. In an admittedly rather silly plotline, we learn that the real-life futurist and transhumanist Ray Kurzweil has loaned China nanobots that combine advanced artificial intelligence with the ability to self-replicate. These proceed to take over China and threaten the rest of the world. And they have the ability to hack into and manipulate wired-up brains. Because everything that these brains experience comes through their computer connections, the AI nanobots can effectively manipulate someone’s reality with ease, and even create an alternate reality that they are incapable of perceiving as not being real.

  The twist in Kennedy’s tale is that the fictitious nanobots simply want global peace and universal happiness. And the logical route to achieving this, according to their AI hive-mind, is to assimilate humans, and convince them to become part of the bigger collective. It’s all rather Borg-like if you’re a Star Trek fan, but with a benevolent twist.

  Kennedy’s story is, admittedly, rather fanciful. But he does hit on what is probably one of the most challenging aspects of having a fully connected brain, especially in a world where we are ceding increasing power to autonomous systems: vulnerability to hacking.

  Some time ago, I was speaking with a senior executive at IBM, and he confessed that, from his elevated perspective, cybersecurity is one of the greatest challenges we face as a global society. As we see the emergence of increasingly clever hacks on increasingly powerful connected systems, it’s not hard to see why.

  Cyberspace—the sum total of our computers, the networks they form, and the virtual world they represent—is unique in that it’s a completely human-created dimension that sits on top of our reality (a concept we come back to in chapter nine and the movie Transcendence). We have manufactured an environment that quite literally did not exist until relatively recently. It’s one where we can now build virtual realities that surpass our wildest dreams. And because, in the early days of computing, we were more interested in what we could do rather than what we should (or even how we should do it), this environment is fraught with vulnerabilities. Not to put too fine a point on it, we’ve essentially built a fifth dimension to exist in, while making up the rules along the way, and not worrying too much about what could go wrong until it was too late.

  Of course, the digital community learned early on that cybersecurity demanded at least as much attention to good practices, robust protocols, smart design, and effective governance as any physical environment, if people weren’t going to get hurt. But certainly, in the early days, this was seasoned with the idea that, if everything went pear-shaped, someone could always just pull the plug.

  Nowadays, as the world of cyber is inextricably intertwined with biological and physical reality, this pulling-the-plug concept seems like a quaint and hopelessly outmoded idea. Cutting off the power simply isn’t an option when our water, electricity, and food supplies depend on cyber-systems, when medical devices and life-support systems rely on internet connectivity, when cars, trucks, and other vehicles cannot operate without being connected, and when financial systems are utterly dependent on the virtual cyber worlds we’ve created.

  It’s this convergence between cyber and physical realities that is massively accelerating current technological progress. But it also means that cyber-vulnerabilities have sometimes startling real-world consequences, including making everything from connected thermostats to digital pacemakers vulnerable to attack and manipulation. And, not surprisingly, this includes brain-machine interfaces.

  In Ghost in the Shell, this vulnerability leads to ghost hacking, the idea that if you connect your memories, thoughts, and brain functions to the net, someone can use that connection to manipulate and change them. It’s a frightening idea that, in our eagerness to connect our very soul to the net, we risk losing ourselves, or worse, becoming someone else’s puppet. It’s this vulnerability that pushes Major Kusanagi to worry about her identity, and to wonder if she’s already been compromised, or whether she would even know if she had been. For all she knows, she is simply someone else’s puppet, being made to believe that she’s her own person.

  With today’s neural technologies, this is a far-fetched fear. But still, there is near-certainty that, if and when someone connects a part of their brain to the net, someone else will work out how to hack that connection. This is a risk that far transcends the biological harms that brain implants and neural nets could cause, potentially severe as these are. But there’s perhaps an even greater risk here. As we move closer to merging the biological world we live in with the cyber world we’ve created, we’re going to have to grapple with living in a world that hasn’t had billions of years of natural selection for the kinks to be ironed out, and that reflects all the limitations and biases and illusions that come with human hubris. This is a world wherein human-made monsters lie waiting for us to stumble on them. And if we’re not careful, we’ll be giving people a one-way neurological door into it.

  Not that I think this should be taken as an excuse not to build brain-machine interfaces. And in reality, it would be hard to resist the technological impetus pushing us in this direction. But at the very least, we should be working with maps that say in big bold letters, “Here be monsters.” And one of the “monsters” we’re going to face is the question of who has ultimate control over the enhanced and augmented bodies of the future.

  Your Corporate Body

  If you have a body augmentation or an implant, who owns it? And who ultimately has control over it? It turns out that if you purchase and have installed a pacemaker or implantable cardioverter defibrillator, an artificial heart, or some other life-giving and life-saving device, who can do what with it isn’t as straightforward as you might imagine. As a result, augmentation technologies like these raise a really tricky question—as you incorporate more tech into your body, who owns you? We’re still a long way from the body augmentations seen in Ghost in the Shell, but the movie nevertheless foreshadows questions that are going to become increasingly important as we continue to replace parts of our bodies with machines.

  In Ghost, Major Kusanagi’s body, her vital organs, and most of her brain are manufactured by the company Megatech. She’s still an autonomous person, with what we assume is some set of basic human rights. But her body is not her own. In a conversation with her colleague Batou, the two reflect that, if she were to leave Section 9, she would need to leave most of her body behind. Despite the illusion of freedom, Kusanagi is effectively in indentured servitude to someone else by virtue of the technology she is constructed from.

  Even assuming that there are ethical rules against body repossession, Kusanagi is dependent on regular maintenance and upgrades. Miss a service, and she runs the risk of her body beginning to malfunction, or becoming vulnerable to hacks and attacks. In other words, her freedom is deeply constrained by the company that owns her body and the substrate within which her mind resides.

  In 2015, Hugo Campos wrote an article for the online magazine Slate with the sub-heading, “I can’t access the data generated by my implanted defibrillator. That’s absurd.”97 Campos had a device inserted into his body—an implantable cardioverter defibrillator, or ICD—that constantly monitored his heartbeat, and that would jump-start his heart were it to falter. Every seven years or so, the implanted device’s battery runs low, and the ICD needs to be replaced, in what’s referred to as a “generator changeout.” As Campos describes, many users of ICDs use this as an opportunity to upgrade to the latest model. And in his case, he was looking for something specific with the changeout: an ICD that would allow him to personally monitor his own heart.

  This should have been easy. ICDs are internet-connected these days, and regularly send the data they’ve collected to healthcare providers. Yet patients are not allowed access to this data, even though it’s generated by their own body. Campos’ solution was to purchase an ICD programmer off eBay and teach himself how to use it. He took the risk of flying close to the edge of legality to get access to his own medical implant.

  Campos’ experience foreshadows the control and ownership challenges that increasingly sophisticated implants and cyber/machine augmentations raise. As he points out, “Implants are the most personal of personal devices. When they become an integral part of our organic body, they also become an intimate part of our identity.” And by extension, without their ethical and socially responsive development and use, a user’s identity becomes connected to those that have control over the device and its operations.

  In the case of ICDs, manufacturers and healthcare providers still have control over the data collected and generated by the device. You may own the ICD, but you have to take on trust what you are told about the state of your health. And you are still beholden to the “installers” for regular maintenance. Once the battery begins to fail, there are only so many places you can go for a refit. And unlike a car or a computer, the consequence of not having the device serviced or upgraded is possible death. It’s almost like being locked into a phone contract where you have the freedom to leave at any time, but contract “termination” comes with more sinister overtones. Almost, but not quite, as it’s not entirely clear if users of ICDs even have the option to terminate their contracts.

  In 2007, Ruth and Tim England and John Coggon grappled with this dilemma through the hypothetical case of an ICD in a patient with terminal cancer.98 The hypothetical they set up was to ask who has the right to deactivate the device, if constant revival in the case of heart failure leads to continued patient distress. The scenario challenges readers of their work to think about the ethics of patient control over such implants, and the degree of control that others should have. Here, things turn out to be murkier than you might think. Depending on how the device is classified—whether it is considered a fully integrated part of the body, for instance, or an ongoing medical intervention—there are legal ramifications to who does what, and how. If, for instance, an ICD is considered simply as an ongoing medical treatment, the healthcare provider is able to decide on its continued use or termination, based on their medical judgment, even if this is against the wishes of the patient. In other words, the patient may own the ICD, but they have no control over its use, and how this impacts them.

  On the other hand, if the device is considered to be as fully integrated into the body as, say, the heart itself, a physician will have no more right to permanently switch it off than they have the right to terminally remove the heart. Similarly, the patient does not legally have the right to tamper with it in a way that will lead to death, any more than they could legally kill themselves.

  In this case, England and colleagues suggest that intimately implanted devices should be treated as a new category of medical device. They refer to these as “integral devices” that, while not organic, are nevertheless a part of the patient. They go on to suggest that this definition, which lies somewhere between the options usually considered for ICDs, will allow more autonomy on the part of patient and healthcare provider. And specifically, they suggest that “a patient should have the right to demand that his ICD be disabled, even against medical advice.”

  England’s work is helpful in thinking through some of the complexities of body implant ethics. But it stops far short of addressing two critical questions: Who has the right to access and control augmentations designed to enhance performance (rather than simply prevent death), and what happens when critical upgrades or services are needed?

  This is where we’re currently staring into an ethical and moral vacuum. It might not seem such a big deal when most integrated implants at the moment are health-protective rather than performance-enhancing. But we’re teetering on the cusp of technological advances that are likely to sweep us toward an increasingly enhanced future, without a framework for thinking about who controls what, and who ultimately owns who you are.

  This is very clear in emerging plans for neural implants, whether it’s Neuralink’s neural lace or other emerging technologies for connecting your brain to the net. While these technologies will inevitably have medical uses—especially in treating and managing neurological conditions like Parkinson’s disease—the expectation is that they will also be used to increase performance and ability in healthy individuals. And as they are surgically implanted, understanding who will have the power to shut them down, or to change their behavior and performance, is important. As a user, will you have any say in whether to accept an overnight upgrade, for instance? What will your legal rights be when a buggy patch leads to a quite-literal brain freeze? What happens when you’re given the choice of paying for “Neuralink 2.0” or keeping an implant that is no longer supported by the manufacturer? And what do you do when you discover your neural lace has a hardware vulnerability that makes it hackable?

  This last question is not idle speculation. In August 2016, the short-selling firm Muddy Waters Capital LLC released a report claiming that ICDs manufactured by St. Jude Medical, Inc. were vulnerable to potentially life-threatening cyberattacks.99 The report claimed:

  “We have seen demonstrations of two types of cyber-attacks against [St Jude] implantable cardiac devices (‘cardiac devices’): a ‘crash’ attack that causes cardiac devices to malfunction—including by apparently pacing at a potentially dangerous rate; and, a battery drain attack that could be particularly harmful to device dependent users. Despite having no background in cybersecurity, Muddy Waters has been able to replicate in-house key exploits that help to enable these attacks.”

  St. Jude vehemently denied the accusations, claiming that they were aimed at manipulating the company’s value (the company’s stock price tumbled as the report was released). Less than a year later, St. Jude was acquired by medical giant Abbott. But shortly after this, hacking fears led to the US Food and Drug Administration recalling nearly half a million St. Jude-manufactured pacemakers100 due to an identified cybersecurity vulnerability.

  Fortunately, there were no recorded cases of attacks in this instance, and the fix was a readily implementable firmware update. But the case illustrates just how vulnerable web-connected intimate body enhancements can be, and how dependent users are on the manufacturer. Obviously, such systems can be hardened against attack. But the reality is that the only way to be completely cyber-secure is to have no way to remotely connect to an implanted device. And increasingly, this defeats the purpose for why a device is, or might be, implanted in the first place.

  As in the case of the St. Jude pacemaker, there’s always the possibility of remotely applied patches, much like the security patches that seem to pop up with annoying frequency on computer operating systems. With future intimate body enhancements, there will almost certainly be a continuing duty of care from suppliers to customers to ensure their augmentations are secure. But this in turn ties the user, and their enhanced body, closely to the provider, and it leaves them vulnerable to control by the providing company. Again, this brings to mind the scenario of what happens when you, as an enhanced customer, have the choice of keeping your enhancement’s buggy, security-vulnerable software, or paying for the operating system upgrade. The company may not own the hardware, but without a doubt, they own you, or at least your health and security.

  Things get even more complex as the hardware of implantable devices becomes outdated, and wired-in security vulnerabilities are discovered. On October 21, 2016, a series of distributed denial of service (DDoS) attacks occurred around the world. Such attacks use malware that hijacks computers and other devices and redirects them to swamp cyber-targets with massive amounts of web traffic—so much traffic that they effectively take their targets out. What made the October 21 attacks different is that the hijacked devices were internet-connected “dumb devices”: home routers, surveillance cameras, and many others with a chip allowing them to be connected to the internet, creating an “Internet of Things.” It turns out that many of these devices, which are increasingly finding their way into our lives, have hardware that is outdated and vulnerable to being co-opted by malware. And the only foolproof solution to the problem is to physically replace millions—probably billions—of chips.

  The possibility of such vulnerabilities in biologically intimate devices and augmentations places a whole new slant on the enhanced body. If your enhancement provider has been so short-sighted as to use attackable hardware, who’s responsible for its security, and for physically replacing it if and when vulnerabilities are discovered? This is already a challenge, although thankfully tough medical device regulations have limited the extent of potential problems here so far. Imagine, though, where we might be heading with poorly-regulated innovation around body-implantable enhancements that aren’t designed for medical reasons, but to enhance ability. You may own the hardware, and you may have accepted any “buyer beware” caveats it came with. But who effectively owns you, when you discover that the hardware implanted in your legs, your chest, or your brain, has to be physically upgraded, and you’re expected to either pay the costs, or risk putting your life and well-being on the line?
