Bossert paused. “I’d have to think about whether I mean that,” he responded, slowing down. “I don’t think I believe that. I don’t subscribe to that policy.
“In the case of war, we reserve the right to do whatever is in our self-interest and within the law of armed conflict,” he continued, echoing the point that Michael Daniel had expressed to me months before. “If you and I put ourselves in the Captain America chair and decide to go to war with someone, we might turn off their power and communications to give ourselves a strategic and tactical advantage. In fact, it’s even condoned in the law of war to conduct all sorts of sabotage against the enemy.”
But these blackouts weren’t aimed at achieving tactical military gains, I pointed out. They were targeted well beyond the front lines and intended to intimidate civilians.
“Agreed, and I do not condone them,” Bossert said. “But put yourself in Putin’s perspective.” Putin was willing to send little green men into Ukraine, to shoot down planes, to hack power grids, Bossert noted. All of that was justified, in Putin’s view, by his original, dubious rationale for the invasion of Ukraine.
“If a similar hypothetical situation confronted the U.S., and if we similarly didn’t care what the international opinion was, meaning we had reached the conclusion it was in our national self-defense interest, we might easily do the same,” Bossert said. “We would shoot down airplanes if we were at war with someone. We would take down power. We would do all those things. The difference here becomes whether Putin was justified militarily being in the Ukraine. We all believe he wasn’t.”
Only after Bossert finished his lunch, shook my hand, and jumped in a cab to Penn Station did I manage to mentally unwrap the layers of policy he’d put forward: Putin’s invasion of Ukraine broke the rules. So did the sloppy, reckless destruction NotPetya inflicted as part of that invasion, but on different grounds. But those rules drew red lines that still preserved the ability to carry out all manner of cyberattacks on civilian critical infrastructure.
If any nation were instead to aim its cyberattacks carefully and start a war for the right reasons, against the right country, those red lines would offer no impediment. In that future cyberwar, in other words, the ends would justify the means.
* * *
On November 9, 2017, Microsoft’s president, Brad Smith, stood before a crowd at the United Nations building in Geneva and reminded them of a particular thread of their city’s history. A century and a half earlier, a dozen countries had met in Geneva to hammer out an agreement that they would no longer kill one another’s medical personnel on the battlefield. Over the next century, a growing group of nations would meet three more times, culminating in the signing of the Fourth Geneva Convention in that very spot, setting down the basic protections for noncombatants in wartime that the world largely abides by today.
“It was here in Geneva in 1949 that the world’s governments came together and pledged that they would protect civilians even in times of war,” Smith said. “And yet let’s look at what is happening. We’re seeing nations attack civilians even in times of peace.”
Smith walked through the cybersecurity disasters that had racked the globe in just the prior months: first WannaCry, then NotPetya. Back-to-back acts of state-sponsored hacking had called into question the fundamental security of human infrastructure worldwide—from hospitals to manufacturing to shipping—just as the rifle-and-artillery horrors of the Battle of Solferino in 1859 had brought attention to the need to create what would ultimately become the Red Cross, and World War II had shown the need to protect civilians.
“We live in a world where the infrastructure of our lives is ultimately vulnerable to the weakest link,” Smith told the crowd, notably skipping the fact that some of the weakest links in both of the cyberattacks he’d mentioned had been security flaws in Microsoft’s own Windows operating system. “It’s clear where the world is going. We’re entering a world where every thermostat, every electrical heater, every air conditioner, every power plant, every medical device, every hospital, every traffic light, every automobile will be connected to the Internet. Think about what it will mean for the world when those devices are the subject of attack.”
Then he made his pitch. “The world needs a new, digital Geneva Convention. It needs new rules of the road,” Smith said, intoning the words slowly for emphasis. “What we need is an approach that governments will adopt that says they will not attack civilians in times of peace, they will not attack hospitals, they will not attack the electrical grid, they will not attack the political processes of other countries.”
Smith’s speech was, perhaps, the broadest, most public articulation of an ideal I’d heard stated for years, most notably by Richard Clarke, a former counterterrorism and national security adviser to three presidents, whose 2010 book Cyber War had advocated a “Cyber War Limitation Treaty.” Clarke’s imagined treaty would ban “first-use” cyberattacks on critical infrastructure and even forbid planting sabotage malware on targets like power grids, railroads, and financial institutions.
The cyberwar doves’ position boiled down to Rob Lee’s maxim: No one, anywhere, should be hacking anyone else’s civilian critical infrastructure. For those who’d been in the trenches of the recent cyberattacks spilling out from Ukraine, it seemed obvious: the world needed new red lines beyond the ones I’d heard from officials like Michael Daniel and Tom Bossert. It needed straightforward new norms, enshrined in international law, limiting the use of a powerful and dangerous new class of weapon before it cost human lives or crippled entire societies.
But it wouldn’t be so simple. “I think there’s room for a set of agreed-upon rules in cyberspace,” Bossert told me in a follow-up phone call after our meeting, when I’d brought up the Geneva Convention idea. “But it’s hard to imagine all the caveats I’d have to place on that.”
Countries frequently probe each other’s infrastructure or even infect it with malware but stop short of pulling the sabotage trigger, Bossert pointed out. Would those probes violate the letter of the hypothetical new rules? “I just want to make sure whoever writes the rules understands what they’re trying to sign up to,” Bossert said. “If they interpret ‘attack’ to mean scanning and taking control but not actually turning the lights out, we might be going to war unnecessarily. I’ve lived in that gray zone too much.”
But there was a more fundamental roadblock to a digital Geneva Convention, according to Joshua Corman, who was at the time of Smith’s speech the director of the Cyber Statecraft Initiative at the Atlantic Council: Countries like the United States still think they benefit more from their own ability to wage cyberwar than they would from depriving their enemies of that power. “There’s no appetite to go straight to the Geneva Convention. None,” Corman told me. “The Microsoft thing is dead on arrival, because there’s no way we’re going to give up that freedom of movement.”
American officials, Corman explained, still look at the NSA’s superior capabilities and believe that cyberwar favors those with the best offense. What they don’t consider is the degree to which the West has become dependent on the internet and automation—vastly more than adversaries like North Korea or even Russia. “As one of the most connected nations, we’re more dependent and more exposed,” Corman said. “And we stand to lose much more.”
Instead of a full Geneva-style answer, though, Corman advocated a narrower set of rules: no cyberattacks on hospitals, for instance—what he called a “cyber no-fly zone” around medical targets. “Fine, just say hacking hospitals, deliberately or otherwise, constitutes a war crime,” Corman said. “Cyber’s gonna cyber, but you better be damn flawless in your execution. You fuck up and hit a hospital, you get the international war crime, you’re going to The Hague.”
Of course, any debate over those diplomatic measures remains academic. Russia, China, North Korea, and Iran have no intention of giving up their cyberweapons. The Trump administration, too, has seemed determined to move in the opposite direction from hacker pacifism. In 2017, Trump announced he would elevate the authority of the Pentagon’s Cyber Command, and the next year he quietly expanded that cyberwar force’s mandate to preemptively attack foreign targets if it believed they were planning to strike the United States. Three months later, Trump reversed an Obama administration directive that had required a complex set of federal agencies to sign off on any offensive hacking operation.
All of that followed through on a campaign promise Trump had made in October 2016, before his election. “As a deterrent against attacks on our critical resources, the United States must possess, and has to, the unquestioned capacity to launch crippling cyber counter attacks,” Trump told a crowd at a speech to military veterans in Virginia. “I mean crippling. Crippling.”
A digital Geneva Convention remains a nice dream. In the meantime, the American government looks more likely to follow the most reflexive, primitive response to a cyberwar arms race: escalation.
41
BLACK START
On a wet day in early November 2018, a power utility engineer named Stan McHann was walking along a road on the southeastern coast of Plum Island, a tiny three-by-one-mile strip of land off the tip of Long Island’s North Fork. He looked to his left, out to the expanse of the Atlantic Ocean, and felt a rare moment of peace in what had been a supremely rough week.
McHann and his colleagues had been fighting off a team of devious hackers who had proven themselves fiercely determined to take down their grid—and keep it down. He’d been locked in combat with the intruders for days, scrambling between distribution substations, sometimes in the midst of sixty-knot winds and sideways rain, to bypass corrupted digital equipment and diagnose problems. Each time the hackers seemed to have been expelled, they’d find another way to inject a new round of mayhem, sending McHann back out into the storm.
Just before 9:00 that morning, all his substations finally seemed to be back online. Out of an abundance of paranoia, McHann had decided to check them anyway, walking out of the utility’s dispatch center near the north of the island and down the coastal road. That’s when he heard a very particular sound.
“It was a bam bam bam bam bam bam bam,” as McHann later described it to me. Seven pops like the report of a small-caliber gun, ringing out in succession across the island’s landscape. Each “bam,” he knew immediately, was a circuit breaker slamming open. A startled Con Edison engineer walking with him asked what the strange and terrible noise had been. McHann answered, “That’s all your power going off.”
This disaster situation was not, thankfully, what it sounds like. McHann and his fellow engineers weren’t fighting off the first-ever cyberattack to trigger a blackout on American soil. Instead, they were in the midst of a disturbingly realistic simulation of that dreaded scenario, defending a custom-built, isolated grid from a “red team” of skilled Department of Defense contract hackers in an exercise designed to let his “blue team” feel the pain of a utility-targeted cyberattack without inflicting that pain on American civilians.
The Plum Island test grid had been constructed by the Defense Advanced Research Projects Agency, or DARPA, the experimental arm of the Pentagon charged with developing technologies to fight future wars. DARPA is famously credited with inventing the internet and, in recent decades, with helping to develop other world-changing technologies like GPS and unmanned aerial vehicles. On Plum Island, the agency had set out to find the tools that would allow electric utilities to fight off highly sophisticated hackers. And to test them, it had dropped those utilities’ engineers into a worst-case scenario: one where they were tasked not with merely keeping the power on but with turning it back on after digital adversaries had already blacked out a grid for days.
As the red team hackers dragged his utility back to that blackout state, McHann was discovering just how painful that recovery could be. “Your heart sinks and your stomach falls through the ground,” he said, describing the feeling of starting over yet again. “Then you suck it up and get back to work.”
* * *
A few days before McHann had heard those seven shots ring out across Plum Island, DARPA had kicked off its cyberattack war game with an elaborate setup: Two utilities had been assembled on the island, with a remote dispatch center and sixteen transmission substations housed in shipping containers dotted across the landscape. One utility started the exercise fully dark; in the game’s panic-inducing scenario, hackers had kept the power off for so long—weeks or even months—that all its generators were down and even its backup batteries were entirely depleted.
The second utility started the week with one diesel generator connected only to a “critical asset,” which the blue team was told it must keep powered at all times. That asset, a crumbling building near the south of Plum Island that the Pentagon had once used as a laboratory for germ warfare, represented in the world of DARPA’s simulation something like a hospital or a defense command center—an imaginary, nonnegotiable consumer of power that’s required to save lives or win a war. To allow the participants to see from a distance whether the critical asset was powered, a series of inflatable dancing wind-sock figures had been plugged in just outside the building, giving it more the look of a used-car dealership than of an imperiled hospital.
The participants, engineers drawn from utilities around the country and cybersecurity researchers who submitted proposals to DARPA, were told they must perform a so-called black start. That meant bootstrapping one blacked-out utility’s grid from scratch by spinning up its diesel generator, then building out a path of electrical distribution from both utilities’ generators to their substations, and finally syncing the island’s two utilities to create a redundant power source for the critical asset.
On the first day of the exercise, the utility engineers quickly discovered just how comprehensively they’d have to rethink their approach to running a power grid after it had been fully hijacked by digital saboteurs. Some senior utility operators had begun by telling their teammates that they would restart the grid “by the numbers,” McHann said. These engineers believed they could use their remote readings from networked, digital equipment to power up the grid just as they would after any natural disaster. “They were pretty sure it was going to be another hurricane-training scenario,” he said drily.
Within twenty-four hours, according to McHann, those naive operators had learned that the straightforward approach didn’t work when every computer lied to them. The industrial control system software that the utility operators were accustomed to drew its readings of current and voltage from power equipment and displayed them in the dispatch center on computers known as human-machine interfaces. But that software had now been fully penetrated by the hackers and offered only wildly inaccurate or even deceptive answers.
Worse, the operators soon discovered that not only those remote readings but even the panels on the equipment couldn’t be trusted. “They knocked out routers, mucked with the data on screens, tripped breakers, routed power wrong,” said McHann of the phantom hackers tormenting them. “You name it, it was coming at us.”
The utility defenders tried to push the hackers out of their systems and rebuild them, only to find that the attackers would tenaciously dig their way back in again. “While we were cleaning things up, the adversary was countering our moves,” one cybersecurity researcher, Stan Pietrowicz, told my Wired colleague Lily Hay Newman, who visited the island during the exercise, in the midst of one of its punishing rainstorms. On the third day, just as the defenders had almost restarted the entire grid, the attackers took down a key substation, throwing them back into chaos. “Even that small victory got taken away from us,” Pietrowicz lamented.
Once the utility engineers conceded to their cybersecurity researcher teammates that their traditional computers were hopeless, they resorted to experimental tools designed to bypass the hacked network. Engineers eventually walked into each substation and clamped sensors directly onto the power equipment. They then linked those sensors into a “mesh” network built from portable computers—black boxes the size of a desktop PC—that talked to one another over encrypted channels protected from the rest of the utility’s infections.
Communicating via that encrypted network and by voice over the phone, the utility operators were finally able to make some progress toward stability. In the very last hours of the weeklong exercise, they briefly managed to sync the two utilities, though there was no guarantee the hackers wouldn’t have taken the link down again had the game continued.
Meanwhile, the red team had scored a different sort of victory: Twice, they had managed to take down the power to the “critical asset” the blue team had been ordered to protect. On both occasions, the inflatable sock-men had fallen limp on a concrete ramp outside the building, casualties of a conflict against an insidious, highly persistent enemy.
* * *
Hearing the experience of DARPA’s guinea pigs, I was reminded of something Rob Lee had said to me a year and a half earlier, not long after the second Ukrainian blackout. We’d just met for the first time at the bare-bones Baltimore headquarters of his newly formed start-up, Dragos. Outside the window of his office, appropriately, loomed a series of pylons holding up transmission lines that carried power eighteen miles south to Washington, D.C.
“Taking down the American grid would be harder than Ukraine,” Lee had told me at the time. “Keeping it down might be easier.”