The DARPA exercise seemed to bear out that idea: American utility operators, more than Ukrainians, have learned to manage the generation and flow of power primarily through their computers and automated systems. Without those modern tools, they’re blinded. Ukrainian operators, by contrast, are far more accustomed to those tools’ failures, and thus ready to fall back on an analog option.
When Sandworm opened circuit breakers in utilities across western and central Ukraine, those utilities’ staffers were ready within hours to drive out in trucks to manually flip the switches at those substations. When I asked Stan McHann why the blue team in the DARPA simulation hadn’t simply disconnected devices like circuit breakers from automation and operated them manually, he told me that was, for some modern equipment, not even an option. “Some breakers have gotten so automated that they’re software-controlled only,” he said ruefully.
But even more than that lack of an analog fallback in American utilities, the DARPA exercise had illustrated a broader point Lee had made to me: that attacks on power grids or other industrial control systems could be far, far worse than what the world had seen thus far.
A future intrusion might target not a distribution or transmission station but an actual power plant. Or it could be designed not simply to turn off equipment but to destroy it, as Mike Assante’s Aurora experiment had demonstrated back in 2007. The massive, rotating generator killed in that proof-of-concept attack, after all, was safeguarded by the same protective relays that are found all over U.S. electrical systems, including transmission stations like the target of Sandworm’s Kiev attack. With the right protective relay exploit, it’s possible that someone could permanently disable power-generation equipment or the massive, often custom-made, multimillion-dollar transformers that serve as the backbone of the American electric transmission system.
Add in that destructive capability, and the dystopian scenarios start to expand well beyond the brief outages that Ukraine experienced, Lee told me. “Washington, D.C.? A nation-state could take it out for two months without much issue,” he’d said calmly.
An isolated incident of physical destruction might not even be the limit to the damage hackers could inflict. When the cybersecurity community talks about state-sponsored hackers, they often refer to them as “advanced, persistent threats”—sophisticated intruders who don’t simply infiltrate a system for the sake of one attack but stay there, silently keeping their hold on a target. If a victim is lucky enough to discover them and purge them from its systems—as the DARPA blue team had tried—it often finds that the hackers have left a backdoor for themselves in some obscure corner of its network and used it to silently take up residence again, like an infestation of hyperintelligent cockroaches.
In his nightmares, Lee had told me, American infrastructure is hacked with this kind of persistence: transportation networks, pipelines, or power grids taken down again and again by deep-rooted adversaries. “If they did that in multiple places, you could have up to a month of outages across an entire region,” he’d said. “Tell me what doesn’t change dramatically when key cities across half of the U.S. don’t have power for a month.”
* * *
A year and a half later, when I visited Dragos again, Rob Lee’s critical infrastructure cybersecurity firm had moved across town into a sleek industrial space. In one corner of the new office was a small “pipeline” system of plumbing and pumps, along with a closet full of programmable logic controllers. In the other corner was a full in-house industrial beer-brewing setup. All were intended to serve as targets for Dragos’s hacking demonstrations and training sessions—as well as a virtually unlimited supply of in-house IPA and stout. The business of protecting customers from industrial control system attacks seemed to be booming: Since my last meeting with Lee, his company had exploded from twenty-two employees to eighty-four and raised more than $48 million from investors.
I sat down with Lee at a conference table he'd had custom-made from a single oak trunk. He looked older and rounder than photographs from his air force days, with the bushy red beard of a Viking. His transition from a military maverick to a confident, eccentric CEO seemed to be complete. And he quickly made it clear that the fuel of his and Dragos's prosperity was a very real escalation in critical infrastructure hacking around the world.
“Nothing has gotten better,” he summarized. “When I last saw you, we were tracking three different groups targeting industrial sectors specifically. We’re tracking ten now.”
Those ten infrastructure-hacking teams, Dragos’s analysts believed, work in the service of six distinct governments, though Lee declined to list exactly which ones. “Russia, China, Iran, and North Korea are not the only actors in this space,” he hinted. “We’re tracking one African state targeting industrial sectors. All of this goes completely outside of what people tend to think.” And Lee estimated that despite Dragos’s extensive intelligence collection—now as the world’s largest cybersecurity incident response team focused on industrial control systems—they’d found less than half the active hacking operations infiltrating targets like grids, factories, pipelines, and water treatment facilities around the world.
Late 2017, in fact, saw another landmark in that mostly invisible conflict: Hackers of unknown origin hit the Saudi oil refinery Petro Rabigh with a piece of malware called Triton or Trisis, designed to disable so-called safety-instrumented systems, which monitor equipment for conditions that might lead to an explosion or chemical leak.
The result could have been a catastrophic lethal accident. Luckily, the malware simply triggered a shutdown of the plant. Though the hackers were widely suspected of working for Iran, FireEye in the fall of 2018 linked the malware to a lab at Russia’s Central Scientific Research Institute of Chemistry and Mechanics, suggesting Russian developers might have built this cyberweapon, too, though perhaps on behalf of another team of saboteurs. “The threat is becoming bolder,” Lee told me. “We’re seeing more aggressive, numerous actions than I’ve seen in ten-plus years of doing this.”
A primary reason for that escalation is the one that Lee had been harping on for years, long enough that his outrage had hardened into a kind of static, cynical anger: The U.S. government, and the West as a whole, have failed to set the norms that might keep the march toward cyberwar in check.
The same inaction, negligence, and focus on offense that spurred him to leave the intelligence community, and led him to burn so many of his bridges with government contacts, were now fueling a global digital arms race. In fact, they had become the primary source of growth for his very successful start-up. And he was still unhappy about it. “The government has largely abdicated its responsibility,” Lee concluded. The red lines had still not been drawn. “Our adversaries think they can get away with it.”
But Lee also viewed the continuing uptick in infrastructure hacking as a kind of self-perpetuating cycle: Every country’s intelligence agencies that witness another country’s hacking capabilities, he explained, immediately seek to match or overtake their foes. And Russia had demonstrated its blackout malware known as Crash Override or Industroyer more than two years earlier. Since then, it was safe to assume, Sandworm’s hackers had developed new ways to wreak havoc in the physical world.
“States like to have parity with each other. If you’re any state other than Russia or the U.S. at this point, you’re feeling like you’re really far behind,” Lee said. “There will be a rush for everyone to build these capabilities. And the losers will be civilian infrastructure owners.”
The powers of disruption Sandworm so recklessly displayed, in other words, weren’t an aberration. They’re merely the most visible model of a tool kit that every militarized nation and rogue state in the world might soon covet or possess: the new standard arsenal for a global cyberwar standoff.
42
RESILIENCE
Dan Geer lives in a one-story white house near the border between Tennessee and Alabama, surrounded by his two-hundred-acre farm. Together with his wife, he works that land, growing a strain of heirloom corn, garlic, dahlia blossoms, seeds for field peas, and a particular white bean called tarbais, which, he explained to me, any self-respecting chef requires to make a proper cassoulet.
In the other part of his professional life, he works as the chief information security officer for In-Q-Tel, a nonprofit organization that functions as something like a venture capital investor for U.S. spy agencies. "I find it somewhat productive to have one foot in the dirt and one foot in the intelligence community," he told me.
In-Q-Tel is tasked with investing in companies that both make money and advance the agenda of the CIA, NSA, FBI, and other three-letter agencies. Geer’s job, as the overseer of the firm’s cybersecurity portfolio, is to see the future of security, on behalf of intelligence agencies that pride themselves on seeing everything.
He fits the part of a professional prophet: At sixty-nine, age has bleached his voluminous muttonchop sideburns white, and he pulls his darker hair back into a ponytail. His reputation in the cybersecurity community warrants this Jedi master image. The Atlantic Council's Josh Corman described him to me in a reverent tone as "the elder statesman and philosopher" of cybersecurity. Geer emerges semiannually from the hinterlands to give pithy keynotes before hushed audiences at the world's biggest security conferences, like Black Hat and RSA. He's testified five times before Congress in hearings on national security and technological risk.
But for a futurist, Geer acts a lot like a Luddite. When I managed to speak to him after a few weeks of attempts, it was via a copper landline that connects to his house’s spiral-cord phone. His only cell phone is turned off and stored in the glove compartment of his rust-covered 2001 Ford F-150. He has no TV, and no radios in his house other than a windup one for emergencies. Even his tractor, an older Korean model, was chosen to minimize automation and software. “If you don’t pick up the latest fads, after a while you look like you’ve discarded modern life, but no, you just haven’t adopted it,” Geer explained. “I have no exposure.”
That last sentence in particular captures why I’d sought Geer out. It hints at a key fact about his digital asceticism: It’s not merely the result of a hermit’s inflexible habits. It’s also his way of living out his security principles. In a short paper published by the Stanford-based think tank the Hoover Institution in early 2018 titled “A Rubicon,” Geer made a case for examining an often forgotten variable in the equation of a society’s security against cyberattack: resilience.
Long before and after NotPetya, cybersecurity wonks, experts, and salesmen offered strategies to head off catastrophic cyberattacks: Write more secure software and patch it more conscientiously. Monitor networks with machine-learning-honed tools designed to spot intruders or their malicious software. Punish bad actors like Russia and North Korea.
But like the designers of the DARPA Plum Island exercise, Geer isn't focused on how to prevent the next massive, cascading security fiasco. Instead, he's determined to figure out how to recover from it quickly and limit its damage. "It may be time to no longer invest further in lengthening time between failures," as he put it to me, "but instead on shortening mean time to repair."
The key to that resilience, Geer had argued in his paper, is a sort of independence. “Because the wellspring of risk is dependence, aggregate risk is a monotonically increasing function of aggregate dependence,” Geer had written. Put more simply, a complex system like a digitized civilization is subject to cascading failures, where one thing depends on another, which depends on another thing. If the foundation fails, the whole tower tumbles. If the control systems are hacked, the power turns off, so the gas pumps don’t work, so the mail trucks stop, so the bread isn’t delivered—or a thousand such unpredictable outcomes flowing from myriad, mind-bending interdependencies too complex to compute.
Geer argued that the problem of potential cascading failures in computer systems might by some measures be more threatening to human life as we know it than even climate change. “Interdependence within society today is centered on the Internet beyond all other dependencies excepting climate, and the Internet has a time constant of change five orders of magnitude smaller than that of climate,” he wrote. Or, as he translated to me on the phone, dependence on a stable climate poses at least as much of an existential risk for humanity as dependence on stable computer networks. But a malicious actor doing his or her best to change the climate would need decades of pumping out carbon to do serious damage, while a malicious team of hackers could unleash chaos on the internet in a matter of minutes.
And how to protect society from those dangerous dependencies? “Quenching cascade failure, like quenching a forest fire, requires an otherwise uninvolved area to be cleared of the mechanisms of transit, which is to say it requires the opposite of interdependence,” Geer wrote.
Somehow, he argued, societies need to build or maintain backup systems that are disconnected from interdependent, fragile modern networks. Often, that means an analog alternative. Landline phones when cellular networks fail. Paper ballots that can be counted by hand if vote tallies are hacked. Utility operators like those in Ukraine, ready to switch to manual control and turn the power back on by hand, one circuit breaker at a time. The backup domain controller in a blacked-out data center in Ghana, disconnected from your ravaged global shipping network.
* * *
Geer’s paper reminded me of a conversation I’d had a few months earlier, riding along with the CEO of the Ukrainian postal service as his private driver took him to Kiev’s Boryspil airport. That CEO, Igor Smelyansky, had impressed me with the frank way he talked about NotPetya’s paralyzing effects on the postal service, which razed thousands of its computers. He had no illusions about whether it could happen again. “I don’t think we can really prevent something like this,” he’d said calmly. “We can prepare. And we can try to minimize the damage.”
As for how to do damage control for the next cyberfiasco, Smelyansky had ideas. For every element of his seventy-four-thousand-person company, Smelyansky said he and his executives were drawing up a plan for how they could fall back to a kind of minimum set of basic services in the event of another tech meltdown. They were weighing the cost of fuel reserves for their trucks and emergency backup systems for key offices. Every mail truck would get a paper packet that explains how to proceed in the event that cell phone networks are taken down. “We’ll have sorting centers where we have backup generators, so the system can still work on a smaller scale,” Smelyansky said in the Americanized English he’d picked up from years working in New York. “The driver knows he needs to go there. If he can’t reach us, that’s what he does, one, two, three.”
For the post offices’ crucial pension disbursements, Smelyansky had proposed a backup system where, in the event of a prolonged mass computer outage, everyone simply gets a monthly pension of 1,000 hryvnias, a rough median allowance of about $36. When the computers come back online, they add to or subtract from the next payment. But in the meantime, no one starves.
In some ways, the Ukrainian postal service had already fared far better in the wake of NotPetya than another, more modernized country’s might have. When the database of newspaper subscriptions had been destroyed, local offices had pulled out boxes of paper subscription cards to re-create the distribution lists. Ukraine’s pensioners still picked up their pensions in cash, rather than electronic payments. Many offices, particularly in remote regions of Ukraine, still used paper systems to process payments; some employees didn’t even touch computers in their daily work.
“We had the biggest issues in the big cities. In the smaller cities, some employees still remember how to work on paper,” Smelyansky told me. “In Kiev, we had employees who didn’t remember a time before computers. We had to tell them to find someone older to teach them.”
Despite the fact that the Ukrainian postal service had one foot safely planted in the analog past, NotPetya had still exacted a huge toll, just as it had for so many organizations in Ukraine and around the world. Preparing for the next one wouldn’t be simple, Smelyansky admitted as we arrived at Boryspil airport for his flight. But he was optimistic. “It’s a problem of dependencies. You have to work through the dependencies all the way to the end,” he said. “We’re working through them.”
* * *
When I asked Dan Geer if he expected another NotPetya to hit the internet, he answered before I’d even finished asking the question. “Yes. Yes, yes,” he said. “Why would it not? Is there reason to believe North Korea would do something like this? Yes, they have. Would China? Yes. Is the number of countries capable of doing this going up? I would guess so.”
But one lesson of the last cybersecurity disaster—that an older generation of Ukrainian postal workers had the analog skills to keep the system running while the younger generation didn’t—is the thought that particularly troubles Geer. Those analog fallbacks are slipping away into history, replaced by digitized, automated, and ultimately fragile new systems. Geer sees his own disconnection from modern technology as both a personal preference and a contribution to a “baseload” population that keeps the stable, fallback systems of the past running.
“The societal advantage of having a ready, running, and known-to-work alternative if the current option were to blow up is not easy to measure, but I believe it’s important,” he said. “But where do you get the baseload to keep the analog thing running? There has to be some body of people that, left to their own devices, would continue to use it. Otherwise we’ll have to go back in the future and re-create it, and that’s going to be an awful lot harder than keeping it running now.”