Naturally, when the Pentagon, the FBI, and utility executives looked at malware in an American power plant—especially the BlackEnergy malware, which became widespread starting in 2014—their minds went immediately to the most extreme scenario: the Russians were preparing to shut down everything that makes America hum. But as Michael Hayden, the former NSA and CIA director, often said, “Breaking into a system is one thing, and breaking it is another.” He was right: The Russians—and other nations—had been lurking inside American utilities, financial markets, and cell-phone networks for years. But so far they hadn’t hit the kill switch.
In its own operations, the United States has also been cautious. Any destructive attack to actually break a foreign system requires many levels of approval, including from the president. There were looser rules about just entering a system and looking around—espionage instead of “preparing the environment” for attack. Yet as Martin Libicki, a cyber expert at the US Naval Academy, noted, to the country or company on the receiving end, that distinction may mean little: “From a psychological perspective, the difference between penetration and manipulation may not matter so much.”
Perhaps that is why the Obama administration almost reflexively decided to treat the early breaches in the American utilities’ networks as a classified secret. Senior members of Congress, selected staffers, and utility company CEOs were taken into locked, signal-proof rooms for briefings on the intelligence. There was no note taking. “It was ridiculous,” one of them complained to me soon after. Under the strict rules they were given, the utility executives were barred from sharing the information with the people who administered their networks. Put another way, the only people who could do something about the problem—or at least prepare backup systems—were prohibited from knowing about it.
The intelligence agencies said they feared that if the discovery were made public it would tip off the Russians to the quality of our detection systems and perhaps to how deeply the NSA is into Russian systems as well. No doubt that was a risk. But could anyone imagine the United States withholding similar intelligence about an impending terror attack that might bring down a bridge, or blow up an electric substation? Almost certainly not—they would want everyone on alert. The rules were different for cyber.
For the Russians, there were no consequences in Ukraine or in the United States for hacking into the grid. This situation would play out time and again: The NSA did not want to expose intrusions in American systems, for fear of exposing “sources and methods.” The White House did not want to reveal what was known for fear that, as one of Obama’s top advisers put it, “someone will then ask, ‘What do you plan to do about it?’ ” Eventually, the Russian hack was exposed by private cybersecurity firms that detected the same malware in the utility grid that the government was seeing. And government officials would play coy, discussing the issue only in background conversations, rarely on the record. (Attacks by North Korea seemed to be the exception.)
Ozment knew all this recent history as he looked at what was happening in Ukraine. It raised urgent questions: Was Ukraine a test run for something Russia was planning in the United States? Or was it simply part of the shadow war under way for two years in a faraway land?
No one really knew for sure.
* * *
The Christmastime attack in Ukraine turned out the lights for only 225,000 customers, for a few hours. But Ozment suspected that switching off the grid, even briefly, was all the Russians intended. After all, this attack was about sending a message and sowing fear. It has never been clear, from what has so far been declassified, whether Putin himself knew about the Ukraine power attack in advance or if he ordered it. But whether Putin knew or not, the attack demonstrated in the cyber realm what the Russians had already demonstrated in the physical world by retaking Crimea: they could get away with a lot, as long as they used subtle, short-of-war tactics.
Ozment knew the United States had to understand how the Russians pulled off the attack. This was, after all, the first publicly acknowledged hack of electric utilities that actually turned out the lights. With the help of the Department of Energy and the White House, he assembled a team of experts—some of his own first responders and others from big electric utilities—and negotiated with Ukraine to dispatch them to Kiev. The instructions they carried were simple: go figure out what happened and whether the United States is vulnerable to the same kind of attack.
The team came back with a mixed answer. While the Ukrainians did not have defenses as sophisticated as many American utility companies, a quaint oddity in Ukrainian systems ultimately saved them from an even greater disaster. It turned out that their electric grid, built by the Soviets, was so antiquated that it wasn’t entirely dependent on computers.
“They still had the big, old metal switches that ran the power grid back in the pre-computer age,” Ozment explained, as if admiring the simplicity of an original Ford Model A engine. The investigators reported that Ukrainian engineers got into their trucks and went scrambling from one substation to another, looking for switches that they could throw to route around the computers and turn the lights back on.
Score one for a creaking, antiquated system, particularly since it would take months for the Ukrainians to rebuild their damaged computer-based network controls. But Ukraine’s resilience was not much comfort to the Americans who read the reports and thought about their own vulnerabilities. Few American systems still had these rusting old switches—they were eliminated long ago. And even if the American utilities had hung on to the old systems, the engineers who knew how they worked had long since retired. It would, one energy executive told me, be quite a challenge to find someone with the knowledge of how to save the day.
The reports contained some other lessons. Ozment’s investigators had discovered a trail of evidence in Ukraine that pointed to careful planning, a professional job. Before the attackers struck, he said, they had been inside Ukraine’s electrical grid for more than six months. They had stolen passwords, slowly gained administrator privileges, and learned everything they needed to take over the system and lock out the control-room operators.
They had also left bits of computer malware waiting to explode, like land mines inside the network. Just as in the Sony attack, a “KillDisk” program had been used to wipe out hard drives—turning the computers at the utilities into a useless pile of metal, plastic, and mice. For good measure, the call centers that customers dialed to report outages were flooded with automated calls, to maximize frustration and anger.
In the dry wording of one after-action report, “the outages were caused by the use of the control systems and their software through direct interaction by the adversary.” Translation: Not only had the computer systems been attacked, they had been controlled from afar, likely outside Ukraine’s borders. It was a metaphor for what Russia wanted to do with the whole country.
Read closely, the reports underscored how much the Russian attackers seemed to learn from what the United States and Israel did in preparation for the Stuxnet attack on Iran’s nuclear program. Each of the steps seemed familiar—the patience, the care that went into mapping the systems and making the losses so hard to recover from. One night I pressed a former official who had delved deeply into the attack. “They followed the American script,” he agreed. “And they figured out how to use it against us.” It wouldn’t be the first time, he said, nor the last.
Nothing about the Ukraine attack was particularly sophisticated technically, Ozment observed, “but it is dangerous to confuse sophistication with effectiveness.” That was the message being circulated back in Washington, where, as private firms noted, some of the malware found in the Ukrainian power grid was the same “BlackEnergy” code that was also in the US power grid. BlackEnergy hadn’t turned the lights out in Ukraine. But it helped prepare the operation.
Ozment paused a moment. “When you looked at what happened,” he concluded, “it was pretty chilling.”
* * *
The Russians weren’t done in Ukraine. Other attacks were gathering steam. Russia pummeled Ukraine in 2016—a year when President Poroshenko declared that the Ukrainian government had been hit by 6,500 cyberattacks in just two months, though these were mostly harassment rather than serious attacks. The message was clear: the cyberattacks were part of a continuing, low-level conflict meant to keep the Poroshenko government off balance. Clearly Ukraine was being tested. The Russians wanted to determine if there were any limits. They discovered none.
What happened in Ukraine confirmed the corollary to the Gerasimov doctrine: As long as cyber-induced paralysis was hard to see, and left little blood, it was difficult for any country to muster a robust response. The attack would make a lot of news, but it would be unlikely to galvanize much action, especially if it were unclear for a few weeks who was responsible. Meanwhile, the government under attack could appear both helpless and hopeless.
Putin’s gamble was clearly paying off. He had sent a powerful signal that there were ways to take the conflict to the Ukrainian capital and undermine the government of President Poroshenko without sending a single tank into the city. And, on the world stage, these cyberattacks showcased Putin’s newest tools—which other countries had not figured out how to deter, or respond to if deterrence failed.
Incapable of striking back at Russia, Ukraine remains locked in the oddest kind of just-below-the-radar conflict, one intended to keep the country from straying too far toward Europe. For Putin, the country still serves its centuries-old role as a buffer zone with the West. The daily digital air raids are intended to keep the country in perpetual instability—an onslaught that everyone soon got accustomed to, a feature as permanent as Saint Sophia Cathedral in the middle of Kiev.
So the Ukrainians shrugged when the Russians hit the power grid again, in December 2016. That attack was briefer, but it hit the capital. And it showed that the Russians were learning. In 2015 they had gone after a distribution system; when they came back they had gone after one of Kiev’s main transmission systems. And when a company called Dragos unpacked the code, they found a new kind of malware, called “Crash Override,” that was designed specifically to take over the equipment in the grid. It was largely automated, enabling it, as Andy Greenberg of Wired wrote, to “launch at a preset time, opening circuits on cue without even having an internet connection back to the hackers.” It was the cyber equivalent of a self-guiding missile.
It would work almost anywhere, with a few tweaks.
Shymkiv played down its import when he saw me in his office, about seven months later. But he admitted that Ukraine was “the petri dish for every cyber technique the Russians want to trot out.” What he lacked was a game plan for stopping those attacks; he was always playing defense.
It turned out he wasn’t the only one without a strategy for dealing with a newly aggressive Russia. Washington didn’t have one either.
CHAPTER VIII
THE FUMBLE
I cannot forecast to you the action of Russia. It is a riddle wrapped in a mystery inside an enigma; but perhaps there is a key.
—Winston Churchill, October 1939
In the middle of 2015, long before the 2016 presidential primaries heated up, the Democratic National Committee asked Richard Clarke, a hard-bitten national security fixture in Washington, to assess the political organization’s digital vulnerabilities.
Clarke was best known as the counterterrorism chief in the Clinton and Bush national security councils, who had given warning that Osama bin Laden was planning a massive attack on the United States. He was the one who, in the aftermath of the September 11 attacks, famously told the relatives of victims that “your government failed you,” and blamed President Bush’s White House for ignoring the many warnings he had issued. Embittered by his government experience but very much a DC creature, he went off to start a cybersecurity firm, Good Harbor International.
He wasn’t surprised when the Democratic National Committee called. “They were an obvious target,” Clarke told me later. But he was amazed when his team discovered how wide-open the committee’s systems were. As it stood, the DNC—despite its Watergate history, despite the well-publicized Chinese and Russian intrusions into the Obama campaign computers in 2008 and 2012—was securing its data with the kind of minimal techniques that you might expect to find at a chain of dry cleaners.
The committee employed a basic service to filter out ordinary spam, but it wasn’t even as sophisticated as what Google’s Gmail provides; it certainly wasn’t a match for a sophisticated attack. And the DNC barely trained its employees to spot a “spear phishing” attack, the kind that fooled the Ukrainian power operators into clicking on a link, only to steal whatever passwords were entered. It lacked any capability for anticipating attacks or detecting suspicious activity in the network—such as the dumping of data to a distant server. It was 2015, and the committee was still thinking like it was 1972.
So Good Harbor came up with a list of urgent steps the DNC needed to take to protect itself.
Too expensive, the DNC told Clarke after the company presented the list. “They said all their money had to go into the presidential race,” he recalled. They told him they’d worry about the security issues after Election Day. That response came as no surprise to anyone who knew the DNC as a baling-wire-and-duct-tape organization held together largely by the labors of recent college graduates working on shoestring budgets.
Of the many disastrous misjudgments the Democrats made in the 2016 elections, that one may rank as the worst.
“These DNC guys were like Bambi walking in the woods, surrounded by hunters,” a senior FBI official told me. “They had zero chance of surviving an attack. Zero.”
* * *
When an intelligence report from the National Security Agency about a suspicious Russian intrusion into the computer networks at the DNC was tossed onto Special Agent Adrian Hawkins’s desk at the end of the summer of 2015, he was already overwhelmed. Whenever the FBI was called in to investigate a major cyber intrusion in Washington that had struck a think tank, law firm, lobbyist, or political organization, the assignment usually ended up on Hawkins’s desk. As an agent making his way up the ladder at the Washington Field Office, Hawkins possessed the weary air of a man who had seen every variety of hack. And he had, from espionage to identity theft to attempts at destroying data.
So when the DNC hack got tossed onto his growing pile, it did not strike him or his superiors at the FBI as a four-alarm fire.
“It was hard to find a prominent organization in Washington that the Russians weren’t hitting,” another veteran of the FBI’s cyber division told me later. “At the beginning, this just looked like espionage. Everyday, ordinary spying.” They assumed that the DNC intrusion was probably just another case of hyperactive Russian spies looking to buff their résumés by bringing home a bit of political gossip. After all, the DNC was not exactly where you went to find the nuclear codes.
In September, Hawkins called the DNC switchboard, hoping to alert its computer-security team to the FBI’s evidence of Russian hacking. He quickly discovered they didn’t have a computer-security team. He ended up being transferred to the “help desk,” which he found singularly unhelpful. Then, someone on the other end of the line handed the phone to a young information technology contractor with no real experience in computer security. His name was Yared Tamene.
Hawkins identified himself on the line and explained to Tamene that he had evidence that the DNC had been hacked by a group that the federal government (but no one else) called “the Dukes,” a Russia-affiliated group. He didn’t go into details about their long history of breaking into other government agencies or how stealthy they were in avoiding detection. He couldn’t—much of that was treated as classified information, even though private security firms had published extensively about the group, which most called “Cozy Bear.”
Tamene jotted down some information about how to identify the malware. Later he composed an internal DNC memo and emailed it to his colleagues. “The FBI thinks the DNC has at least one compromised computer on its network and the FBI wanted to know if the DNC is aware, and if so, what the DNC is doing about it,” Tamene wrote. And he went back to his workday tasks.
Of course, the DNC wasn’t aware. And it wasn’t doing anything about it.
Perhaps Tamene’s sangfroid in the face of Hawkins’s news was due to the fact that he had no memory of Watergate. Or perhaps it was because he wasn’t a full-time employee at the DNC; he worked for a Chicago-based contracting firm they’d hired to keep their computers operating. He was in charge of keeping the network running, not keeping it safe. Most important, he thought that Agent Hawkins might be spoofing him, that perhaps the call was from someone impersonating an FBI agent. So when Hawkins left a series of voice messages a month later, Tamene didn’t call back. “I did not return his calls,” Tamene later wrote to his colleagues, “as I had nothing to report.”
It wasn’t until November 2015 that the two spoke on the phone again. This time Hawkins explained that the situation was now worsening. One of the DNC’s own computers—it wasn’t clear which one—was transmitting information out of its headquarters. In a memo Tamene later wrote, he said Hawkins specifically warned him the machine was “calling home, where home meant Russia.”
“SA Hawkins added that the FBI thinks that this calling home behavior could be the result of a state-sponsored attack,” Tamene wrote. Implicit in the memo was this reality: The FBI might see the DNC’s data flowing outside its building, but it didn’t have the responsibility to protect privately owned computer networks. That was the job of the DNC itself.
The Perfect Weapon Page 20