by Fred Kaplan
By the start of 2010, nearly a quarter of Iran’s centrifuges—about 2,000 out of 8,700—were damaged beyond repair. U.S. intelligence analysts estimated a setback in Iran’s enrichment program of two to three years.
Then, early that summer, it all went wrong. President Obama—who’d been briefed on every detail and alerted to every success or breakdown—was told by his advisers that the worm was out of the box: for reasons not entirely clear, it had jumped from one computer to another, way outside the Natanz network, then to another network outside that. It wouldn’t wreak damage—as the briefers had told him before, it was programmed to shut down if it didn’t see a particular Siemens controller—but it would get noticed: the Iranians would eventually find out what had been going on; Olympic Games was on the verge of being blown.
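What the briefers called a shut-down condition was, in effect, a target fingerprint: the payload stayed dormant unless it found the precise Siemens configuration it was built for. A minimal sketch of that gating logic, in Python, might look like the following; the controller models come from later public analyses by Symantec and Langner, and every name here is a hypothetical stand-in, not actual Stuxnet code (the real worm targeted Windows machines running Siemens STEP 7 software).

```python
# Hypothetical sketch of the "shut down if the target isn't here" gate
# described in public reporting -- not the worm's actual code. The model
# strings follow Symantec's analysis, which found the payload activated
# only against specific Siemens S7-315 and S7-417 controllers.

TARGET_PLC_MODELS = {"6ES7-315-2", "6ES7-417"}  # models cited in public analyses

class PLC:
    """Stand-in record for a controller visible from the infected machine."""
    def __init__(self, model):
        self.model = model

def find_attached_plcs():
    # Placeholder: a real implementation would enumerate controllers through
    # the Siemens engineering software on the infected workstation.
    return []

def deliver_payload():
    # Placeholder for the sabotage routine; intentionally left empty here.
    pass

def main():
    plcs = find_attached_plcs()
    if not any(plc.model in TARGET_PLC_MODELS for plc in plcs):
        return  # wrong environment: stay dormant, leave nothing to notice
    deliver_payload()

if __name__ == "__main__":
    main()
```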
Almost at once, some of the world’s top software security firms—Symantec in California, VirusBlokAda in Belarus, Kaspersky Lab in Russia—started detecting a strange virus randomly popping up around the world. At first, they didn’t know its origins or its purpose; but probing its roots, parsing its code, and gauging its size, they realized they’d hit upon one of the most elaborate, sophisticated worms of all time. Microsoft issued an advisory to its customers and, forming an anagram from the first few letters in the code, called the virus “Stuxnet”—a name that caught on.
By August, Symantec had uncovered enough evidence to release a statement of its own, warning that Stuxnet was designed not for mischievous hacking or even for espionage, but rather for sabotage. In September, a German security researcher named Ralph Langner inferred, from the available facts, that someone was trying to disable Iran’s nuclear enrichment plant at Natanz and that the Israelis were probably involved.
At that point, some of the American software sleuths were horrified: Had they just helped expose a highly classified U.S. intelligence operation? They couldn’t have known at the time, but their curiosity—and their professional obligation to inform the public about a loose and possibly damaging computer virus—did have that effect. Shortly after Symantec’s statement, even before Langner’s educated guess about Stuxnet’s true aim, the Iranians drew the proper inference (so this was why their centrifuges were spinning out of control) and cut off all links between the Natanz plant and the Siemens controllers.
When Obama learned of the exposure at a meeting in the White House, he asked his top advisers whether they should shut down the operation. Told that it was still causing damage, despite Iranian countermeasures, he ordered the NSA to intensify the program—sending the centrifuges into wilder contortions, speeding them up, then slowing them down—with no concerns about detection, since its cover was already blown.
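Later public forensic reports, notably Symantec’s Stuxnet dossier, described what speeding up and slowing down meant in practice: the sabotage routine periodically pushed the centrifuge drive frequency far above its nominal setting, then dropped it to a near stall, before restoring normal operation. The sketch below illustrates that reported pattern; the frequency values are the ones cited in Symantec’s analysis, and the function names are invented placeholders, not a real controller API.

```python
# Hypothetical illustration of the overspeed/underspeed cycling that public
# analyses attribute to Stuxnet's payload -- not actual attack code.
import time

NOMINAL_HZ = 1064    # normal drive frequency for the IR-1 centrifuges, per Symantec
OVERSPEED_HZ = 1410  # brief overspeed reported in the same analysis
UNDERSPEED_HZ = 2    # near-stall dwell reported in the same analysis

def write_setpoint(hz):
    # Stand-in for a write to the frequency converter's setpoint register.
    print(f"frequency setpoint -> {hz} Hz")

def stress_cycle(dwell_seconds=1.0):
    """One sabotage cycle: overspeed, near stall, then back to nominal.

    Real dwell times were reportedly minutes, spaced weeks apart, wearing
    out rotors gradually while operators saw replayed normal readings.
    """
    for hz in (OVERSPEED_HZ, UNDERSPEED_HZ, NOMINAL_HZ):
        write_setpoint(hz)
        time.sleep(dwell_seconds)

if __name__ == "__main__":
    stress_cycle()
```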
The postmortem indicated that, in the weeks after the exposure, another 1,000 centrifuges, out of the remaining 5,000, were taken out of commission.
* * *
Even after Olympic Games came to an end, the art and science of CNA—Computer Network Attack—pushed on ahead. In fact, by the end of October 2010, when U.S. Cyber Command achieved full readiness for operations, CNA emerged as a consuming, even dominant, activity at Fort Meade.
Years earlier, anticipating Robert Gates’s directive creating Cyber Command, the chairman of the Joint Chiefs of Staff, General Peter Pace, had issued a classified document, the National Military Strategy for Cyberspace Operations, which expressed the need for “offensive capabilities in cyber space to gain and maintain the initiative.”
General Alexander, now CyberCom commander as well as the NSA director, was setting up forty “cyber-offensive teams”—twenty-seven for the U.S. combatant commands (Central Command, Pacific Command, European Command, and so forth) and thirteen engaged in the defense of networks, mainly Defense Department networks, at home. Part of this latter mission involved monitoring the networks: thanks to the work of the previous decade, starting with the Air Force Information Warfare Center, then gradually extending to the other services, the military networks had so few access points to the Internet—just twenty by this time, cut to eight in the next few years—that Alexander’s teams could detect and repel attacks at those few points of entry. But defending networks also meant going on the offensive, through the deliberately ambiguous concept of CNE, Computer Network Exploitation, which could be both a form of “active defense” and preparation for CNA—Computer Network Attack.
Some officials deep inside the national security establishment were concerned about this trend. The military—the nation—was rapidly adopting a new form of warfare, had assembled and used a new kind of weapon; but this was all being done in great secrecy, inside the nation’s most secretive intelligence agency, and it was clear, even to those with a glimpse of its inner workings, that no one had thought through the implications of this new kind of weapon and new vision of war.
During the planning for Stuxnet, there had been debates, within the Bush and Obama administrations, over the precedent that the attack might establish. For more than a decade, dozens of panels and commissions had warned that America’s critical infrastructure was vulnerable to a cyber attack—and now America was launching the first cyber attack on another nation’s critical infrastructure. Almost no one outright opposed the Stuxnet program: if it could keep Iran from developing nuclear weapons, it was worth the risk; but several officials realized that it was a risk, that the dangers of blowback were inescapable and immense.
The United States wasn’t alone on this cyber rocket ship, after all. Ever since their penetration of Defense Department sites a decade earlier, in Operation Moonlight Maze, the Russians had been ramping up their capabilities to exploit and attack computer networks. The Chinese had joined the club in 2001 and soon grew adept at penetrating sensitive (though, as far as anyone knew, unclassified) networks of dozens of American military commands, facilities, and laboratories. In Obama’s first year as president, around the Fourth of July, the North Koreans—whose citizens barely had electricity—launched a massive denial-of-service attack, shutting down websites of the Departments of Homeland Security, Treasury, Transportation, the Secret Service, the Federal Trade Commission, the New York Stock Exchange, and NASDAQ, as well as dozens of South Korean banks, affecting at least 60,000, possibly as many as 160,000 computers.
Stuxnet spurred the Iranians to create their own cyber war unit, which took off, with still greater funding, a year and a half later, in the spring of 2012, when, in a follow-up attack, the NSA’s Flame virus—the massive, multipurpose malware from which Olympic Games had derived—wiped out nearly every hard drive at Iran’s oil ministry and at the Iranian National Oil Company. Four months after that, Iran fired back with its own Shamoon virus, wiping out 30,000 hard drives (basically, every hard drive in every workstation) at Saudi Aramco, the Saudi state oil company that began as a joint U.S.-Saudi venture, and planting, on every one of its computer monitors, the image of a burning American flag.
Keith Alexander learned, from communications intercepts, that the Iranians had expressly developed and launched Shamoon as retaliation for Stuxnet and Flame. On his way to a conference with GCHQ, the NSA’s British counterpart, he read a talking points memo, written by an aide, noting that, with Shamoon and several other recent cyber attacks on Western banks, the Iranians had “demonstrated a clear ability to learn from the capabilities and actions of others”—namely, those of the NSA and of Israel’s Unit 8200.
It was the latest, most dramatic illustration of what agency analysts and directors had been predicting for decades: what we can do to them, they can someday do to us—except that “someday” was now.
Alexander’s term as NSA director was coinciding with—and Alexander himself had been fostering—not only the advancement of cyber weapons and the onset of physically destructive cyber attacks, but also the early spirals of a cyber arms race. What to do about it? This, too, was a question that no one had thought through, at even the most basic level.
When Bob Gates became secretary of defense, back at the end of 2006, he was so stunned by the volume of attempted intrusions into American military networks—his briefings listed dozens, sometimes hundreds, every day—that he wrote a memo to the Pentagon’s deputy general counsel. At what point, he asked, did a cyber attack constitute an act of war under international law?
He didn’t receive a reply until the last day of 2008, almost two years later. The counsel wrote that, yes, a cyber attack might rise to the level that called for a military response—it could be deemed an act of armed aggression, under certain circumstances—but what those circumstances were, where the line should be drawn, even the criteria for drawing that line, were matters for policymakers, not lawyers, to address. Gates took the reply as an evasion, not an answer.
One obstacle to a clearer answer—to clearer thinking, generally—was that everything about cyber war lay encrusted in secrecy: its roots were planted, and its fruits were ripening, in an agency whose very existence had once been highly classified and whose operations were still as tightly held as any in government.
This culture of total secrecy had a certain logic back when SIGINT was strictly an intelligence tool: the big secret was that the NSA had broken some adversary’s code; if that was revealed, the adversary would simply change the code; the agency would have to start all over, and until it broke the new code, national security could be damaged; in wartime, a battle might be lost.
But now that the NSA director was also a four-star commander, and now that SIGINT had been harnessed into a weapon of destruction, something like a remote-control bomb, questions were raised and debates were warranted, for reasons having to do not only with morality but with the new weapon’s strategic usefulness—its precise effects, side effects, and consequences.
General Michael Hayden, the former NSA director, had moved over to Langley, as director of the CIA, when President Bush gave the go-ahead on Olympic Games. (He was removed from that post when Obama came to the White House, so he had no role in the actual operation.) Two years after Stuxnet came crashing to a halt, when details about it were leaked to the mainstream press, Hayden—by now retired from the military—voiced in public the same concerns that he and others had debated in the White House Situation Room.
“Previous cyber-attacks had effects limited to other computers,” Hayden told a reporter. “This is the first attack of a major nature in which a cyber-attack was used to effect physical destruction. And no matter what you think of the effects—and I think destroying a cascade of Iranian centrifuges is an unalloyed good—you can’t help but describe it as an attack on critical infrastructure.”
He went on: “Somebody has crossed the Rubicon. We’ve got a legion on the other side of the river now.” Something had shifted in the nature and calculation of warfare, just as it had after the United States dropped atom bombs on Hiroshima and Nagasaki at the end of World War II. “I don’t want to pretend it’s the same effect,” Hayden said, “but in one sense at least, it’s August 1945.”
For the first two decades after Hiroshima, the United States enjoyed vast numerical superiority—for some of that time, a monopoly—in nuclear weapons. But on the cusp of a new era in cyber war, many other nations were known to have cyber war units, and America was far more vulnerable in this kind of war than any likely adversary, than any other country on the planet, because it relied far more heavily on vulnerable computer networks—in its weapons systems, its financial systems, its critical infrastructure.
If America, or U.S. Cyber Command, wanted to wage cyber war, it would do so from inside a glass house.
There was another difference between the two kinds of new weapons, besides the scale of damage they could inflict: nuclear weapons were out there, in public; certain aspects of their production or the exact size of their stockpile were classified, but everyone knew who had them, everyone had seen the photos and the film clips, showing what they could do, if they were used; and if they were used, everyone would know who had launched them.
Cyber weapons—their existence, their use, and the policies surrounding them—were still secret. It seemed that the United States and Israel had sabotaged the Natanz enrichment plant, that Iran had wiped out Saudi Aramco’s hard drives, and that North Korea had unleashed the denial-of-service attacks on U.S. websites and South Korean banks. But no one took credit for the assaults; and while the forensic analysts who traced the attacks were confident in their assessments, they didn’t—they couldn’t—boast the same slam-dunk certainty as a physicist tracking the arc of a ballistic missile.
This extreme secrecy extended not only to the mass public but also inside the government, even among most officials with high-level security clearances. Back in May 2007, shortly after he briefed George W. Bush on the plan to launch cyber attacks against Iraqi insurgents, Mike McConnell, then the director of national intelligence, hammered out an accord with senior officials in the Pentagon, the NSA, the CIA, and the attorney general’s office, titled “Trilateral Memorandum of Agreement Among the Department of Defense, the Department of Justice, and the Intelligence Community Regarding Computer Network Attack and Computer Network Exploitation Activities.” But, apart from the requirement that cyber offensive operations needed presidential approval, there were no formal procedures or protocols for top policy advisers and policymakers to assess the aims, risks, benefits, or consequences of such attacks.
To fill that vast blank, President Obama ordered the drafting of a new presidential policy directive, PPD-20, titled “U.S. Cyber Operations Policy,” which he signed in October 2012, a few months after the first big press leaks about Stuxnet.
Eighteen pages long, it was the most explicit, detailed directive of its kind. In one sense, its approach was more cautious than its predecessors. It noted, for instance, in an implied (but unstated) reference to Stuxnet’s unraveling, that the effects of a cyber attack can spread to “locations other than the intended target, with potential unintended or collateral consequences that may affect U.S. national interests.” And it established an interagency Cyber Operations Policy Working Group to ensure that such side effects, along with other broad policy issues, were weighed before an attack was launched.
But the main intent and impact of PPD-20 was to institutionalize cyber attacks as an integral tool of American diplomacy and war. It stated that the relevant departments and agencies “shall identify potential targets of national importance” against which cyber attacks “can offer a favorable balance of effectiveness and risk as compared to other instruments of national power.” Specifically, the secretary of defense, director of national intelligence, and director of the CIA—in coordination with the attorney general, secretary of state, secretary of homeland security, and relevant heads of the intelligence community—“shall prepare, for approval by the President . . . a plan that identifies potential systems, processes, and infrastructure against which the United States should establish and maintain [cyber offensive] capabilities; proposes circumstances under which [they] might be used; and proposes necessary resourcing and steps that would be needed for implementation, review, and updates as U.S. national security needs change.”
Cyber options were to be systematically analyzed, preplanned, and woven into broader war plans, in much the same way that nuclear options had been during the Cold War.
Also, as with nuclear options, the directive required “specific Presidential approval” for any cyber operation deemed “reasonably likely to result in ‘significant consequences’ ”—those last two words defined to include “loss of life, significant responsive actions against the United States, significant damage to property, serious adverse U.S. foreign policy consequences, or serious economic impact to the United States”—though an exception was made, allowing a relevant agency or department head to launch an attack without presidential approval in case of an emergency.
However, unlike nuclear options, the plans for cyber operations were not intended to lie dormant until the ultimate conflict; they were meant to be executed, and fairly frequently. The agency and department heads conducting these attacks, the directive said, “shall report annually on the use and effectiveness of operations of the previous year to the President, through the National Security Adviser.”
No time was wasted in getting these plans up and ready. An action report on the directive noted that the secretary of defense, director of national intelligence, and CIA director briefed an NSC Deputies meeting on the scope of their plans in April 2013, six months after PPD-20 was signed.
PPD-20 was classified TOP SECRET/NOFORN, meaning it could not be shared with foreign officials; the document’s very existence was highly classified. But it was addressed to the heads of all the relevant agencies and departments, and to the vice president and top White House aides. In other words, the subject was getting discussed, not only in these elite circles, but also—with Stuxnet out in the open—among the public. Gingerly, officials began to acknowledge, in broad general terms, the existence and concept of cyber offensive operations.
General James Cartwright, who’d recently retired as vice chairman of the Joint Chiefs of Staff and who, before then, had been head of U.S. Strategic Command, which had nominal control over cyber operations, told a reporter covering Stuxnet that the extreme secrecy surrounding the topic had hurt American interests. “You can’t have something that’s a secret be a deterrent,” he said, “because if you don’t know it’s there, it doesn’t scare you.”
Some officers dismissed Cartwright’s logic: the Russians and Chinese knew what we had, just as much as we knew what they had. Still, others agreed that it might be time to open up a little bit.
In October, the same month that PPD-20 was signed, the NSA declassified a fifteen-year-old issue of Cryptolog, the agency’s in-house journal, dealing with the history of information warfare. The special issue had been published in the spring of 1997, its contents stamped TOP SECRET UMBRA, denoting the most sensitive level of material dealing with communications intelligence. One of the articles, written by William Black, the agency’s top official for information warfare at the time, noted that the secretary of defense had delegated to the NSA “the authority to develop Computer Network Attack (CNA) techniques.” In a footnote, Black cited a Defense Department directive from the year before, defining CNA as “operations to disrupt, deny, degrade, or destroy information resident in computers and computer networks, or the computers and networks themselves.”