by Fred Kaplan
As it happened, a framework for this fusion already existed. The CIA had created the Information Operations Center during the Belgrade operation, to plant devices on Serbian communications systems, which the NSA could then intercept; this center would be Langley’s contribution to the new joint effort. Fort Meade’s would be the third box on the new SIGINT organizational chart—“tailored access.”
Minihan had coined the phrase. During his tenure as director, he pooled a couple dozen of the most creative SIGINT operators into their own corner on the main floor and gave them that mission. What CIA black-bag operatives had long been doing in the physical world, the tailored access crew would now do in cyberspace, sometimes in tandem with the black-baggers, if the latter were needed—as they had been in Belgrade—to install some device on a crucial piece of hardware.
The setup transformed the concept of signals intelligence, the NSA’s stock in trade. SIGINT had long been defined as passively collecting stray electrons in the ether; now, it would also involve actively breaking and entering into digital machines and networks.
Minihan had wanted to expand the tailored access shop into an A Group of the digital era, but he ran out of time. When Hayden launched his reorganization, he took the baton and turned it into a distinct, elite organization—the Office of Tailored Access Operations, or TAO.
It began, even under his expansion, as a small outfit: a few dozen computer programmers who had to pass an absurdly difficult exam to get in. The organization soon grew into an elite corps as secretive and walled off from the rest of the NSA as the NSA was from the rest of the defense establishment. Located in a separate wing of Fort Meade, it was the subject of whispered rumors, but little solid knowledge, even among those with otherwise high security clearances. Anyone seeking entrance into its lair had to get by an armed guard, a cipher-locked door, and a retinal scanner.
In the coming years, TAO’s ranks would swell to six hundred “intercept operators” at Fort Meade, plus another four hundred or more at NSA outlets—Remote Operations Centers, they were called—in Wahiawa, Hawaii; Fort Gordon, Georgia; Buckley Air Force Base, near Denver; and the Texas Cryptology Center, in San Antonio.
TAO’s mission, and informal motto, was “getting the ungettable,” specifically getting the ungettable stuff that the agency’s political masters wanted. If the president wanted to know what a terrorist leader was thinking and doing, TAO would track his computer, hack into its hard drive, retrieve its files, and intercept its email—sometimes purely through cyberspace (especially in the early days, it was easy to break a target’s password, if he’d inscribed a password at all), sometimes with the help of CIA spies or special-ops shadow soldiers, who’d lay their hands on the computer and insert a thumb drive loaded with malware or attach a device that a TAO specialist would home in on.
These devices—their workings and their existence—were so secret that most of them were designed and built inside the NSA: the software by its Data Network Technologies Branch, the techniques by its Telecommunications Network Technologies Branch, and the customized computer terminals and monitors by its Mission Infrastructure Technologies Branch.
Early on, TAO hacked into computers in fairly simple ways: guessing passwords (one such program tried out every word in the dictionary, along with variations and numbers, in a fraction of a second) or sending phishing emails with alluring attachments, which would download malware when opened. Once, some analysts from the Pentagon’s Joint Task Force-Computer Network Operations were invited to Fort Meade for a look at TAO’s bag of tricks. The analysts laughed: this wasn’t much different from the software they’d seen at the latest DEF CON hacking conference; some of it seemed to be repackaged versions of the same software.
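The dictionary-style password guessing described above is a standard technique, and a minimal sketch conveys the idea. Everything here is illustrative: the wordlist, the variation rules, and the use of SHA-256 are assumptions for the example, not details of any actual TAO tool.

```python
import hashlib

def variations(word):
    """Yield simple variants of a dictionary word: case changes and digit suffixes."""
    yield word
    yield word.capitalize()
    yield word.upper()
    for n in range(10):
        yield f"{word}{n}"

def dictionary_attack(target_hash, wordlist):
    """Return the first candidate whose SHA-256 digest matches target_hash, else None."""
    for word in wordlist:
        for candidate in variations(word):
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# Toy demonstration: a weak password ("dragon" plus a digit) falls immediately.
target = hashlib.sha256(b"dragon7").hexdigest()
print(dictionary_attack(target, ["password", "letmein", "dragon"]))
```

Real tools of the era worked the same way at larger scale, iterating through far bigger wordlists and mutation rules per second, which is why any password drawn from a dictionary word was effectively no password at all.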
Gradually, though, the TAO teams sharpened their skills and their arsenal. Obscure points of entry were discovered in servers, routers, workstations, handsets, phone switches, even firewalls (which, ironically, were supposed to keep hackers out), as well as in the software that programmed, and the networks that connected, this equipment. And as their game evolved, their devices and programs came to resemble something out of the most exotic James Bond movie. One device, called LoudAuto, activated a laptop’s microphone and monitored the conversations of anyone in its vicinity. HowlerMonkey extracted and transmitted files via radio signals, even if the computer wasn’t hooked up to the Internet. MonkeyCalendar tracked a cell phone’s location and conveyed the information through a text message. NightStand was a portable wireless system that loaded a computer with malware from several miles away. RageMaster tapped into a computer’s video signal, so a TAO technician could see what was on its screen and thus watch what the person being targeted was watching.
But as TAO matured, so did its targets, who figured out ways to detect and block intruders—just as the Pentagon and the Air Force had figured out ways, in the previous decade, to detect and block intrusions from adversaries, cyber criminals, and mischief-makers. As hackers and spies discovered vulnerabilities in computer software and hardware, the manufacturers worked hard to patch the holes—which prodded hackers and spies to search for new vulnerabilities, and on the race spiraled.
As this race between hacking and patching intensified, practitioners of both arts, worldwide, came to place an enormous value on “zero-day vulnerabilities”—holes that no one had yet discovered, much less patched. In the ensuing decade, private companies would spring up that, in some cases, made small fortunes by finding zero-day vulnerabilities and selling their discoveries to governments, spies, and criminals of disparate motives and nationalities. This hunt for zero-days preoccupied some of the craftiest mathematical minds in the NSA and other cyber outfits, in the United States and abroad.
Once, in the late 1990s, Richard Bejtlich, a computer network defense analyst at Kelly Air Force Base, discovered a zero-day vulnerability—a rare find—in a router made by Cisco. He phoned a Cisco technical rep and informed him of the problem, which the rep quickly fixed.
A couple days later, proud of his prowess and good deed, Bejtlich told the story to an analyst on the offensive side of Kelly. The analyst wasn’t pleased. Staring daggers at Bejtlich, he muttered, “Why didn’t you tell us?”
The implication was clear: if Bejtlich had told the offensive analysts about the flaw, they could have exploited it to hack foreign networks that used the Cisco router. Now it was too late; thanks to Bejtlich’s phone call, the hole was patched, the portal was closed.
As the NSA put more emphasis on finding and exploiting vulnerabilities, a new category of cyber operations came into prominence. Before, there was CND (Computer Network Defense) and CNA (Computer Network Attack); now there was also CNE (Computer Network Exploitation).
CNE was an ambiguous enterprise, legally and operationally, and Hayden—who was sensitive to legal niceties and the precise wiggle room they allowed—knew it. The term’s technical meaning was straightforward: the use of computers to exploit the vulnerabilities of an adversary’s networks—to get inside those networks, in order to gain more intelligence about them. But there were two ways of looking at CNE. It could be the front line of Computer Network Defense, on the logic that the best way to defend a network was to learn an adversary’s plans for attack—which required getting inside his network. Or, CNE could be the gateway for Computer Network Attack—getting inside the enemy’s network in order to map its passageways and mark its weak points, to “prepare the battlefield” (as commanders of older eras would put it) for an American offensive, in the event of war.I
The concept of CNE fed perfectly into Hayden’s desire to fuse cyber offense and cyber defense, to make them indistinguishable. And while Hayden may have articulated the concept in a manner that suited his agenda, he didn’t invent it; rather, it reflected an intrinsic aspect of modern computer networks themselves.
In one sense, CNE wasn’t so different from intelligence gathering of earlier eras. During the Cold War, American spy planes penetrated the Russian border in order to force Soviet officers to turn on their radar and thus reveal information about their air-defense systems. Submarine crews would tap into underwater cables near Russian ports to intercept communications, and discover patterns, of Soviet naval operations. This, too, had a dual purpose: to bolster defenses against possible Soviet aggression; and to prepare the battlefield (or airspace and oceans) for an American offensive.
But in another sense, CNE was a completely different enterprise: it exposed all society to the risks and perils of military ventures in a way that could not have been imagined a few decades earlier. When officials in the Air Force or the NSA neglected to let Microsoft (or Cisco, Google, Intel, or any number of other firms) know about vulnerabilities in its software, when they left a hole unplugged so they could exploit the vulnerability in a Russian, Chinese, Iranian, or some other adversary’s computer system, they also left American citizens open to the same exploitations—whether by wayward intelligence agencies or by cyber criminals, foreign spies, or terrorists who happened to learn about the unplugged hole, too.
This was a new tension in American life: not only between individual liberty and national security (that one had always been around, to varying degrees) but also between different layers and concepts of security. In the process of keeping military networks more secure from attack, the cyber warriors were making civilian and commercial networks less secure from the same kinds of attack.
These tensions, and the issues they raised, went beyond the mandate of national security bureaucracies; only political leaders could untangle them. As the twenty-first century approached, the Clinton administration—mainly at the feverish prodding of Dick Clarke—had started to grasp the subject’s complexities. There was the Marsh Report, followed by PDD-63, the National Plan for Information Systems Protection, and the creation of Information Sharing and Analysis Centers, forums in which the government and private companies could jointly devise ways to secure their assets from cyber attacks.
Then came the election of November 2000, and, as often happens when the White House changes party, all this momentum ground to a halt. When George W. Bush and his aides came to power on January 20, 2001, the contempt they harbored for their predecessors seethed with more venom than usual, owing to the sex scandal and impeachment that tarnished Clinton’s second term, compounded by the bitter aftermath of the election against his vice president, Al Gore, which ended in Bush’s victory only after the Supreme Court halted a recount in Florida.
Bush threw out lots of Clinton’s initiatives, among them those having to do with cyber security. Clarke, the architect of those policies, stayed on in the White House and retained his title of National Coordinator for Security, Infrastructure Protection, and Counterterrorism. But, it was clear, Bush didn’t care about any of those issues, nor did Vice President Dick Cheney or the national security adviser, Condoleezza Rice. Under Clinton, Clarke had the standing, even if not the formal rank, of a cabinet secretary, taking part in the NSC Principals meetings—attended by the secretaries of defense, state, treasury, and other departments—when they discussed the issues in his portfolio. Rice took away this privilege. Clarke interpreted the move as not only a personal slight but also a diminution of his issues.
During the first few months of Bush’s term, Clarke and CIA director George Tenet, another Clinton holdover, warned the president repeatedly about the looming danger of an attack on America by Osama bin Laden. But the warnings were brushed aside. Bush and his closest advisers were more worried about missile threats from Russia, Iran, and North Korea; their top priority was to abrogate the thirty-year-old Anti-Ballistic Missile Treaty, the landmark Soviet-American arms-control accord, so they could build a missile-defense system. (On the day of the 9/11 attacks, Rice was scheduled to deliver a speech on the major threats facing the land; the draft didn’t so much as mention bin Laden or al Qaeda.)
In June 2001, Clarke submitted his resignation. He was the chief White House adviser on counterterrorism, yet nobody was paying attention to terrorism—or to him. Rice, taken aback, urged him not to leave. Clarke relented, agreeing to stay but only if they limited his responsibilities to cyber security, gave him his own staff (which eventually numbered eighteen), and let him set up and run an interagency Cyber Council. Rice agreed, in part because she didn’t care much about cyber; she saw the concession as a way to keep Clarke onboard while keeping him out of issues that did interest her. However, she needed time to find a replacement for the counterterrorism slot, so Clarke agreed to stay in that position as well until October 1.
He still had a few weeks to go as counterterrorism chief when the hijacked planes smashed into the World Trade Center and the Pentagon. Bush was in Florida, Cheney was rushed to an underground bunker, and, by default, Clarke sat in the Situation Room as the crisis manager, running the interagency conference calls and coordinating, in some cases directing, the government’s response.
The experience boosted his standing somewhat, not enough to let him rejoin the Principals meetings, but enough for Rice to start paying a bit of attention to cyber security. However, she balked when Clarke suggested renewing the National Plan for Information Systems Protection, which he’d written for Clinton in his last year as president. She vaguely remembered that the plan set mandatory standards for private industry, and that would be anathema to President Bush.
In fact, much as Clarke wished that it had, the plan—the revised version, after he had to drop his proposal for a federal intrusion-detection network—called only for public-private cooperation, with corporations in the lead. But Clarke played along, agreeing with Rice that the Clinton plan was deeply flawed and that he wanted to do a drastic rewrite. Rice let him draft an executive order, which Bush signed on September 30, calling for a new plan. For the next several months, Clarke and some of his staff went on the road, doing White House “cyber town halls” in ten cities—including Boston, New York, Philadelphia, Atlanta, San Francisco, Los Angeles, Portland, and Austin—inviting local experts, corporate executives, IT managers, and law-enforcement officers to attend.
Clarke would start the sessions on a modest note. Some of you, he would say, criticized the Clinton plan because you had no involvement in it. Now, he went on, the Bush administration was writing a new plan, and the president wants you, the people affected by its contents, to write the annexes that deal with your sector of critical infrastructure. Some of the experts and executives in some of the cities actually submitted ideas; those in telecommunications were particularly enthused.
In fact, though, Clarke wasn’t interested in their ideas. He did, however, need to melt their opposition; the whole point, the only point, of the town hall theatrics was to get their buy-in—to co-opt them into believing that they had something to do with the report. As it turned out, the final draft—a sixty-page document called The National Strategy to Secure Cyberspace, signed by President Bush on February 14, 2003—contained more passages kowtowing to industry, and it assigned some responsibility for securing nonmilitary cyberspace to the new Department of Homeland Security. But otherwise, the language on the vulnerability of computers came straight out of the Marsh Report, and the ideas on what to do about it were nearly identical to the plan that Clarke had written for Clinton.
The document set the framework for how cyber security would be handled over the next several years—as well as the limits in the government’s ability to handle it at all, given industry’s resistance to mandatory standards and (a problem that would soon become apparent) the Homeland Security Department’s bureaucratic and technical inadequacies.
Clarke didn’t stick around to fight the political battles of enforcing and refining the new plan. On March 19, Bush ordered the invasion of Iraq. In the buildup to the war, Clarke had argued that it would divert attention and resources from the fight against bin Laden and al Qaeda. Once the war’s wheels were firmly in motion, Clarke resigned in protest.
But a few years after the invasion, as the war devolved from liberation to occupation and the enemy switched from Saddam Hussein to a disparate array of insurgents, the cyber warriors at Fort Meade and the Pentagon stepped onto the battlefield for the first time as a significant, even decisive force.
* * *
I. Out of CNE sprang a still more baroque subdivision of signals intelligence: C-CNE, for Counter-Computer Network Exploitation—penetrating an adversary’s networks in order to watch him penetrating our networks.
CHAPTER 9
* * *
CYBER WARS
WHEN General John Abizaid took the helm of U.S. Central Command on July 7, 2003, overseeing American military operations in the Middle East, Central Asia, and North Africa, his political bosses in Washington thought that the war in Iraq was over. After all, the Iraqi army had been routed, Saddam Hussein had fled, the Baathist regime had crumbled. But Abizaid knew that the war was just beginning, and he was frustrated that President Bush and his top officials neither grasped its nature nor gave him the tools to fight it. One of those tools was cyber.
Abizaid had risen through the Army’s ranks in airborne infantry, U.N. peacekeeping missions, and the upper echelon of Pentagon staff jobs. But early on in his career, he tasted a slice of the unconventional. In the mid-1980s, after serving as a company commander in the brief battle for Grenada, he was assigned to the Army Studies Group, which explored the future of combat. The Army vice chief of staff, General Max Thurman, was intrigued by reports of the Soviet army’s research into remote sensing and psychic experiments. Nothing came of them, but they exposed Abizaid to the notion that war might be about more than just bullets and bombs.