Dark Territory

by Fred Kaplan


  There were two distinct branches of the agency’s SIGINT Directorate: the “A Group,” which monitored the Soviet Union and its satellites; and the “B Group,” which monitored the rest of the world. As its title suggested, the A Group was the elite branch, and everyone in the building knew it. Its denizens imbibed a rarefied air: they were the ones protecting the nation from the rival superpower; they had learned the imponderably specialized skills, and had immersed themselves so deeply into the Soviet mindset, that they could take a stream of seemingly random data and extract patterns and shifts of patterns that, pieced together, gave them (at least in theory) a picture of the Kremlin’s intentions and the outlook for war and peace. Now that the Cold War was over, what good were those skills? Should the Kremlin-watchers still be called the A Group?

  A still larger uncertainty was how the NSA, as a whole, would continue to do its watching—and listening. Weeks into his tenure as director, McConnell learned that some of the radio receivers and antennas, which the NSA had set up around the globe, were no longer picking up signals. Studeman’s “Global Access Study”—which predicted the rate at which the world would switch to digital—was coming true.

  Around the same time, one of McConnell’s aides came into his office with two maps. The first was a standard map of the world, with arrows marking the routes that the major shipping powers navigated across the oceans—the “sea lines of communication,” or SLOCs, as a Navy man like McConnell would have called them. The second map showed the lines and densities of fiber-optic cable around the world.

  This is the map that you should study, the aide said, pointing to the second one. Fiber-optic lines were the new SLOCs, but they were to SLOCs what wormholes were to the galaxies: they whooshed you from one point to any other point instantaneously.

  McConnell got the parallel, and the hint of transformation, but he didn’t quite grasp its implications for his agency’s future.

  Shortly after that briefing, he saw a new movie called Sneakers. It was a slick production, a comedy-thriller with an all-star cast. The only reason McConnell bothered to see the film was that someone had told him it was about the NSA. The plot was dopey: a small company that does white-hat hacking and high-tech sleuthing is hired to steal a black box sitting on a foreign scientist’s desk; the clients say that they’re with the NSA and that the scientist is a spy; as it turns out, the clients are spies, the scientist is an agency contractor, the black box is a top secret device that can decode all encrypted data, and the NSA wants it back; the sleuths are on the case.

  Toward the end of the film, there was a scene where the evil genius (played by Ben Kingsley), a former computer-hacking prankster who turns out to have ordered the theft of the black box, confronts the head sleuth (played by Robert Redford), an old friend and erstwhile comrade from their mischievous college days, and uncorks a dark soliloquy, explaining why he stole the box:

  “The world isn’t run by weapons anymore, or energy, or money,” the Kingsley character says at a frenzied clip. “It’s run by ones and zeroes, little bits of data. It’s all just electrons. . . . There’s a war out there, old friend, a world war. And it’s not about who’s got the most bullets. It’s about who controls the information: what we see and hear, how we work, what we think. It’s all about the information.”

  McConnell sat up as he watched this scene. Here, in the unlikely form of a Hollywood movie, was the NSA mission statement that he’d been seeking: The world is run by ones and zeroes . . . There’s a war out there . . . It’s about who controls the information.

  Back at Fort Meade, McConnell spread the word about Sneakers, encouraging every employee he ran into to go see it. He even obtained a copy of the final reel and screened it for the agency’s top officials, telling them that this was the vision of the future that they should keep foremost in their minds.

  He didn’t know it at the time, but the screenplay for Sneakers was cowritten by Larry Lasker and Walter Parkes—the same pair that, a decade earlier, had written WarGames. And, though not quite to the same degree, Sneakers, too, would have an impact on national policy.

  Soon after his film-inspired epiphany, McConnell called Rich Wilhelm, who’d been the NSA representative—in effect, his right-hand man—on the Joint Intelligence Center during Desert Storm. After the war, Wilhelm and Rich Haver had written a report, summarizing the center’s activities and listing the lessons learned for future SIGINT operations. As a reward, Wilhelm was swiftly promoted to take command of the NSA listening station at Misawa Air Base in Japan, one of the agency’s largest foreign sites. In the order of NSA field officers, Wilhelm was king of the hill.

  But now, McConnell was asking Wilhelm to come back to Fort Meade and take on a new job that he was creating just for him. Its title would be Director of Information Warfare. (There’s a war out there . . . It’s about who controls the information.)

  The concept, and the nomenclature, spread. The following March, General Colin Powell, chairman of the Joint Chiefs of Staff, issued a policy memorandum on “information warfare,” which he defined as operations to “decapitate the enemy’s command structure from its body of combat forces.” The military services responded almost at once, establishing the Air Force Information Warfare Center, the Naval Information Warfare Activity, and the Army Land Information Warfare Activity. (These entities already existed, but under different names.)

  By the time McConnell watched Sneakers, he’d been fully briefed on the Navy and NSA programs in counter-C2 warfare, and he was intrigued with the possibilities of applying the concept to the new era. In its modern incarnation (“information warfare” was basically counter-C2 warfare plus digital technology), he could turn SIGINT on its ear, not just intercepting a signal but penetrating its source—and, once inside the mother ship, the enemy’s command-control system, he could feed it false information, altering, disrupting, or destroying the machine, disorienting the commanders: controlling the information to keep the peace and win the war.

  None of this came as news to Wilhelm; he’d been skirmishing on the information war’s front lines for years. But six weeks into the new job, he came to McConnell’s office and said, “Mike, we’re kind of fucked here.”

  Wilhelm had been delving into the details of what information war—a two-way war, in which both sides use the same weapons—might look like, and the sight wasn’t pretty. The revolution in digital signals and microelectronics was permeating the American military and American society. In the name of efficiency, generals and CEOs alike were hooking up everything to computer networks. The United States was growing more dependent on these networks than any country on earth. About 90 percent of government files, including intelligence files, were flowing alongside commercial traffic. Banks, power grids, pipelines, the 911 emergency call system—all of these enterprises were controlled through networks, and all of them were vulnerable, most of them to very simple hacking.

  When you think about attacking someone else’s networks, Wilhelm told McConnell, keep in mind that they can do the same things to us. Information warfare wasn’t just about gaining an advantage in combat; it also had to be about protecting the nation from other countries’ efforts to gain the same advantage.

  It was a rediscovery of Willis Ware’s warning from a quarter century earlier.

  McConnell instantly grasped the importance of Wilhelm’s message. The Computer Security Center, which Bobby Ray Inman had created a decade earlier, had since attracted little in the way of funding or attention. The Information Security (now called Information Assurance) Directorate was still—literally—a sideshow, located a twenty-minute drive from headquarters.

  Meanwhile, the legacy of Reagan’s presidential directive on computer security, NSDD-145, lay in tatters. Congressman Jack Brooks’s overhaul of the directive, laid out in the Computer Security Act of 1987, gave NSA control over the security of military computers and classified networks, but directed the National Bureau of Standards, under the Department of Commerce, to handle the rest. The formula was doomed from the start: the NBS lacked technical competence, while the NSA lacked institutional desire. When someone at the agency’s Information Assurance Directorate or Computer Security Center discovered a flaw in a software program that another country might also be using, the real powers at NSA—the analysts in the SIGINT Directorate—wanted to exploit it, not fix it; they saw it as a new way to penetrate a foreign nation’s network and to intercept its communications.

  In other words, it wasn’t so much that the problem went ignored; rather, no one in power saw it as a problem.

  McConnell set out to change that. He elevated the Information Assurance Directorate, gave it more money at a time when the overall budget—not just for the NSA but for the entire Defense Department—was being slashed, and started moving personnel between the SIGINT and Information Assurance directorates, at first only for short-term tasks, with the idea of exposing the two cultures to each other.

  It was a start, but not much more than that. McConnell had a lot on his plate: the budget cuts, the accelerating shift from analog circuits to digital packets, the drastic decline in radio signals, and the resulting need to find new ways to intercept communications. (Not long after McConnell became director, he found himself having to shut down one of the NSA antennas in Asia; it was picking up no radio signals; all the traffic that it had once monitored, in massive volume at its peak, had moved to underground cables or cyberspace.)

  In the fall of 1994, McConnell saw a demonstration, in his office, of the Netscape Matrix—one of the first commercial computer network browsers. He thought, “This is going to change the world.” Everyone was going to have access to the Net—not just allied and rival governments, but individuals, including terrorists. (The first World Trade Center bombing had taken place the year before; terrorism, seen as a nuisance during the nuclear arms race and the Cold War, was emerging as a major threat.) With the rise of the Internet came commercial encryption, to keep network communications at least somewhat secure. Code-making was no longer the exclusive province of the NSA and its counterparts; everyone was doing it, including private firms in Silicon Valley and along Route 128 near Boston, which were approaching the agency’s technical prowess. McConnell feared that the NSA would lose its unique luster—its ability to tap into communications affecting national security.

  He was also coming to realize that the agency was ill equipped to seize the coming changes. A young man named Christopher Mellon, on the Senate Intelligence Committee’s staff, kept coming around, asking questions. Mellon had heard the briefings on Fort Meade’s adaptations to the new digital world; but when he came to headquarters and examined the books, he discovered that, of the agency’s $4 billion budget, just $2 million was earmarked for programs to penetrate communications on the Internet. Mellon asked to see the personnel assigned to this program; he was taken to a remote corner of the main floor, where a couple dozen techies—out of a workforce numbering in the tens of thousands—were fiddling with computers.

  McConnell hadn’t known just how skimpy these efforts were, and he assured the Senate committee that he would beef up the programs as a top priority. But he was diverted by what he saw as a more urgent problem—the rise of commercial voice encryption, which would soon make it very difficult for the NSA (and the FBI) to tap phone conversations. McConnell’s staff devised what they saw as a solution to the problem—the Clipper Chip, an encryption key that they billed as perfectly secure. The idea was to install the chip in every telecommunications device. The government could tap in and listen to a phone conversation only if it followed an elaborate, two-key procedure. An agent would have to go to the National Institute of Standards and Technology, as the National Bureau of Standards was now called, to get one of the crypto-keys, stored on a floppy disk; another agent would go to the Treasury Department to get the other key; then the two agents would go to the Marine base at Quantico, Virginia, to insert both disks into a computer, which would unlock the encryption.

  McConnell pushed hard for the Clipper Chip—he made it his top priority—but it was doomed from the start. First, it was expensive: a phone with a Clipper Chip installed would cost more than a thousand dollars. Second, the two-key procedure was baroque. (Dorothy Denning, one of the country’s top cryptologists, took part in a simulated exercise. She obtained the key from Treasury, but after driving out to Quantico, learned that the person from NIST had picked up the wrong key. They couldn’t unlock the encryption.) Finally, there was the biggest obstacle: very few people trusted the Clipper Chip, because very few people trusted the intelligence agencies. The revelations of CIA and NSA domestic surveillance, unleashed by Senator Frank Church’s committee in the mid-1970s, were still a fresh memory. Nearly everyone—even those who weren’t inclined to distrust spy agencies—suspected that the NSA had programmed the Clipper Chip with a secret back door that its agents could open, then listen to phone conversations, without going through Treasury, NIST, or any legal process.

  The Clipper Chip ended with a whimper. It was McConnell’s well-intentioned, but misguided, effort to forge a compromise between personal privacy and national security—and to do so openly, in the public eye. The next time the NSA created or discovered back doors into data, it would do so, as it had always done, under the cloak of secrecy.

  CHAPTER 3

  * * *

  A CYBER PEARL HARBOR

  ON April 19, 1995, a small gang of militant anarchists, led by Timothy McVeigh, blew up a federal office building in Oklahoma City, killing 168 people, injuring 600 more, and destroying or damaging 325 buildings across a sixteen-block radius, causing more than $600 million in damage. The shocking thing that emerged from the subsequent investigation was just how easily McVeigh and his associates had pulled off the bombing. It took little more than a truck and a few dozen bags of ammonium nitrate, a common chemical in fertilizers, obtainable in many supply stores. Security around the building was practically nonexistent.

  The obvious question, in and out of the government, was what sorts of targets would get blown up next: a dam, a major port, the Federal Reserve, a nuclear power plant? The damage from any of those hits would be more than simply tragic; it could reverberate through the entire economy. So how vulnerable were they, and what could be done to protect them?

  On June 21, Bill Clinton signed a Presidential Decision Directive, PDD-39, titled “U.S. Policy on Counterterrorism,” which, among other things, put Attorney General Janet Reno in charge of a “cabinet committee” to review—and suggest ways to reduce—the vulnerability of “government facilities” and “critical national infrastructure.”

  Reno turned the task over to her deputy, Jamie Gorelick, who set up a Critical Infrastructure Working Group, consisting of other deputies from the Pentagon, CIA, FBI, and the White House. After a few weeks of meetings, the group recommended that the president appoint a commission, which in turn held hearings and wrote a report, which culminated in the drafting of another presidential directive.

  Several White House aides, who figured the commission would come up with new ways to secure important physical structures, were startled when more than half of its report and recommendations dealt with the vulnerability of computer networks and the urgent need for what it called “cyber security.”

  The surprise twist came about because key members of the Critical Infrastructure Working Group and the subsequent presidential commission had come from the NSA or the Navy’s super-secret black programs and were thus well aware of this new aspect of the world.

  Rich Wilhelm, the NSA director of information warfare, was among the most influential members of the working group. A few months before the Oklahoma City bombing, President Clinton had put Vice President Al Gore in charge of overseeing the Clipper Chip; Mike McConnell sent Wilhelm to the White House as the NSA liaison on the project. The chip soon died, but Gore held on to Wilhelm and made him his intelligence adviser, with a spot on the National Security Council staff. Early on at his new job, Wilhelm told some of his fellow staffers about the discoveries he’d made at Fort Meade, especially those highlighting the vulnerability of America’s increasingly networked society. He wrote a memo on the subject for Clinton’s national security adviser, Anthony Lake, who signed it with his own name and passed it on to the president.

  When Jamie Gorelick put together her working group, it was natural that Wilhelm would be on it. One of its first tasks was to define its title, to figure out which infrastructures were critical—which sectors were vital to the functioning of a modern society. The group came up with a list of eight: telecommunications, electrical power, gas and oil, banking and finance, transportation, water supply, emergency services, and “continuation of government” in the event of war or catastrophe.

  Wilhelm pointed out that all of these sectors relied, in some cases heavily, on computer networks. Terrorists wouldn’t need to blow up a bank or a rail line or a power grid; they could merely disrupt the computer network that controlled its workings, and the result would be the same. As a result, Wilhelm argued, “critical infrastructure” should include not just physical buildings but the stuff of what would soon be called cyberspace.

  Gorelick needed no persuading on this point. As deputy attorney general, she served on several interagency panels, one of which dealt with national security matters. She co-chaired that panel with the deputy director of the CIA, who happened to be Bill Studeman, the former NSA director (and Bobby Ray Inman protégé). In his days at Fort Meade, Studeman had been a sharp advocate of counter-C2 warfare; at Langley he was promoting the same idea, now known as information warfare, both its offensive and its defensive sides—America’s ability to penetrate the enemy’s networks and the enemy’s ability to penetrate America’s.

 
