Dark Territory
Studeman and Gorelick met to discuss these issues every two weeks, and Studeman’s arguments had resonance. Before her appointment as deputy attorney general, Gorelick had been general counsel at the Pentagon, where she heard frequent briefings on hackings of defense contractors and even of the Defense Department. Now, at the Justice Department, she was helping to prosecute criminal cases of hackers who’d penetrated the computers of banks and manufacturers. One year before Oklahoma City, Gorelick had helped draft the Computer Crime Initiative Action Plan, to boost the Justice Department’s expertise in “high-tech matters,” and had helped create the Information Infrastructure Task Force Coordinating Committee.
These ventures weren’t mere hobbyhorses; they were mandated by the Justice Department’s caseload. In recent times, a Russian crime syndicate had hacked into Citibank’s computers and stolen $10 million, funneling it to separate accounts in California, Germany, Finland, and Israel. A disgruntled ex-employee of an emergency alert network, covering twenty-two states, crashed the system for ten hours. A man in California gained control of the computer running local phone switches, downloaded information about government wiretaps on suspected terrorists, and posted the information online. Two teenage boys, malicious counterparts to the hero of WarGames, hacked into the computer network at an Air Force base in Rome, New York; one of the boys later sneered that military sites were the easiest to hack on the entire Internet.
From all this—her experiences as a government lawyer, the interagency meetings with Studeman, and now the discussions with Rich Wilhelm on the working group—Gorelick was coming to two disturbing conclusions. First, at least in this realm, the threats from criminals, terrorists, and foreign adversaries were all the same: they used the same means of attack; often, they couldn’t be distinguished. This wasn’t a problem for the Department of Justice or Defense alone; the whole government had to deal with it, and, since most computer traffic moved along networks owned by corporations, the private sector had to help find, and enforce, solutions, too.
Second, the threat was wider and deeper than she’d imagined. Looking over the group’s list of “critical” infrastructures, and learning that they were all increasingly controlled by computers, Gorelick realized, in a jaw-drop moment, that a coordinated attack by a handful of technical savants, from just down the street or the other side of the globe, could devastate the nation.
What nailed this new understanding was a briefing by the Pentagon’s delegate to the working group, a retired Navy officer named Brenton Greene, who had recently been named to a new post, the director for infrastructure policy, in the office of the undersecretary of defense.
Greene had been involved in some of the military’s most highly classified programs. In the late 1980s and early 1990s, he was a submarine skipper on beyond-top-secret spy missions. After that, he managed Pentagon black programs in a unit called the J Department, which developed emerging technologies that might give America an advantage in a coming war. One branch of J Department worked on “critical-node targeting.” The idea was to analyze the infrastructures of every adversary’s country and to identify the key targets—the smallest number of targets that the American military would have to destroy in order to make a huge impact on the course of a war. Greene helped to develop another branch of the department, the Strategic Leveraging Project, which focused on new ways of penetrating and subverting foreign adversaries’ command-control networks—the essence of information warfare.
Working on these projects and seeing how easy it was, at least in theory, to devastate a foreign country with a few well-laid bombs or electronic intrusions, Greene realized—as had several others who’d journeyed down this path before him—the flip side of the equation: what we could do to them, they could do to us. And Greene was also learning that America was far more vulnerable to these sorts of attacks—especially information attacks—than any other country on the planet.
In the course of his research, Greene came across a 1990 study by the U.S. Office of Technology Assessment, a congressional advisory group, called Physical Vulnerability of Electric Systems to Natural Disasters and Sabotage. In its opening pages, the authors revealed which power stations and switches, if disabled, would take down huge chunks of the national grid. This was a public document, available to anyone who knew about it.
One of Greene’s colleagues in the J Department told him that, soon after George Bush entered the White House in January 1989, Senator John Glenn showed the study to General Brent Scowcroft, Bush’s national security adviser. Scowcroft was concerned and asked a Secret Service officer named Charles Lane to put together a small team—no more than a half dozen technical analysts—to do a separate study. The team’s findings were so disturbing that Scowcroft shredded all of their work material. Only two copies of Lane’s report were printed. Greene obtained one of them.
At this point, Greene concluded that he’d been working the wrong side of the problem: protecting America’s infrastructure was more vital—as he saw it, more urgent—than seeking ways to poke holes in foreign infrastructures.
Greene knew Linton Wells, a fellow Navy officer with a deep background in black programs, who was now military assistant to Walter Slocombe, the undersecretary of defense for policy. Greene told Wells that Slocombe should hire a director for infrastructure policy. Slocombe approved the idea. Greene was hired.
In his first few months on the new job, Greene worked up a briefing on the “interdependence” of the nation’s infrastructure, its concentration, and the commingling of one segment with the others—how disabling a few “critical nodes” (a phrase from J Department) could severely damage the country.
For instance, Greene knew that the Bell Corporation distributed a CD-ROM listing all of its communications switches worldwide, so that, say, a phone company in Argentina would know how to connect circuits for routing a call to Ohio. Greene looked at this guide with a different question in mind: where were all the switches in the major American cities? In each case he examined, the switches were—for reasons of economic efficiency—concentrated at just a couple of sites. For New York City, most of them were located at two addresses in Lower Manhattan: 140 West Street and 104 Broad Street. Take out those two addresses—whether with a bomb or an information warfare attack—and New York City would lose almost all of its phone service, at least for a while. The loss of phone service would affect other infrastructures, and on the cascading would go.
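The cascade Greene described can be sketched as a toy dependency model. Everything here, the node names, the topology, the failure rule, is invented for illustration; the point is only that when switching capacity is concentrated at a couple of sites, taking out those sites pulls down every service riding on them.

```python
# Toy model of cascading infrastructure failure, in the spirit of
# Greene's "critical node" briefing. All names and dependencies are
# hypothetical.

def cascade(initially_down, deps):
    """Propagate failures to a fixed point. A service goes down when
    every site it can route through is down (so a single surviving
    switch site keeps phone service alive)."""
    down = set(initially_down)
    changed = True
    while changed:
        changed = False
        for service, requirements in deps.items():
            if service not in down and all(r in down for r in requirements):
                down.add(service)
                changed = True
    return down

# Hypothetical dependency map: NYC phone service rides on two switch
# sites; other infrastructures ride on phone service.
deps = {
    "phones_nyc": ["switch_140_west_st", "switch_104_broad_st"],
    "atm_network": ["phones_nyc"],
    "911_dispatch": ["phones_nyc"],
}

# Knock out both switch addresses and watch the failure spread.
lost = cascade({"switch_140_west_st", "switch_104_broad_st"}, deps)
```

In this sketch, losing one switch site leaves phone service up, but losing both takes down the phones and, with them, everything that depends on the phones: the concentration and interdependence Greene was pointing at.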
Capping Greene’s briefing, the CIA—where Bill Studeman was briefly acting director—circulated a classified report on the vulnerability of SCADA systems. The acronym stood for Supervisory Control and Data Acquisition. Throughout the country, again for economic reasons, utility companies, waterworks, railway lines—vast sectors of critical infrastructure—were linking one local stretch of the sector to another, through computer networks, and controlling all of them remotely, sometimes with human monitors, often with automated sensors. Before the CIA report, few on the working group had ever heard of SCADA. Now, everyone realized that they were probably just scratching the surface of a new danger that came with the new technology.
Gorelick wrote a memo, alerting her superiors that the group was expanding the scope of its inquiry, “in light of the breadth of critical infrastructures and the multiplicity of sources and forms of attack.” It was no longer enough to consider the likelihood and impact of terrorists blowing up critical buildings. The group—and, ultimately, the president—also had to consider “threats from other sources.”
What to call these “other” threats? One word was floating around in stories about hackings of one sort or another: “cyber.” The word had its roots in “cybernetics,” a term dating back to the mid-twentieth century, describing the closed loops of information systems. But in its present-day context of computer networks, the term stemmed from William Gibson’s 1984 science-fiction novel, Neuromancer, a wild and eerily prescient tale of murder and mayhem in the virtual world of “cyberspace.”
Michael Vatis, a Justice Department lawyer on the working group who had just read Gibson’s novel, advocated the term’s adoption. Others were opposed: it sounded too sci-fi, too frivolous. But once uttered, the word snugly fit. From that point on, the group—and others who studied the issue—would speak of “cyber crime,” “cyber security,” “cyber war.”
What to do about these cyber threats? That was the real question, the group’s raison d’être, and here they were stuck. There were too many issues, touching too many interests—bureaucratic, political, fiscal, and corporate—for an interagency working group to settle.
On February 6, 1996, Gorelick sent the group’s report to Rand Beers, Clinton’s intelligence adviser and the point of contact for all issues related to PDD-39, the presidential directive on counterterrorism policy, which had set this study in motion. The report’s main point—noting the existence of two kinds of threats to critical infrastructure, physical and cyber—was novel, even historic. As for a plan of action, the group fell back on the usual punt by panels of this sort when they don’t know what else to do: it recommended the creation of a presidential commission.
* * *
For a while, nothing happened. Rand Beers told Gorelick that her group’s report was under consideration, but there was no follow-up. A spur was needed. She found it in the person of Sam Nunn, the senior Democrat on the Senate Armed Services Committee.
Gorelick knew Nunn from her days as the Pentagon’s general counsel. Both were Democratic hawks, not quite a rare breed but not so common either, and they enjoyed discussing the issues with each other. Gorelick told him about her group’s findings. In response, Nunn inserted a clause in that year’s defense authorization bill, requiring the executive branch to report to Congress on the policies and plans to ward off computer-based attacks against the national infrastructure.
Nunn also asked the General Accounting Office, the legislature’s watchdog agency, to conduct a similar study. The resulting GAO report, “Information Security: Computer Attacks at Department of Defense Pose Increasing Risks,” cited one estimate that the Defense Department “may have experienced as many as 250,000 attacks last year,” two thirds of them successful, and that “the number of attacks is doubling each year, as Internet use increases along with the sophistication of ‘hackers’ and their tools.”
Not only was this figure unlikely (a quarter million attacks a year meant 685 per day, with 457 actual penetrations), it was probably pulled out of a hat: as the GAO authors themselves acknowledged, only “a small portion” of attacks were “actually detected and reported.”
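The skepticism is easy to verify with back-of-the-envelope arithmetic, using only the figures as quoted from the GAO report:

```python
# Reality-check on the GAO estimate: 250,000 attacks a year,
# two thirds of them said to be successful.
attacks_per_year = 250_000
attacks_per_day = attacks_per_year / 365        # roughly 685 a day
successes_per_day = attacks_per_day * 2 / 3     # roughly 457 penetrations a day

print(round(attacks_per_day), round(successes_per_day))  # 685 457
```

Taken literally, the estimate implies hundreds of successful penetrations of Defense Department systems every single day, which is exactly why the figure strained credulity.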
Still, the study sent a shockwave through certain corridors. Gorelick made sure that Beers knew about the wave’s reverberations and warned him that Nunn was about to hold hearings on the subject. The president, she hinted, would do well to get out in front of the storm.
Nunn scheduled his hearing for July 16. On July 15, Clinton issued Executive Order 13010, creating the blue-ribbon commission that Gorelick’s working group had suggested. The order, a near-exact copy of the working group’s proposed draft three months earlier, began: “Certain national infrastructures are so vital that their incapacity or destruction would have a debilitating impact on the defense or economic security of the United States.” Listing the same eight “critical” sectors that the working group had itemized, the order went on, “Threats to these critical infrastructures fall into two categories: physical threats to tangible property (‘physical threats’) and threats of electronic, radio-frequency, or computer-based attacks on the information or communications components that control critical infrastructures (‘cyber threats’).”
The next day, the Senate Governmental Affairs Committee, where Nunn sat as a top Democrat, held its much-anticipated hearing on the subject. One of the witnesses was Jamie Gorelick, who warned, “We have not yet had a terrorist cyber attack on the infrastructure. But I think that that is just a matter of time. We do not want to wait for the cyber equivalent of Pearl Harbor.”
The cyber age was officially under way.
* * *
So, behind the scenes, was the age of cyber warfare. At one meeting of the Critical Infrastructure Working Group, Rich Wilhelm took Jamie Gorelick aside and informed her, in broad terms, of the ultrasecret flip side of the threat she was probing—that we had long been doing to other countries what some of those countries, or certain people in those countries, were starting to do to us. We weren’t robbing their banks or stealing their industrial secrets, we had no need to do that; but we were using cyber tools—“electronic, radio-frequency, or computer-based attacks,” as Clinton’s executive order would put it—to spy on them, scope out their networks, and prepare the battlefield to our advantage, should there someday be a war.
The important thing, Wilhelm stressed, was that our cyber offensive capabilities must be kept off the table—must not even be hinted at—when discussing our vulnerability to other countries’ cyber offensive capabilities. America’s programs in this realm were among the most tightly held secrets in the entire national security establishment.
When Rand Beers met with deputies from various cabinet departments to discuss Clinton’s executive order, John White, the deputy secretary of defense, made the same point to his fellow deputy secretaries, in the same solemn tone: no one can so much as mention America’s cyber offensive capabilities.
The need for secrecy wasn’t the only reason for the ensuing silence on the matter. No one around the table said so, but, clearly, to acknowledge America’s cyber prowess, while decrying the prowess of others, would be awkward, to say the least.
* * *
It took seven months for the commission to get started. Beers, who once again served as the White House point man, first had to find a place for the commissioners to meet. The Old Executive Office Building, the mansion next door to the White House, wasn’t sufficiently wired for computer connections (in itself, a commentary on the dismal state of preparedness for a cyber crisis). John Deutch, the new CIA director, pushed for the commissioners to work at his headquarters in Langley, where they could have secure access to anything they needed; but officials in other departments feared this might breed insularity and excessive dependence on the intelligence community. In the end, Beers found a vacant suite of offices in a Pentagon-owned building in Arlington; to sweeten the deal, the Defense Department offered to pay all expenses and provide technical support.
Then came the delicate matter of filling the commission. Nearly all of the nation’s computer traffic flowed through networks owned by private corporations; those corporations should have a say in their fate. Beers and his staff listed the ten federal departments and agencies that would be affected by whatever recommendations came out of this enterprise—Defense, Justice, Transportation, Treasury, Commerce, the Federal Emergency Management Agency, the Federal Reserve, the FBI, the CIA, and the NSA—and decided that each agency head would pick two delegates for the commission: one official and one executive from a private contractor. In addition to deputy assistant secretaries, there would also be directors or technical vice presidents from the likes of AT&T, IBM, Pacific Gas & Electric, and the National Association of Regulatory Utility Commissioners.
There was another delicate matter. The commission’s final report would be a public document, but its working papers and meetings would be classified; the commissioners would need to be vetted for top secret security clearances. That, too, would take time.
Finally, Beers and the cabinet deputies had to pick a chairman. There were tried-and-true criteria for such a post: he (and it was almost always a he) should be eminent, but not famous; somewhat familiar with the subject at hand, but not an expert; respected, amiable, but not flush with his own agenda; someone with time on his hands, but not a reject or a duffer. They came up with a retired Air Force four-star general named Robert T. Marsh.
Tom Marsh had risen through the ranks on the technical side of the Air Force, especially in electronic warfare. He wound up his career as commander of the electronic systems division at Hanscom Air Force Base in Massachusetts, then as commander of Air Force Systems Command at Andrews Air Force Base, near Washington. He was seventy-one years old; since retiring from active duty, he’d served on the Defense Science Board and the usual array of corporate boards; at the moment, he was director of the Air Force Aid Society, the service’s main charity organization.
In short, he seemed ideal.
John White, the deputy secretary of defense, called Marsh to ask if he would be willing to serve the president as chairman of a commission to protect critical infrastructure. Marsh replied that he wasn’t quite sure what “critical infrastructure” meant, but he’d be glad to help.
To prepare for the task, Marsh read the report by Gorelick’s Critical Infrastructure Working Group. It rang true. He recalled his days at Hanscom in the late 1970s and early 1980s, when the Air Force crammed new technologies onto combat planes with no concern for the vulnerabilities they might be sowing. The upgrades were all dependent on command-control links, which had no built-in redundancies. A few technically astute junior officers on Marsh’s staff warned him that, if the links were disrupted, the plane would be disabled, barely able to fly, much less fight.
Still, Marsh had been away from day-to-day operations for twelve years, and this focus on “cyber” was entirely new to him. For advice and a reality check, Marsh called an old colleague who knew more about these issues than just about anybody—Willis Ware.
Ware had kept up with every step of the Internet revolution since writing his seminal paper, nearly thirty years earlier, on the vulnerability of computer networks. He still worked at the RAND Corporation, and he was a member of the Air Force Scientific Advisory Board, which is where Marsh had come to know and trust him. Ware assured Marsh that Gorelick’s report was on the right track; that this was a serious issue and growing more so by the day, as the military and society grew more dependent on these networks; and that too few people were paying attention.