Dark Territory

by Fred Kaplan


  The Estonian operation was a stab at political coercion, though in that sense it failed: in the end, the statue of the Red Army soldier was moved from the center of Tallinn to a military graveyard on the town’s outskirts.

  The other three operations were successes, but cyber’s role in each was tactical: an adjunct to conventional military operations, in much the same way as radar, stealth technology, and electronic countermeasures had been in previous conflicts. Its effects were probably short-lived as well; had the conflicts gone on longer, the target-nations would likely have found ways to deflect, diffuse, or disable the cyber attacks, just as the Estonians did with the help of Western allies. Even in its four-day war in South Ossetia, Georgia managed to reroute some of its servers to Western countries and filter some of the Russian intrusions; the cyber attack evolved into a two-way cyber war, with improvised tactics and maneuvers.

  In all its incarnations through the centuries, information warfare had been a gamble, its payoff lasting a brief spell, at best—just long enough for spies, troops, ships, or planes to cross a border undetected, or for a crucial message to be blocked or sent and received.

  One question that remained about this latest chapter, in the Internet era, was whether ones and zeroes, zipping through cyberspace from half a world away, could inflict physical damage on a country’s assets. The most alarming passages of the Marsh Report and a dozen other studies had pointed to the vulnerability of electrical power grids, oil and gas pipelines, dams, railroads, waterworks, and other pieces of a nation’s critical infrastructure—all of them increasingly controlled by computers run on commercial systems. The studies warned that foreign intelligence agents, organized crime gangs, or malicious anarchists could take down these systems with cyber attacks from anywhere on earth. Some classified exercises, including the simulated phase of Eligible Receiver, posited such attacks. But were the scenarios plausible? Could a clever hacker really destroy a physical object?

  On March 4, 2007, the Department of Energy conducted an experiment—called the Aurora Generator Test—to answer that question.

  The test was run by a retired naval intelligence officer named Michael Assante. Shortly after the 9/11 attacks, Assante was tasked to the FBI’s National Infrastructure Protection Center, which had been set up in the wake of Solar Sunrise and Moonlight Maze, the first major cyber intrusions into American military networks. While most of the center’s analysts focused on Internet viruses, Assante examined the vulnerability of the automated control systems that ran power grids, pipelines, and other pieces of critical infrastructure that the Marsh Report had catalogued.

  A few years later, Assante retired from the Navy and went to work as vice president and chief security officer of American Electric Power, which delivered electricity to millions of customers throughout the South, Midwest, and Mid-Atlantic. Several times he raised these problems with his fellow executives. In response, they’d acknowledge that someone could hack into a control system and cause power outages, but, they would add, the damage would be short-term: a technician would replace the circuit breaker, and the lights would go back on. But Assante would shake his head. Back at the FBI, he’d talked with protection and control engineers, the specialists’ specialists, who reminded him that circuit breakers were like fuses: their function was to protect very costly components, such as power generators, which were much harder, and would take much longer, to replace. A malicious hacker wouldn’t likely stop at blowing the circuit breaker; he’d go on to damage or destroy the generator.

  Finally persuaded that this might be a problem, Assante’s bosses sent him to the Idaho National Laboratory, an 890-square-mile federal research facility in the prairie desert outside Idaho Falls, to examine the issues more deeply. First, he did mathematical analyses, then bench tests of miniaturized models, and finally set up a real-life experiment. The Department of Homeland Security had recently undertaken a project on the most worrisome dangers in cyberspace, so its managers agreed to help fund it.

  The object of the Aurora test was a 2.25-megawatt power generator, weighing twenty-seven tons, installed inside one of the lab’s test chambers. On a signal from Washington, where officials were watching the test on a video monitor, a technician typed a mere twenty-one lines of malicious code into a digital relay, which was wired to the generator. The code opened a circuit breaker in the generator’s protection system, then closed it just before the system responded, throwing its operations out of sync. Almost instantly, the generator shook, and some parts flew off. A few seconds later, it shook again, then belched out a puff of white smoke and a huge cloud of black smoke. The machine was dead.

  Before the test, Assante and his team figured that there would be damage; that’s what their analyses and simulations had predicted. But they didn’t expect the magnitude of damage or how quickly it would come. Start-up to breakdown, the test lasted just three minutes, and it would have been over a minute or two sooner had the crews not paused to assess each phase of damage before moving on.

  If the military clashes of 2007—in Iraq, Syria, and the former Soviet republics—confirmed that cyber weapons could play a tactical role in new-age warfare, the Aurora Generator Test revealed that they might play a strategic role, too, as instruments of leverage or weapons of mass destruction, not unlike that of nuclear weapons. They would, of course, wreak much less destruction than atomic or hydrogen bombs, but they were much more accessible—no Manhattan Project was necessary, only the purchase of computers and the training of hackers—and their effects were lightning fast.

  There had been similar, if less dramatic, demonstrations of these effects in the past. In 2000, a disgruntled former worker at an Australian water-treatment center hacked into its central computers and sent commands that disabled the pumps, allowing raw sewage to flow into the water. The following year, hackers broke into the servers of a California company that transmitted electrical power throughout the state, then probed its network for two weeks before getting caught.

  The problem, in other words, was long known to be real, not just theoretical, but few companies had taken any steps to solve it. Nor had government agencies stepped in: those with the ability lacked the legal authority, while those with the legal authority lacked the ability; since action was difficult, evasion was easy. But for anyone who watched the video of the Aurora Generator Test, evasion was no longer an option.

  One of the video’s most interested viewers, who showed it to officials all around the capital, from the president on down, was the former NSA director who coined the phrase “information warfare,” Vice Admiral Mike McConnell.

  * * *

  I. Ironically, while complaining that Alexander might not handle NSA data in a strictly legal manner, Hayden was carrying out a legally dubious domestic-surveillance program that mined the same NSA database, including phone conversations and Internet activity of American citizens. Hayden rationalized this program, code-named Stellar Wind, as proper because it had been ordered by President Bush and deemed lawful by Justice Department lawyers.

  CHAPTER 10

  * * *

  BUCKSHOT YANKEE

  ON February 20, two weeks before the Aurora Generator Test, Mike McConnell was sworn in as director of national intelligence. It was a new job in Washington, having been created just two years earlier, in the wake of the report by the 9/11 Commission concluding that al Qaeda’s plot to attack the World Trade Center succeeded because the nation’s scattered intelligence agencies—FBI, CIA, NSA, and the rest—didn’t communicate with one another and so couldn’t connect all the dots of data. The DNI, a cabinet-level post carrying the additional title of special adviser to the president, was envisioned as a sort of supra-director who would coordinate the activities and findings of the entire intelligence community; but many saw it as just another bureaucratic layer. When the position was created, President Bush offered it to Robert Gates, who had been CIA director and deputy national security adviser during his father’s presidency, but Gates turned it down upon learning that he would have no power to set budgets or hire and fire personnel.

  McConnell had no problem with the job’s bureaucratic limits. He took it with one goal in mind: to put cyber, especially cyber security, on the president’s agenda.

  Back in the early- to mid-1990s, as NSA director, McConnell had gone through the same roller-coaster ride that many others at Fort Meade had experienced: a thrilled rush at the marvels that the agency’s SIGINT teams could perform—followed by the realization that whatever we can do to our enemies, our enemies could soon do to us: a dread deepened, in the decade since, by America’s growing reliance on vulnerable computer networks.

  After McConnell left the NSA in early 1996, he was hired by Booz Allen, one of the oldest management consulting firms along the capital’s suburban Beltway, and transformed it into a powerhouse contractor for the U.S. intelligence agencies—an R&D center for SIGINT and cyber security programs, as well as a haven of employment for senior NSA and CIA officials as they ferried back and forth between the public and private sectors.

  Taking the DNI job, McConnell gave up a seven-figure salary, but he saw it as a singular opportunity to push his passions on cyber into policy. (Besides, the sacrifice was hardly long-term; after his two-year stint back in government, he returned to the firm.) In pursuit of this goal, he stayed as close as he could to the Oval Office, delivering the president’s intelligence briefing at the start of each day. A canny bureaucratic player with a casual drawl masking his laser-beam intensity, McConnell also dropped in, at key moments, on the aides and cabinet secretaries who had an interest in cyber security policy, whether or not they realized it. These included not only the usual suspects at State, Defense, and the National Security Council staff, but also the Departments of Treasury, Energy, and Commerce, since banks, utilities, and other corporations were particularly prone to attack. To McConnell’s dismay, but not surprise, few of these officials displayed the slightest awareness of the problem.

  So, McConnell pulled a neat trick out of his bag. He would bring the cabinet secretary a copy of a memo. Here, McConnell would say, handing it over. You wrote this memo last week. The Chinese hacked it from your computer. We hacked it back from their computer.

  That grabbed their attention. Suddenly officials who’d never heard of cyber started taking a keen interest in the subject; a few asked McConnell for a full-scale briefing. Slowly, quietly, he was building a high-level constituency for his plan of action.

  In late April, President Bush received a request to authorize cyber offensive operations against the insurgents in Iraq. This was the plan that Generals Abizaid, Petraeus, McChrystal, and Alexander had honed for months—finally sent up the chain of command through the new secretary of defense, Robert Gates, who had returned to government just two months earlier than McConnell, replacing the ousted Donald Rumsfeld.

  From his experiences at the NSA and Booz Allen, McConnell understood the nature and importance of this proposal. Clearly, there were huge gains to be had from getting inside the insurgents’ networks, disrupting their communications, sending them false emails on where to go, then dispatching a special-ops unit to kill them when they got there. But there were also risks: inserting malware into insurgents’ email might infect other servers in the area, including those of American armed forces and of Iraqi civilians who had no involvement in the conflict. It was a complex endeavor, so McConnell scheduled an hour with the president to explain its full dimensions.

  It was still a rare thing for a president to be briefed on cyber offensive operations—there hadn’t been many of them, at this point—and the proposal came at a crucial moment: a few months into Bush’s troop surge and the shift to a new strategy, new commander, and new defense secretary. So McConnell’s briefing, which took place on May 16, was attended by a large group of advisers: Vice President Cheney, Secretary Gates, Secretary of State Condoleezza Rice, National Security Adviser Stephen Hadley, the Joint Chiefs of Staff vice chairman Admiral Edmund Giambastiani (the chairman, General Peter Pace, was traveling), Treasury Secretary Henry Paulson, and General Keith Alexander, the NSA director, in case someone asked about technical details.

  As it turned out, there was no need for discussion. Bush quickly got the idea, finding the upside enticing and the downside trivial. Ten minutes into McConnell’s hour-long briefing, he cut it short and approved the plan.

  The room turned quiet. What was McConnell going to say now? He hadn’t planned on the prospect, but it seemed an ideal moment to make the pitch that he’d taken this job to deliver. He switched gears and revved up the spiel.

  Mr. President, he began, we come to talk with you about cyber offense because we need your permission to carry out those operations. But we don’t talk with you much about cyber defense.

  Bush looked at McConnell quizzically. He’d been briefed on the subject before, most fully when Richard Clarke wrote his National Strategy to Secure Cyberspace, but that was four years earlier, and a lot of crises had erupted since; cyber had never been more than a sporadic blip on his radar screen.

  McConnell swiftly recited the talking points from two decades of analyses—the vulnerability of computer systems, their growing use in all aspects of American life, the graphic illustration supplied by the Aurora Generator Test, which had taken place just two months earlier. Then he raised the stakes, stating his case in the most urgent terms he could muster: those nineteen terrorists who mounted the 9/11 attack—if they’d been cyber smart, McConnell said, if they’d hacked into the servers of one major bank in New York City and contaminated its files, they could have inflicted more economic damage than they’d done by taking down the Twin Towers.

  Bush turned to Henry Paulson, his treasury secretary. “Is this true, Hank?” he asked.

  McConnell had discussed this very point with Paulson in a private meeting a week earlier. “Yes, Mr. President,” he replied from the back of the room. The banking system relied on confidence, which an attack of this sort could severely damage.

  Bush was furious. He got up and walked around the room. McConnell had put him in a spot, spelling out a threat and describing it as greater than the threat weighing on his and every other American’s mind for the past five and a half years—the threat of another 9/11. And he’d done this in front of his most senior security advisers. Bush couldn’t just let it pass.

  “McConnell,” he said, “you raised this problem. You’ve got thirty days to solve it.”

  It was a tall order: thirty days to solve a problem that had been kicking around for forty years. But at least he’d seized the president’s attention. It was during precisely such moments—rare in the annals of this history—that leaps of progress in policy had been plotted: Ronald Reagan’s innocent question after watching WarGames (“could something like this really happen?”) led to the first presidential directive on computer security; Bill Clinton’s crisis mentality in the wake of the Oklahoma City bombing spurred the vast stream of studies, working groups, and, at last, real institutional changes that turned cyber security into a mainstream public issue. Now, McConnell hoped, Bush’s pique might unleash the next new wave of change.

  McConnell had been surveying the landscape since returning to government, and he was shocked how little progress had been made in the decade that he’d been out of public life. The Pentagon and the military services had plugged a lot of the holes in their networks, but—despite the commissions, simulations, congressional hearings, and even the presidential decrees that Dick Clarke had written for Clinton and Bush—conditions elsewhere in government, and still more so in the private sector, were no different, no less vulnerable to cyber attacks.

  The reasons for this rut were also the same: private companies didn’t want to spend the money on cyber security, and they resisted all regulations to make them do so; meanwhile, federal agencies lacked the talent or resources to do the job, except for the NSA, which had neither the legal authority nor the desire.

  Entities had been created during the most recent spate of interest, during Clarke’s reign as cyber coordinator under Clinton and the first two years of Bush, most notably the interagency Cyber Council and the ISACs—Information Sharing and Analysis Centers—that paired government experts with the private owners of companies involved in critical infrastructure (finance, electrical power, transportation, and so forth). But most of those projects stalled after Clarke resigned four years earlier. Now, with Bush’s marching orders in hand, McConnell set out to bulk up these entities or create new ones, this time backed by serious money.

  McConnell delegated the task to an interagency cyber task force, run by one of his assistants, Melissa Hathaway, the former director of an information operations unit at Booz Allen, whom he’d brought with him to be his chief cyber aide at the National Intelligence Directorate.

  Protecting the civilian side of government from cyber attacks was new terrain. Fifteen years earlier, when the military services began to confront the problem, the first step they took was to equip their computers with intrusion-detection systems. So, as a first step, Hathaway’s task force calculated what it would take to detect intrusions of civilian networks. The requirements turned out to be massive. When the tech crew at Kelly Air Force Base started monitoring computer networks in the mid-1990s, all of the Air Force servers, across the nation, had about one hundred points of access to the Internet. Now, the myriad agencies and departments of the entire federal government had 4,300 access points.

  More than this, the job of securing these points was assigned, by statute, to the Department of Homeland Security, a mongrel organization slapped together from twenty-two agencies, once under the auspices of eight separate departments. The idea had been to take all the agencies with even the slightest responsibility for protecting the nation from terrorist attacks and to consolidate them into a single, strong cabinet department. But in fact, the move only dispersed power, overloading the department’s secretary with a portfolio much too large for any one person to manage and burying once-vibrant organizations—such as the Pentagon’s National Communications System, which ran the alert programs for attacks of all sorts, including cyber attacks—in the dunes of a remote bureaucracy. The department was remote physically as well as politically, its headquarters crammed into a small campus on Nebraska Avenue in far Northwest Washington, five miles from the White House—the same campus where the NSA had stuck its Information Security Directorate until the late 1960s, when it was moved to the airport annex a half hour’s drive (somewhat closer than Nebraska Avenue’s hour-long trek) from Fort Meade.

 
