by Fred Kaplan
Some in the cyber world were perplexed. Hundreds of American banks, retailers, utilities, defense contractors, even Defense Department networks had been hacked routinely, sometimes at great cost, with no retributive action by the U.S. government, at least not publicly. But a Hollywood studio gets breached, over a movie, and the president pledges retaliation in a televised news conference?
Obama did have a point in making the distinction. Jeh Johnson, the secretary of homeland security, said on the same day that the Sony attack constituted “not just an attack against a company and its employees,” but “also an attack on our freedom of expression and way of life.” A Seth Rogen comedy may have been an unlikely emblem of the First Amendment and American values; but so were many other works that had come under attack through the nation’s history, yet were still worth defending, because an attack on basic values had to be answered—however ignoble the target—lest some future assailant threaten to raid the files of some other studio, publisher, art museum, or record company if their executives didn’t cancel some other film, book, exhibition, or album.
The confrontation touched off a debate inside the Obama White House, similar to the debates discussed, but never resolved, under previous presidents: What was a “proportional” response to a cyber attack? Did this response have to be delivered in cyberspace? Finally, what role should government play in responding to cyber attacks on citizens or private corporations? A bank gets hacked, that’s the bank’s problem; but what if two, three, or a dozen banks—big banks—were hacked? At what point did these assaults become a concern for national security?
It was a broader version of the question that Robert Gates had asked the Pentagon’s general counsel eight years earlier: at what point did a cyber attack constitute an act of war? Gates never received a clear reply, and the fog hadn’t lifted since.
On December 22, three days after Obama talked about the Sony hack at his press conference, someone disconnected North Korea from the Internet. Kim Jong-un’s spokesmen accused Washington of launching the attack. It was a reasonable guess: Obama had pledged to launch a “proportional” response to the attack on Sony; shutting down North Korea’s Internet for ten hours seemed to fit the bill, and it wouldn’t have been an onerous task, given that the whole country had just 1,024 Internet Protocol addresses (fewer than the number on some blocks in New York City), all of them connected through a single service provider in China.
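To put that figure in perspective: 1,024 addresses is exactly one /22 network block. A few lines of Python make the point (an illustrative aside; the specific prefix shown is the one commonly attributed to North Korea at the time, an assumption rather than a detail from this account):

```python
import ipaddress

# The /22 block widely reported as North Korea's entire public address
# space, routed through a single provider in China. The exact prefix is
# an assumption for illustration, not a detail from the account above.
block = ipaddress.ip_network("175.45.176.0/22")

print(block.num_addresses)              # 1024 -- the whole country's routable space
print(block[0], "through", block[-1])   # 175.45.176.0 through 175.45.179.255
```

An address space that small, reachable through a single upstream link, is about the easiest conceivable target for a denial-of-service campaign, which squares with the observation that the shutdown would not have been an onerous task.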
In fact, though, the United States government played no part in the shutdown. A debate broke out in the White House over whether to deny the charge publicly. Some argued that it might be good to clarify what a proportional response was not. Others argued that making any statement would set an awkward precedent: if U.S. officials issued a denial now, then they’d also have to issue a denial the next time a digital calamity occurred during a confrontation; otherwise everyone would infer that America did launch that attack, whether or not it actually had, at which point the victim might fire back.
In this instance, the North Koreans didn’t escalate the conflict, in part because they couldn’t. But another power, with a more robust Internet, might have.
Gates’s question was more pertinent than ever, but it was also, in a sense, beside the point. Because of its lightning speed and the initial ambiguity of its source, a cyber attack could provoke a counterattack, which might escalate to war, in cyberspace and in real space, regardless of anyone’s intentions.
At the end of Bush’s presidency and the beginning of Obama’s, in casual conversations with aides and colleagues in the Pentagon and the White House, Gates took to mulling over larger questions about cyber espionage and cyber war.
“We’re wandering in dark territory,” he would say on these occasions.
It was a phrase from Gates’s childhood in Kansas, where his grandfather worked for nearly fifty years as a stationmaster on the Santa Fe Railroad. “Dark territory” was the industry’s term for a stretch of rail track that was uncontrolled by signals. To Gates, it was a perfect parallel to cyberspace, except that this new territory was much vaster and the danger was greater, because the engineers were unknown, the trains were invisible, and a crash could cause far more damage.
Even during the darkest days of the Cold War, Gates would tell his colleagues, the United States and the Soviet Union set and followed some basic rules: for instance, they agreed not to kill each other’s spies. But today, in cyberspace, there were no such rules, no rules of any kind. Gates suggested convening a closed-door meeting with the other major cyber powers—the Russians, Chinese, British, Israelis, and French—to work out some principles, some “rules of the road,” that might mitigate our mutual vulnerabilities: an agreement, say, not to launch cyber attacks on computer networks controlling dams, waterworks, electrical power grids, and air traffic control—critical civilian infrastructure—except perhaps in wartime, and maybe not even then.
Those who heard Gates’s pitch would furrow their brows and nod gravely, but no one followed up; the idea went nowhere.
Over the next few years, this dark territory’s boundaries widened, and the volume of traffic swelled.
In 2014, there were almost eighty thousand security breaches in the United States, more than two thousand of which resulted in losses of data—a quarter more breaches, and 55 percent more data losses, than the year before. On average, the hackers stayed inside the networks they’d breached for 205 days—nearly seven months—before being detected.
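The year-over-year percentages imply baselines that the raw numbers leave unstated; a quick back-of-the-envelope calculation (using the rounded figures quoted above, so the results are approximate) recovers them:

```python
# Rounded 2014 figures quoted above.
breaches_2014 = 80_000
data_losses_2014 = 2_000
dwell_days = 205

# "A quarter more breaches, and 55 percent more data losses, than the
# year before" implies these approximate prior-year baselines.
breaches_2013 = breaches_2014 / 1.25        # ~64,000 breaches in 2013
data_losses_2013 = data_losses_2014 / 1.55  # ~1,290 data losses in 2013

print(round(breaches_2013), round(data_losses_2013))
print(round(dwell_days / 30.4, 1))          # ~6.7 -- "nearly seven months"
```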
These numbers were likely to soar, with the rise of the Internet of Things. Back in 1996, Matt Devost, the computer scientist who simulated cyber attacks in NATO war games, co-wrote a paper called “Information Terrorism: Can You Trust Your Toaster?” The title was a bit facetious, but twenty years later, with the most mundane items of everyday life—toasters, refrigerators, thermostats, and cars—sprouting portals and modems for network connectivity (and thus for hackers too), it seemed prescient.
President Obama tried to stem the deluge. On February 13, 2015, he signed an executive order titled “Promoting Private Sector Cybersecurity Information Sharing,” setting up forums in which private companies could share data about the hackers in their midst—with one another and with government agencies. In exchange, the agencies—mainly the NSA, working through the FBI—would provide top secret tools and techniques to help protect the companies’ networks from future assaults.
These forums were beefed-up versions of the Information Sharing and Analysis Centers that Richard Clarke had established during the Clinton administration—and they were afflicted with the same weakness: both were voluntary; no company executives had to share information if they didn’t want to. Obama made the point explicitly: “Nothing in this order,” his document stated, “shall be construed to provide an agency with authority for regulating the security of critical infrastructure.”
Regulation—it was still private industry’s deepest fear, deeper than the fear of losing millions of dollars at the hands of cyber criminals or spies. As the white-hat hacker Peiter “Mudge” Zatko had explained to Dick Clarke fifteen years earlier, these executives had calculated that it cost no more to clean up after a cyber attack than to prevent one in the first place—and the preventive measures might not work anyway.
Some industries had altered their calculations in the intervening years, notably the financial sector. Its business consisted of bringing in money and cultivating trust; hackers had made an enormous dent in both, and sharing information demonstrably lowered risk. But the big banks were exceptions to the pattern.
Obama’s cyber policy aides had made a run, early on, at drafting mandatory security standards, but they soon pulled back. Corporate resistance was too stiff; the secretaries of treasury and commerce argued that onerous regulations would impede an economic recovery, the number-one concern of a president digging the country out of its deepest recession in seventy years. Besides, the executives had a point: companies that had adopted tight security standards were still getting hacked. The government had offered tools, techniques, and a list of “best practices,” but “best” didn’t mean perfect—after the hacker adapted, erstwhile best practices might not even be good—and, in any case, tools were just tools: they weren’t solutions.
Two years earlier, in January 2013, a Defense Science Board task force had released a 138-page report on “the advanced cyber threat.” The product of an eighteen-month study, based on more than fifty briefings from government agencies, military commands, and private companies, the report concluded that there was no reliable defense against a resourceful, dedicated cyber attacker.
In several recent exercises and war games that the panel reviewed, Red Teams, using exploits that any skilled hacker could download from the Internet, “invariably” penetrated even the Defense Department’s networks, “disrupting or completely beating” the Blue Team.
The outcomes were all too reminiscent of Eligible Receiver, the 1997 NSA Red Team assault that first exposed the U.S. military’s abject vulnerability.
Some of the task force members had observed up close the early history of these threats, among them Bill Studeman, the NSA director in the late 1980s and early 1990s, who first warned that the agency’s radio dishes and antennas were “going deaf” in the global transition from analog to digital; Bob Gourley, one of Studeman’s acolytes, the first intelligence chief of the Pentagon’s Joint Task Force-Computer Network Defense, who traced the Moonlight Maze hack to Russia; and Richard Schaeffer, the former director of the NSA Information Assurance Directorate, who spotted the first known penetration of the U.S. military’s classified network, prompting Operation Buckshot Yankee.
Sitting through the briefings, collating their conclusions, and writing the report, these veterans of cyber wars past—real and simulated—felt as if they’d stepped into a time machine: the issues, the dangers, and, most surprising, the vulnerabilities were the same as they’d been all those years ago. The government had built new systems and software, and created new agencies and directorates, to detect and resist cyber attacks; but as with any other arms race, the offense—at home and abroad—had devised new tools and techniques as well, and, in this race, the offense held the advantage.
“The network connectivity that the United States has used to tremendous advantage, economically and militarily, over the past twenty years,” the report observed, “has made the country more vulnerable than ever to cyber attacks.” It was the same paradox that countless earlier commissions had observed.
The problem was basic and inescapable: the computer networks, the panelists wrote, were “built on inherently insecure architectures.” The key word here was inherently.
It was the problem that Willis Ware had flagged nearly a half century earlier, in 1967, just before the rollout of the ARPANET: the very existence of a computer network—where multiple users could gain access to files and data online, from remote, unsecured locations—created inherent vulnerabilities.
The danger, as the 2013 task force saw it, wasn’t that someone would launch a cyber attack, out of the blue, on America’s military machine or critical infrastructure. Rather, it was that cyber attacks would be an element of all future conflicts; and given the U.S. military’s dependence on computers—in everything from the GPS guidance systems in its missiles, to the communications systems in its command posts, to the power stations that generated its electricity, to the scheduling orders for resupplying the troops with ammunition, fuel, food, and water—there was no assurance that America would win this war. “With present capabilities and technology,” the report stated, “it is not possible to defend with confidence against the most sophisticated cyber attacks.”
Great Wall defenses could be leapt over or maneuvered around. Instead, the report concluded, cyber security teams, civilian and military, should focus on detection and resilience—designing systems that could spot an attack early on and repair the damage swiftly.
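In practice, the “detection” half of that prescription often begins with something humble: baseline what normal activity looks like, then flag sharp deviations for a human to examine. A minimal sketch, with hypothetical names and thresholds (nothing here is drawn from the report itself):

```python
from statistics import mean, stdev

def flag_anomalies(hourly_counts, window=24, threshold=3.0):
    """Flag hours whose event volume deviates sharply from the trailing
    baseline. A toy illustration of detection-oriented monitoring, not
    a scheme described in the Defense Science Board report."""
    alerts = []
    for i in range(window, len(hourly_counts)):
        baseline = hourly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(hourly_counts[i] - mu) > threshold * sigma:
            alerts.append(i)  # hour index worth a closer look
    return alerts

# A spike in (say) outbound connections against a noisy but stable baseline:
counts = [38, 42] * 15 + [400] + [40] * 5
print(flag_anomalies(counts))  # -> [30]
```

Resilience is the harder half: the report’s point was that some attacks will get through, so systems must be built to degrade gracefully and recover quickly rather than merely to keep intruders out.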
More useful still would be figuring out ways to deter adversaries from attacking even in the most tempting situations.
This had been the great puzzle in the early days of nuclear weapons, when strategists realized that the atomic bomb and, later, the hydrogen bomb were more destructive than any war aim could justify. As Bernard Brodie, the first nuclear strategist, put it in a book called The Absolute Weapon, published just months after Hiroshima and Nagasaki, “Thus far the chief purpose of our military establishment has been to win wars. From now on its chief purpose must be to avert them.” The way to do that, Brodie reasoned, was to protect the nuclear arsenal, so that, in the event of a Soviet first strike, the United States would have enough bombs surviving to “retaliate in kind.”
But what did that mean in modern cyberspace? The nations most widely seen as likely foes in such a war—Russia, China, North Korea, Iran—weren’t plugged into the Internet to nearly the same extent as America. Retaliation in kind would inflict far less damage on those countries than the first strike had inflicted on America; therefore, the prospect of retaliation might not deter them from attacking. So what was the formula for cyber deterrence: threatening to respond to an attack by declaring all-out war, firing missiles and smart bombs, escalating to nuclear retaliation? Then what?
The fact was, no one in a position of power or high-level influence had thought this through.
Mike McConnell had pondered the question in the transition between the Bush and Obama presidencies, when he set up the Comprehensive National Cybersecurity Initiative. The CNCI set twelve tasks to accomplish in the ensuing few years: among other things, to install a common intrusion-detection system across all federal networks, boost the security of classified networks, define the U.S. government’s role in protecting critical infrastructure—and there was this (No. 10 on the list): “Define and develop enduring deterrence strategies and programs.”
Teams of aides and analysts were formed to work on the twelve projects. The team assigned to Task No. 10 came up short: a paper was written, but its ideas were too vague and abstract to be described as “strategies,” much less “programs.”
McConnell realized that the problem was too hard. The other tasks were hard, too, but in most of those cases, it was fairly clear how to get the job done; the trick was getting the crucial parties—the bureaucracies, Congress, and private industry—to do it. Figuring out cyber deterrence was a conceptual problem: which hackers were you trying to deter; what were you trying to deter them from doing; what penalties were you threatening to impose if they attacked anyway; and how would you make sure they wouldn’t strike back harder in response? These were questions for policymakers, maybe political philosophers, not for midlevel aides on a task force.
The 2013 Defense Science Board report touched lightly on the question of cyber deterrence, citing parallels with the advent of the A-bomb at the end of World War II. “It took decades,” the report noted, “to develop an understanding” of “the strategies to achieve stability with the Soviet Union.” Much of this understanding grew out of analyses and war-game exercises at the RAND Corporation, the Air Force–sponsored think tank where civilian economists, physicists, and political scientists—among them Bernard Brodie—conceived and tested new ideas. “Unfortunately,” the task force authors wrote, they “could find no evidence” that anyone, anywhere, was doing that sort of work “to better understand the large-scale cyber war.”
The first official effort to find some answers to these questions got underway two years later, on February 10, 2015, with the opening session of yet another Defense Science Board panel, this one called the Task Force on Cyber Deterrence. It would continue meeting in a highly secure chamber in the Pentagon for two days each month, through the end of the year. Its goal, according to the memo that created the panel, was “to consider the requirements for effective deterrence of cyber attack against the United States and allies/partners.”
Its panelists included a familiar group of cyber veterans, among them Chris Inglis, deputy director of the NSA under Keith Alexander, now a professor of cyber studies at the U.S. Naval Academy in Annapolis, Maryland; Art Money, the former Pentagon official who guided U.S. policy on information warfare in the formative era of the late 1990s, now (and for the previous decade) chairman of the NSA advisory board; Melissa Hathaway, the former Booz Allen project manager who was brought into the Bush White House by Mike McConnell to run the Comprehensive National Cybersecurity Initiative, now the head of her own consulting firm; and Robert Butler, a former officer at the Air Force Information Warfare Center who’d helped run the first modern stab at information warfare, the campaign against Serbian president Slobodan Milosevic and his cronies. The chairman of the task force was James Miller, the undersecretary of defense for policy, who’d been working cyber issues in the Pentagon for more than fifteen years.
All of them were longtime inside players of an insiders-only game; and, judging from their presence, the Pentagon’s permanent bureaucrats wanted to keep it that sort of game.
Meanwhile, the power and resources were concentrated at Fort Meade, where U.S. Cyber Command was amassing its regiments, and drawing up battle plans, even though broad questions of policy and guidance had barely been posed, much less settled.
In 2011, when Robert Gates realized that the Department of Homeland Security would never be able to protect the nation’s critical infrastructure from a cyber attack (and after his plan for a partnership between DHS and the NSA went up in smoke), he gave that responsibility to Cyber Command as well.
Cyber Command’s original two core missions were more straightforward. The first, to support U.S. combatant commanders, meant going through their war plans and figuring out which targets could be destroyed by cyber means rather than by missiles, bullets, or bombs. The second mission, to protect Defense Department computer networks, was right up Fort Meade’s alley: those networks had only eight points of access to the Internet; Cyber Command could sit on all of them, watching for intruders; and, of course, it had the political and legal authority to monitor, and roam inside, those networks, too.
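That defensive geometry, a handful of fixed gateways, each instrumented, all reporting to one watcher, is simple enough to sketch. The snippet below is purely illustrative; the gateway names, watchlist, and flow records are invented, and the watchlist addresses come from the RFC 5737 documentation ranges:

```python
# A toy model of choke-point monitoring: a small, fixed set of Internet
# gateways, each inspected against a shared watchlist. All names and
# addresses are hypothetical; nothing reflects actual Cyber Command tooling.
GATEWAYS = [f"gw-{n}" for n in range(1, 9)]  # the eight access points
KNOWN_BAD = {"203.0.113.7", "198.51.100.9"}  # example indicator list

def inspect(gateway, flows):
    """Return (gateway, source) pairs for flows whose source is on the watchlist."""
    return [(gateway, src) for src, dst in flows if src in KNOWN_BAD]

# Simulated flow records, as (source, destination) pairs seen at two gateways:
observed = {
    "gw-1": [("192.0.2.5", "10.0.0.4"), ("203.0.113.7", "10.0.0.9")],
    "gw-5": [("198.51.100.9", "10.2.1.3")],
}

for gw in GATEWAYS:
    for hit in inspect(gw, observed.get(gw, [])):
        print("ALERT at", *hit)
```

The architectural point is the one the passage makes: eight choke points can be watched exhaustively, which is what made defending the military’s networks tractable in a way that defending the open civilian Internet was not.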