Dark Territory
Four months later, another attack on defense networks occurred—something that looked like Eligible Receiver, but coming from real, unknown hackers in the real, outside world.
CHAPTER 5
* * *
SOLAR SUNRISE, MOONLIGHT MAZE
ON February 3, 1998, the network monitors at the Air Force Information Warfare Center in San Antonio sounded the alarm: someone was hacking into a National Guard computer at Andrews Air Force Base on the outskirts of Washington, D.C.
Within twenty-four hours, the center’s Computer Emergency Response Team, probing the networks more deeply, detected intrusions at three other bases. Tracing the hacker’s moves, the team found that he’d broken into the network through an MIT computer server. Once inside the military sites, he installed a “packet sniffer,” which collected the directories of usernames and passwords, allowing him to roam the entire network. He then created a back door, which let him enter and exit the site at will, downloading, erasing, or distorting whatever data he wished.
The hacker was able to do all this because of a well-known vulnerability in a widely used UNIX operating system. The computer specialists in San Antonio had been warning senior officers of this vulnerability—Ken Minihan had personally repeated these warnings to generals in the Pentagon—but no one paid attention.
When President Clinton signed the executive order on “Critical Infrastructure Protection,” back in July 1996, one consequence was the formation of the Marsh Commission, but another—less noticed at the time—was the creation of the Infrastructure Protection Task Force inside the Justice Department, to include personnel from the FBI, the Pentagon (the Joint Staff and the Defense Information Systems Agency), and, of course, the National Security Agency.
By February 6, three days after the intrusion at Andrews Air Force Base was spotted, this task force was on the case, with computer forensics handled by analysts at NSA, DISA, and a unit in the Joint Staff called the Information Operations Response Cell, which had been set up just a week earlier as a result of Eligible Receiver. They found that the hacker had exploited a specific vulnerability in Sun's Solaris operating system, versions 2.4 and 2.6, a widely used variant of UNIX. And so, the task force code-named its investigation Solar Sunrise.
John Hamre, the deputy secretary of defense who’d seen the Eligible Receiver exercise eight months earlier as the wake-up call to a new kind of threat, now saw Solar Sunrise as the threat’s fulfillment. Briefing President Clinton on the intrusion, Hamre warned that Solar Sunrise might be “the first shots of a genuine cyber war,” adding that they may have been fired by Iraq.
It wasn’t a half-baked suspicion. Saddam Hussein had recently expelled United Nations inspectors who’d been in Iraq for six years to ensure his compliance with the peace terms that ended Operation Desert Storm—especially the clause that barred him from developing weapons of mass destruction. Many feared that Saddam’s ouster of the inspectors was the prelude to resuming his WMD program. Clinton had ordered his generals to plan for military action; a second aircraft carrier was steaming to the Persian Gulf; American troops were prepared for possible deployment.
So when the Solar Sunrise hack expanded to more than a dozen military bases, it struck some, especially inside the Joint Staff, as a pattern. The targets included bases in Charleston, Norfolk, Dover, and Hawaii—key deployment centers for U.S. armed forces. Only unclassified servers were hacked, but some of the military’s vital support elements—transportation, logistics, medical teams, and the defense finance system—ran on unclassified networks. If the hacker corrupted or shut down these networks, he could impede, maybe block, an American military response.
Then came another unsettling report: NSA and DISA forensics analysts traced the hacker’s path to an address on Emirnet, an Internet service provider in the United Arab Emirates—lending weight to fears that Saddam, or some proxy in the region, might be behind the attacks.
The FBI’s national intelligence director sent a cable to all his field agents, citing “concern that the intrusions may be related to current U.S. military actions in the Persian Gulf.” At Fort Meade, Ken Minihan came down firmer still, telling aides that the hacker seemed to be “a Middle Eastern entity.”
Some were skeptical. Neal Pollard, a young DISA consultant who’d studied cryptology and international relations in college, was planning a follow-on exercise to Eligible Receiver when Solar Sunrise, a real attack, took everyone by surprise. As the intrusions spread, Pollard downloaded the logs, drafted briefings, tried to figure out the hacker’s intentions—and, the more he examined the data, the more he doubted that this was the work of serious bad guys.
In the exercise that he’d been planning, a Red Team was going to penetrate an unclassified military network, find a way into its classified network (which, Pollard knew from advance probing, wasn’t very secure), hop on it, and crash it. By contrast, the Solar Sunrise hacker wasn’t doing anything remotely as elaborate: this guy would poke around briefly in one unclassified system after another, then get out, leaving behind no malware, no back door, nothing. And while some of the servers he attacked were precisely where a hacker would go to undermine the network of a military on the verge of deployment, most of the targets seemed selected at random, bearing no significance whatever.
Still, an international crisis was brewing, war might be in the offing; so worst-case assumptions came naturally. Whatever the hacker’s identity or motive, his work was throwing commanders off balance. They remembered Eligible Receiver, when they didn’t know they’d been hacked; the NSA Red Team had fed some of them false messages, which they’d assumed were real. This time around, they knew they were being hacked, and it wasn’t a game. They didn’t detect any damage, but how could they be sure? When they read a message or looked at a screen, could they trust—should they trust—what they were seeing?
This was the desired effect of what Perry had called counter command-control warfare: just knowing that you’d been hacked, regardless of its tangible effects, was disorienting, disrupting.
Meanwhile, the Justice Department task force was tracking the hacker twenty-four hours a day. It was a laborious process. The hacker was hopping from one server to another to obscure his identity and origins; the NSA had to report all these hops to the FBI, which took a day or so to investigate each report. At this point, no one knew whether Emirnet, the Internet service provider in the United Arab Emirates, was the source of the attacks or simply one of several landing points along the hacker’s hops.
Some analysts in the Joint Staff’s new Information Operations Response Cell noticed one pattern in the intrusions: they all took place between six and eleven o’clock at night, East Coast time. The analysts calculated what time it might be where the hacker was working: he might, it turned out, be on the overnight shift in Baghdad or Moscow, or maybe the early morning shift in Beijing.
One possibility they didn’t bother to consider: it was also after-school time in California.
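The analysts' time-zone arithmetic is easy to reproduce. The sketch below (offsets are approximate February 1998 values, chosen here for illustration; the function and variable names are invented) converts the 6-to-11 p.m. East Coast window into local hours elsewhere:

```python
# Rough reconstruction of the time-zone math. EST is UTC-5 in February;
# the other offsets are the approximate standard-time values for 1998.
EST_OFFSET = -5

zones = {
    "Moscow": 3,       # UTC+3 -> overnight shift
    "Baghdad": 3,      # UTC+3 -> overnight shift
    "Beijing": 8,      # UTC+8 -> early morning
    "California": -8,  # UTC-8 -> the possibility nobody checked
}

def local_window(start_h, end_h, zone_offset):
    """Convert an EST hour window (24-hour clock) to local hours in another zone."""
    shift = zone_offset - EST_OFFSET
    return ((start_h + shift) % 24, (end_h + shift) % 24)

for city, off in zones.items():
    s, e = local_window(18, 23, off)  # intrusions: 6-11 p.m. EST
    print(f"{city}: {s:02d}:00-{e:02d}:00 local")
```

Run as written, the window lands at 2-7 a.m. in Moscow and Baghdad, 7 a.m.-noon in Beijing, and 3-8 p.m. in California: after-school time.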
By February 10, after just four days of sleuthing, the task force found the culprits. They weren’t Iraqis or “Middle Eastern entities” of any tribe or nationality. They were a pair of sixteen-year-old boys in the San Francisco suburbs, malicious descendants of the Matthew Broderick character in WarGames, hacking the Net under the usernames Makaveli and Stimpy, who’d been competing with their friends to hack into the Pentagon the fastest.
In one day’s time, FBI agents obtained authority from a judge to run a wiretap. They took the warrant to Sonic.net, the service provider the boys were using, and started tracking every keystroke the boys typed, from the instant they logged on through the phone line of Stimpy’s parents. Physical surveillance teams confirmed that the boys were in the house—eyewitness evidence of their involvement, in case a defense lawyer later claimed that the boys were blameless and that someone else must have hacked into their server.
Through the wiretap, the agents learned that the boys were getting help from an eighteen-year-old Israeli, an already notorious hacker named Ehud Tenenbaum, who called himself The Analyzer. All three teenagers were brazen—and stupid. The Analyzer was so confident in his prowess that, during an interview with an online forum called AntiOnline (which the FBI was monitoring), he gave a live demonstration of hacking into a military network. He also announced that he was training the two boys in California because he was “going to retire” and needed successors. Makaveli gave an interview, too, explaining his own motive. “It’s power, dude,” he typed out. “You know, power.”
The Justice Department task force was set to let the boys hang themselves a bit longer, but on February 25, John Hamre spoke to reporters at a press breakfast in Washington, D.C. Still frustrated with the military’s inaction on the broader cyber threat, he outlined the basic facts of Solar Sunrise (which, until then, had been kept secret), calling it “the most organized and systematic attack” on American defense systems to date. And he disclosed that the suspects were two teenagers in Northern California.
At that point, the FBI had to scramble before the boys heard about Hamre’s remarks and erased their files. Agents quickly obtained a search warrant and entered Stimpy’s house. There he was, in his bedroom, sitting at a computer, surrounded by empty Pepsi cans and half-eaten cheeseburgers. The agents arrested the boys while carting off the computer and several floppy disks.
Stimpy and Makaveli (whose real names were kept under seal, since they were juveniles) were sentenced to three years’ probation and a hundred hours of community service; they were also barred from going on the Internet without adult supervision. Israeli police arrested Tenenbaum and four of his apprentices, who were all twenty years old; he served eight months in prison, after which he started an information security firm, then moved to Canada, where he was arrested for hacking into financial sites and stealing credit card numbers.
At first, some U.S. officials were relieved that the Solar Sunrise hackers turned out to be just a couple of kids—or, as one FBI official put it in a memo, “not more than the typical hack du jour.” But most officials took that as the opposite of reassurance: if a couple of kids could pull this off, what could a well-funded, hostile nation-state do?
They were about to find out.
* * *
In early March, just as officials at NSA, DISA, and the Joint Staff’s Information Operations Response Cell were closing their case files on Solar Sunrise and going back to their workaday tasks, word came through that someone had hacked into the computers at Wright-Patterson Air Force Base, in Ohio, and was pilfering files—unclassified but sensitive—on cockpit design and microchip schematics.
Over the next few months, the hacker fanned out to other military facilities. No one knew his location (the hopping from one site to another was prodigious, swift, and global); his searches bore no clear pattern (except that they involved high-profile military R&D projects). The operation was a sequel of sorts to Solar Sunrise, though more elaborate and puzzling; so, just as night follows day, the task force called it Moonlight Maze.
Like the Solar Sunrise gang, this hacker would log in to the computers of university research labs to gain access to military sites and networks. But in other ways, he didn’t seem at all like some mischievous kid on a cyber joyride. He didn’t dart in and out of a site; he was persistent; he was looking for specific information, he seemed to know where to find it, and, if his first path was blocked, he stayed inside the network, prowling for other approaches.
He was also remarkably sophisticated, employing techniques that impressed even the NSA teams that were following his moves. He would log on to a site, using a stolen username and password; when he left, he would rewrite the log so that no one would know he’d ever been there. Finding the hacker was touch-and-go: the analysts would have to catch him in the act and track his moves in real time; even then, since he erased the logs when exiting, the on-screen evidence would vanish after the fact. It took a while to convince some higher-ups that there had been an intrusion.
A year earlier, the analysts probably wouldn’t have detected a hacker at all, unless by pure chance. About a quarter of the servers in the Air Force were wired to the network security monitors in San Antonio; but most of the Army, Navy, and civilian leaders in the Pentagon would have had no way of knowing whether an intruder was present, much less what he was doing or where he was from.
That all changed with the one-two-three punch of Eligible Receiver, the Marsh Commission Report, and Solar Sunrise—which, over a mere eight-month span, from June 1997 to February 1998, convinced high-level officials, even those who had never thought about the issue, that America was vulnerable to a cyber attack and that this condition endangered not only society’s critical infrastructure but also the military’s ability to act in a crisis.
Right after Eligible Receiver, John Hamre called a meeting of senior civilians and officers in the Pentagon to ask what could be done. One solution, a fairly easy gap-filler, was to authorize an emergency purchase of devices known as intrusion-detection systems or IDS—a company in Atlanta, Georgia, called Internet Security Systems, could churn them out in quantity—and to install them on more than a hundred Defense Department computers. As a result, when Solar Sunrise and Moonlight Maze erupted, far more Pentagon personnel saw what was happening, far more quickly, than they otherwise would have.
Not everyone got the message. After Eligible Receiver, Matt Devost, who’d led the aggressor team in war games testing the vulnerability of American and allied command-control systems, was sent to Hawaii to clean up the networks at U.S. Pacific Command headquarters, which the NSA Red Team had devastated. Devost found gaps and sloppiness everywhere. In many cases, software vendors had long ago issued warnings about the vulnerabilities along with patches to fix them; the user had simply to push a button, but no one at PacCom had done even that. Devost lectured the admirals, all of them more than twice his age. This wasn’t rocket science, he said. Just put someone in charge and order him to install the repairs. When Solar Sunrise erupted, Devost was working computer forensics at the Defense Information Systems Agency. He came across PacCom’s logs and saw that they still hadn’t fixed their problems: despite his strenuous efforts, nothing had changed. (He decided at that point to quit government and do computer-attack simulations in the private sector.)
Even some of the officers who’d made the changes, and installed the devices, didn’t understand what they were doing. Six months after the order went out to put intrusion-detection systems on Defense Department computers (still a few weeks before Solar Sunrise), Hamre called a meeting to see how the devices were working.
An Army one-star general furrowed his brow and grumbled that he didn’t know about these IDS things: ever since he’d put them on his computers, they were getting attacked every day.
The others at the table suppressed their laughter. The general didn’t realize that his computers might have been getting hacked every day for months, maybe years; all the IDS had done was to let him know it.
Early on in Solar Sunrise, Hamre called another meeting, imbued with the same sweat of urgency as the one he’d called in the wake of Eligible Receiver, and asked the officers around him the same question he’d asked before: “Who’s in charge?”
They all looked down at their shoes or their notepads, because, in fact, nothing had changed; still, no one was in charge. The IDS devices may have been in place, but no one had issued protocols on what to do if the alarm went off or how to distinguish an annoying prank from a serious attack.
Finally, Brigadier General John “Soup” Campbell, the commander of the secret J-39 unit, who’d been the Joint Staff’s point man on Eligible Receiver, raised his hand. “I’m in charge,” he said, though he had no idea what that might mean.
By the time Moonlight Maze started wreaking havoc, Campbell was drawing up plans for a new office called Joint Task Force-Computer Network Defense—or JTF-CND. Orders to create the task force had been signed July 23, and it had commenced operations on December 10. It was staffed with just twenty-three officers, a mix of computer specialists and conventional weapons operators who had to take a crash course on the subject, all crammed into a trailer behind DISA headquarters in the Virginia suburbs, not far from the Pentagon. It was an absurdly modest effort for an outfit that, according to its charter, would be “responsible for coordinating and directing the defense of DoD computer systems and computer networks,” including “the coordination of DoD defensive actions” with other “government agencies and appropriate private organizations.”
Campbell’s first steps would later seem elementary, but no one had ever taken them—few had thought of them—on such a large scale. He set up a 24/7 watch center, established protocols for alerting higher officials and combatant commands of a cyber intrusion, and—the very first step—sent out a communiqué, on his own authority, advising all Defense Department officials to change their computer passwords.
By that point, Moonlight Maze had been going on for several months, and the intruder’s intentions and origins were still puzzling. Most of the intrusions, the ones that were noticed, took place in the same nine-hour span. Just as they’d done during Solar Sunrise, some intelligence analysts in the Pentagon and the FBI looked at a time zone map, did the math, and guessed that the attacker must be in Moscow. Others, in the NSA, noted that Tehran was in a nearby time zone and made a case for Iran as the hacker’s home.
Meanwhile, the FBI was probing all leads. The hacker had hopped through the computers of more than a dozen universities—the University of Cincinnati, Harvard, Bryn Mawr, Duke, Pittsburgh, Auburn, among others—and the bureau sent agents to interview students, tech workers, and faculty on each campus. A few intriguing suspects were tagged here and there—an IT aide who answered questions nervously, a student with a Ukrainian boyfriend—but none of the leads panned out. The colleges weren’t the source of the hack; like the Lawrence Berkeley computer center in Cliff Stoll’s The Cuckoo’s Egg, they were merely convenient transit points from one target site to another.