The One Device

by Brian Merchant


  For hackers, there are two main ways to break through a password. The first is via social engineering—watching (or “sniffing”) a mark to gather enough information to guess it. The second is “brute-forcing” it—methodically guessing every single code combination until you hit the right one. Hackers—and security agencies—use sophisticated software to pull this off, but it can nonetheless take ages. (Imagine fiddling through every potential combination on a Master Lock.) With Farook dead—he was killed in a shootout with police—the FBI had to brute-force the phone.
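
  In code, a brute-force attack is nothing more than an exhaustive loop. Here is a minimal sketch in Python; the check_passcode function is a hypothetical stand-in for whatever the attacker is guessing against, not any real API:

```python
from itertools import product

def brute_force(check_passcode, digits=4):
    """Exhaustively guess every numeric passcode of the given length.

    check_passcode stands in for whatever oracle the attacker has:
    a stolen hash, a login prompt, the device itself.
    """
    for combo in product("0123456789", repeat=digits):
        guess = "".join(combo)
        if check_passcode(guess):
            return guess              # found it
    return None                       # all 10**digits combinations failed

# A 4-digit code falls within 10,000 guesses, and 6 digits is still only
# a million; software burns through that in moments if nothing slows it down.
print(brute_force(lambda guess: guess == "7391"))
```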

  But the iPhone is designed to resist brute-force attempts, and newer models eventually delete the encryption key altogether, rendering the data inaccessible. So the FBI needed a different way around this. First, they asked the National Security Agency to break into the phone. When the NSA couldn’t, they asked Apple to open it for them. Apple refused and eventually issued a straightforward public response, essentially saying, We couldn’t do it even if we wanted to. And we don’t want to.
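
  That resistance is simple in outline: count the failures, stretch the delay after each one, and erase the key after too many. A toy sketch of the logic follows; the delay schedule here is invented for illustration, not Apple's actual one:

```python
import time

# Invented lockout schedule (seconds of delay, keyed by failure count);
# iOS's real schedule and its optional ten-attempt erase differ in detail.
DELAYS = {5: 60, 6: 300, 7: 900, 8: 900, 9: 3600}
MAX_FAILURES = 10        # past this point, the encryption key is erased

class PasscodeGuard:
    def __init__(self, passcode):
        self._passcode = passcode
        self._failures = 0
        self._wiped = False

    def try_unlock(self, guess):
        if self._wiped:
            raise RuntimeError("key erased; the data is gone for good")
        if guess == self._passcode:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= MAX_FAILURES:
            self._wiped = True       # delete the key outright
        else:
            time.sleep(DELAYS.get(self._failures, 0))   # escalating lockout
        return False
```

  Each wrong guess makes the next one slower, and the ten-thousandth guess never arrives: a brute-force loop stops being a threat once the answers take hours, or stop coming at all.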

  The company says it designs iPhone hardware and software to prioritize user security and privacy, and many cybersecurity experts agree that it’s one of the most secure devices on the market. One reason for this is that Apple doesn’t know your personal passcode—it’s stored on the phone itself, in an area called the Secure Enclave, and paired with an ID number specific to your iPhone.
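
  The practical consequence of that pairing is that the key protecting your data can only ever be computed on your phone. Here is a rough sketch of the idea in Python, using a generic PBKDF2 derivation in place of Apple's hardware-fused construction (the function and its names are illustrative, not Apple's):

```python
import hashlib

def derive_data_key(passcode: str, device_uid: bytes) -> bytes:
    """Entangle the user's passcode with a per-device secret.

    On a real iPhone the UID is fused into the silicon and never
    leaves the chip; this stand-in just shows the consequence:
    without this exact device, no amount of offline guessing can
    reproduce the key that protects the data.
    """
    return hashlib.pbkdf2_hmac(
        "sha256",
        passcode.encode(),
        device_uid,      # the device-unique secret acts as the salt
        200_000,         # deliberately slow, to blunt brute force
    )
```

  Apple holds neither the passcode nor the device secret, so it has nothing useful to surrender.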

  This maximizes consumer security but is also a proactive maneuver against federal agencies, like the FBI and the NSA, that push tech companies to install back doors (ways to covertly access user data) in their products. The documents leaked by ex-NSA whistleblower Edward Snowden reveal that the NSA has pressed major tech companies to participate in programs, like PRISM, that allow the agency to request access to user data. The documents also indicate that, as of 2012, Apple (along with Google, Microsoft, Facebook, Yahoo, and other tech companies) had been participating, though the company denies it.

  So when the FBI asked Apple to give them access to Farook’s phone, the company couldn’t just hand over the passcode. But the code that enforces the time delays between failed password attempts is part of the iPhone’s operating system. So the FBI made an extraordinary—and maybe unprecedented—demand: they told Apple to hack its own marquee product so they could break into the killer’s phone. The FBI’s court order required Apple to write new software, basically creating a custom version of iOS—a program security experts took to calling FBiOS—that would override the delay system that prevented brute-force attacks.

  Apple refused, saying that the request was an unreasonable burden and would set a dangerous precedent. The feds disagreed, arguing that Apple wrote code for its products all the time, so why not just help unlock a terrorist’s cell phone?

  The clash made headlines around the world. Security experts and civil libertarians praised Apple for protecting its consumers even when it was deeply unpopular to do so, while hawks and public opinion turned against the company.

  Regardless, the episode gave rise to a number of pressing questions increasingly being asked of a society conducted on smartphones: How secure should our devices be? Should they be impenetrable to anyone but the user? Are there circumstances when the government should be able to gain access to a citizen’s private data—like, when that citizen is a known mass murderer? That’s an extreme example. But authorities are pursuing less sensational use cases too; take the NSA’s routine surveillance of cell phone metadata, for example, or police departments proposing a system that would enable them to open drivers’ smartphones if they’ve been spotted texting and driving.

  This is a glitchy paradox of the moment. We share more information than ever across social networks and messaging platforms, and our phones collect more data about us than any mainstream device before—location data, fingerprints, payment info, and private pictures and files. But we have the same, or stronger, expectations of privacy as people did in generations past.

  So, to keep safe the things that hackers might want most—bank-account info, passwords—Apple designed the Secure Enclave.

  “We want the user’s secrets to not be exposed to Apple at any point,” Ivan Krstić, Apple’s head of security engineering and architecture, told a packed house at the Mandalay Bay casino in Las Vegas. The Secure Enclave is “protected by a strong cryptographic master key from the user’s passcode… offline attack is not possible.”

  And what, pray tell, does it do?

  Dan Riccio, senior vice president of hardware engineering at Apple, explained it thusly when he first introduced the chip to the public: “All fingerprint information is encrypted and stored inside a secure enclave in our new A7 chip. Here it’s locked away from everything else, accessible only by the Touch ID sensor. It’s never available to other software, and it’s never stored on Apple servers or backed up to iCloud.” Basically, the enclave is a brand-new subcomputer built specifically to handle encryption and privacy without ever involving Apple’s servers. It’s designed to interface with your iPhone in such a way that your most crucial data stays private and entirely inaccessible to Apple, the government, or anyone else.

  Or, in Krstić’s words: “We can emit secret data into a page that the process can execute but that we cannot read.” The enclave automatically encrypts the data that enters it, and that includes data from the Touch ID sensor.

  So why do we need all these layers of extra protection? Can’t Apple trust users to safeguard their own data?

  “Users tend to not choose cryptographically strong passwords,” Krstić said. A more aggressive quotation appeared on the screen behind him: “Humans are incapable of securely storing high quality cryptography.” At the end of his talk, I was still curious about what sort of security issues Apple deals with on a regular basis. Krstić was hosting a Q-and-A, but I was fully aware that nobody keeps a high-level job in Cupertino without being skilled at some class-A evasion. Still, I had to try. I walked up the center aisle to the mike.

  “What are the most persistent security issues Apple faces in iOS?” I asked.

  “Tough audience questions,” he replied after a moment of silence. The crowd went wild—or at least as wild as an auditorium filled with enterprise info-sec professionals at three o’clock on the last day of a conference could—with applause and laughter. “Thank you,” he said as I waited for an answer. “No—thank you,” he repeated. That was all the answer I was going to get.

  The FBI’s efforts to hack into the iPhone may have drawn cybersecurity into the spotlight, but hackers have been cracking the device since the first day of its launch. As with most other modern electronics, hacking has helped shape the culture and contours of the products themselves. The practice boasts a storied and slightly noble legacy; hacking has been around for as long as people have been transmitting information electronically.

  One of the first and most amusing historical hacks was launched in 1903 over a wireless network. The Italian radio entrepreneur Guglielmo Marconi had organized a public demonstration of his brand-new wireless-communications network, which, as he had boldly announced, could transmit Morse code messages over great distances. And, he claimed, it could do so entirely securely. He said that because his apparatus was tuned to a specific wavelength, only the intended party could receive the message being sent.

  His associate Sir John Ambrose Fleming set up a receiver in the Royal Institution’s lecture hall in London; Marconi would transmit a message to it from a hilltop station three hundred miles away in Poldhu, Cornwall. As the time for the demonstration grew near, a strange, rhythmic tapping sound became audible. It was Morse code, and someone was beaming it into the lecture hall. At first, it was the same word repeated over and over: Rats. Then the sender got poetic with a limerick that began There was a young fellow of Italy, who diddled the public quite prettily. Marconi and Fleming had been hacked.

  A magician named Nevil Maskelyne declared himself the culprit. He’d been hired by the Eastern Telegraph Company, which stood to lose a fortune if someone found a way to transmit messages more cheaply than its terrestrial networks could. After Marconi announced his secure wireless line, Maskelyne built a one-hundred-and-fifty-foot radio mast near the transmission route to see if he could eavesdrop on it. Marconi’s system was, in hindsight, anything but secure. The patented technology that allowed him to tune his transmission to a specific wavelength is now essentially what a radio station uses to broadcast its programs to all of the public; if you have the wavelength, you can listen in.

  When Maskelyne demonstrated that fact to the audience at the lecture hall, the public learned of a major security flaw in new technology, and Maskelyne enjoyed some of the first lulz.

  Hacking as the techno-cultural phenomenon that we know today probably picked up steam with the counterculture-friendly phone phreaks of the 1960s. At the time, long-distance calls were signaled in AT&T’s computer routing system with a specific pitch (a 2,600-hertz tone), which meant that mimicking that pitch could open the system. One of the first phone phreaks was Joe Engressia, a seven-year-old blind boy with perfect pitch (he’d later rename himself Joybubbles). He discovered that he could whistle at that frequency into his home phone and gain access to the long-distance operator, for free. John Draper, another legendary hacker who came to be known as Captain Crunch, found that the pitch of a toy whistle that came free in Cap’n Crunch cereal boxes could be used to open long-distance call lines; he built blue boxes, electronic devices that generated the tone, and demonstrated the technology to a young Steve Wozniak and his friend Steve Jobs. Jobs famously turned the blue boxes into his first ad hoc entrepreneurial effort; Woz built them, and Jobs sold them.
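
  The whistle's trick is trivial to reproduce in software, since the trunk lines listened for a pure 2,600-hertz tone. A short sketch that writes one out as a WAV file, using only Python's standard library:

```python
import math
import struct
import wave

RATE = 44100       # samples per second
FREQ = 2600.0      # the in-band supervision tone AT&T trunks listened for
SECONDS = 2

with wave.open("bluebox.wav", "wb") as wav:
    wav.setnchannels(1)      # mono
    wav.setsampwidth(2)      # 16-bit samples
    wav.setframerate(RATE)
    frames = bytearray()
    for i in range(RATE * SECONDS):
        sample = int(32767 * 0.8 * math.sin(2 * math.pi * FREQ * i / RATE))
        frames += struct.pack("<h", sample)
    wav.writeframes(bytes(frames))
```

  It is a museum piece now; the phone network moved its call signaling out of band decades ago, so the tone does nothing today.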

  The culture of hacking, reshaping, and bending consumer technologies to one’s personal will is as old as the history of those technologies. The iPhone is not immune. In fact, hackers helped push the phone toward adopting its most successful feature, the App Store.

  The fact that the first iPhones were sold exclusively through AT&T meant that they were, in a sense, a luxury phone. At $499 for the low-end 4 GB model, they were expensive. Every Apple diehard around the world wanted one immediately, but unless you were willing to sign on with AT&T and you lived in the United States, you were out of luck.

  It took a seventeen-year-old hacker from New Jersey a few weeks to change that.

  “Hi, everyone, this is Geohot. And this is the world’s first unlocked iPhone,” George Hotz announced in a YouTube video that was uploaded in July 2007. It’s since been viewed over two million times. Working with a team of online hackers intent on freeing the iPhone from its AT&T bondage, Hotz logged five hundred hours investigating the phone’s weaknesses before finding a road map to the holy grail. He used an eyeglass screwdriver and a guitar pick to remove the phone’s back and found the baseband processor, the chip that locked the phone onto AT&T networks. Then he overrode that chip by soldering a wire to it and running enough voltage through it to scramble its code. On his PC, he wrote a program that enabled the iPhone to work on any wireless carrier.

  He filmed the result—placing a call with an iPhone using a T-Mobile SIM card—and shot to fame. A wealthy entrepreneur traded him a sports car for the unlocked phone. Apple’s stock price rose on the day the news broke, and analysts attributed that to the fact that people had heard you could get the Jesus phone without AT&T.

  Meanwhile, a group of veteran hackers calling themselves the iPhone Dev Team had organized a break into the iPhone’s walled garden.

  “Back in 2007, I was in college, and I didn’t have a lot of money,” David Wang says. As a gearhead, he was intrigued when the iPhone was announced. “I thought it was a really impressive, important milestone for a device—I really wanted it.” But the iPhone was too expensive for him, and you had to buy it with AT&T. “But they also announced the iPod Touch, and I was like, I can afford that… I thought, you know, I could buy an iPod Touch, and they’ll eventually release a capability to let it make web calls, right?”

  Or he could just try to hack it into one.

  “At the time, there was no App Store, there was no third-party apps at all,” Wang says. “I was hearing stuff about people who were modding it, the iPhone Dev Team, and the hackers, and how they got code execution on the iPhone. I was waiting for them to do the same with iPod Touch.”

  The iPhone Dev Team was perhaps the most prominent hacker collective to take aim at the iPhone. They started probing the phone for vulnerabilities in its code, bugs they would be able to exploit to take over the phone’s operating system. Wang was watching, and waiting.

  “Every product starts out in an unknown state,” the cybersecurity expert Dan Guido tells me. Guido is the co-founder of the cybersecurity firm Trail of Bits, which advises the likes of Facebook and DARPA. He was formerly an intelligence lead at the New York Federal Reserve, and he’s an expert on mobile security. Apple, he says, “lacked a lot of exploit mitigations, they had lots of bugs in really critical services.” But that was to be expected. It was a new frontier, and there were going to be pitfalls.

  “One person found that the iPhone and the iPod Touch was vulnerable to this TIFF exploit,” Wang said. TIFF is an image file format commonly used by desktop publishers. When the device went to a site displaying a TIFF, Wang says, “Safari would crash, because the parser had a bug in it”—and you could take control of the entire OS.
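
  The underlying class of bug is ancient: a parser that trusts the numbers inside the file it is reading. Here is a toy sketch of that trust in Python (the real flaw was memory corruption in a C image library, which Python itself cannot reproduce, but the broken logic looks the same):

```python
def parse_tag(data: bytes) -> bytes:
    """Toy tag-length-value parser with the classic flaw:
    it believes the length field the file itself supplies.
    """
    declared = int.from_bytes(data[0:4], "little")
    # A safe parser would reject the record here:
    #   if declared > len(data) - 4: raise ValueError("length field lies")
    # In a C parser, the line below becomes a copy into a fixed-size
    # buffer; a lying length overwrites adjacent memory, and a carefully
    # crafted lie hands the attacker control of the whole process.
    return data[4:4 + declared]
```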

  It took hackers only a day or two to break into the iPhone’s software. Hackers would post proof of pwning the system—uploading a video of the phone with an unauthorized ringtone, for example—and then typically follow up with a set of how-to instructions so other hackers could replicate it.

  “When [the iPhone] came out, it was just for Mac,” Wang says. In 2007, the Mac’s market share was still relatively small, just 8 percent of the U.S. market. Remember the iPod lesson: Restricting users to Mac limits the audience. “I didn’t want to wait for people to come up with Windows instructions, so I figured out how they were doing it, and made a set of instructions for Windows users… it turned out to be seventy-six steps.” That was a turning point. Wang, whose handle is planetbeing, posted his instructions online, and it set off a frenzy. “So if you Google seventy-six-step jailbreak, you would see my name. It was the first thing that I did.”

  Jailbreaking became the popular term for knocking down the iPhone’s security system and allowing users to treat the device as an actual personal computer—letting them modify settings, install new apps, and so forth. But breaking in was only the first step. “After you do that, you still have to do a lot, like install the installer app that enables you to easily install applications and all the tools, and set the root file system, read/write, and all those things, and so my steps were to help you do that. So I wrote a tool for that,” Wang says.

  Hacking is a competitive sport. Collectives function a bit like pro teams; you can’t just show up with a ball and expect to play. Hackers have to prove themselves. “They were pretty closed off,” Wang says. “There’s a problem with the hacking community—they didn’t want to share their techniques, [were] annoyed by kids like me who had a little skill and wanted to learn. But once you do something awesome, they let you in.”

  Shortly after uploading the jailbreak instructions, Wang saw a blog post by the security expert H. D. Moore, who’d taken apart, step by step, that TIFF exploit. Moore had, in essence, laid out a blueprint for an automatic jailbreak. Wang wrote the predecessor of what would become perhaps the most legendary iPhone jailbreak mechanism, an online app you could access on Safari that would immediately jailbreak the phone. Fellow Dev Team member Comex, aka Nicholas Allegra, built the actual JailbreakMe app.

  “Some of the exploits that came out, like the JailbreakMe attack,” were really fun, Guido says. At the time you could go into an Apple Store, open up JailbreakMe.com on a display phone, hit its Swipe to Unlock button, and “it would run the exploit and root the phone from the internet,” Guido says. The Swipe to Unlock was a play on the iPhone’s famous opening mechanism, a double entendre highlighting the fact that you were being freed from a closed, locked system by the Dev Team. “And you could just go to an Apple Store and jailbreak every single phone they had on display.”

  That’s exactly what in-the-know hackers did. “A lot of people started doing that,” Wang says, “because suddenly it was really, really easy.”

  Apple, aware that jailbreaking was becoming an increasingly mainstream trend, broke its silence on the practice on September 24, 2007, and issued a statement: “Apple has discovered that many of the unauthorized iPhone unlocking programs available on the internet cause irreparable damage to the iPhone’s software, which will likely result in the modified iPhone becoming permanently inoperable when a future Apple-supplied iPhone software update is installed.”

  There were genuine reasons that Apple was concerned about jailbreaking. Guido says that the JailbreakMe episode “could have been turned around really quickly into an attack tool kit and we’re lucky that it wasn’t.”

  The vast majority of jailbreakers, like Wang, were eager to expand the capabilities of a clearly capable machine. They weren’t hacking into other people’s phones (besides jailbreaking Apple Store display models, an easily reversible prank); they were only jailbreaking their own, to customize and open them up—and, of course, for the sport of it.

  Apple’s threat went unheeded. Apple patched the bug that enabled the TIFF exploit, setting off what would be a years-long battle. The iPhone Dev Team and other jailbreaking crews would find a new vulnerability and release new jailbreaks. The first to find a new one would get cred. Then Apple would fix the bug and brick the jailbroken phones. When asked about jailbreaking at a press event, Steve Jobs called it “a cat and mouse game” between Apple and the hackers. “I’m not sure if we are the cat or the mouse. People will try to break in, and it’s our job to stop them breaking in.”

 
