The One Device


by Brian Merchant


  “Do you remember back in the 2001, 2002 time frame, video on the laptop was like a little window of video that was fifteen frames per second and had horrible artifacts?” Compression artifacts are what you see when you try to watch YouTube over a slow internet connection or, in ye olden times, when you’d try to watch a DVD on an old computer with a full hard drive—you’d get that dreaded picture distortion in the form of pixelated blocks. This happens when the system applies what’s called lossy compression, which dumps parts of the media’s data until it becomes simple enough to be stored on the available disk space (or be streamed within the bandwidth limitations, to use the YouTube example). If too much data has been thrown away for the decoder to faithfully reconstruct the original video, quality tanks and you get artifacts. “The problem that we were having was, you would spend half of a video frame decoding the frame, and then the other half of the frame trying to remove as many artifacts as you could to make the picture not look like it sucked.”
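
  As a rough illustration of why that blockiness appears, here is a minimal sketch of block-based lossy compression in Python, assuming NumPy and SciPy are available; it is not the codec Apple used, and the quality_step knob is purely illustrative. Each 8×8 block is transformed, its coefficients are coarsely quantized (this is where data gets thrown away), and the block is rebuilt on its own, so the errors line up with block boundaries and read as visible squares.

```python
import numpy as np
from scipy.fftpack import dct, idct


def compress_block(block, quality_step=50.0):
    """Transform one 8x8 block, coarsely quantize its coefficients, and rebuild it."""
    coeffs = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
    quantized = np.round(coeffs / quality_step) * quality_step  # data is discarded here
    return idct(idct(quantized, axis=0, norm="ortho"), axis=1, norm="ortho")


def lossy_compress(image, quality_step=50.0):
    """Blockwise lossy compression of a grayscale image whose sides are multiples of 8."""
    out = np.empty(image.shape, dtype=float)
    for y in range(0, image.shape[0], 8):
        for x in range(0, image.shape[1], 8):
            out[y:y + 8, x:x + 8] = compress_block(
                image[y:y + 8, x:x + 8].astype(float), quality_step
            )
    return np.clip(out, 0, 255).astype(np.uint8)


# The larger quality_step is, the more data gets thrown away and the blockier
# the reconstructed frame looks, which is exactly the artifact described above.
```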

  This was a mounting problem, as video streaming was becoming a more central part of computer use. Fixing it would hold the key to building the external iSight’s camera into the hardware of the device itself.

  “And I had this epiphany in the shower,” he says. “If we don’t create the blocks, we don’t have to remove them. Now that sounds obvious, but how do you reconstruct video if you don’t have a block?” His idea, he says, was building out an entire screen that was nothing but blocks. He wrote an algorithm that allowed the device to avoid de-blocking, making the entire frame of video available for playback. “So we all of a sudden were able to play full video streams on a portable Mac for that reason. One of my patents is exactly that: a de-blocking algorithm.” Knowing that, he was ready to tackle the iSight issue. “Here’s what I had up my sleeve: CCD imagers, which the external iSight was, were much better quality than the cheap CMOS small imagers.”

  There are two primary kinds of sensors used in digital cameras: charge-coupled devices, or CCDs, and complementary metal-oxide semiconductors, or CMOS sensors. A CCD is a light-sensitive integrated circuit that captures an image by converting the light striking each pixel into an electrical charge; the intensity of that charge corresponds to how much light hit that spot, and color filters over the pixels supply the color information. In 2002, CCDs still produced much better image quality, but they were slower and sucked down more power. CMOS sensors were cheaper, smaller, and allowed for faster video processing, but they were plagued with problems. Still, Bilbrey had a plan.

  He would send the video from the camera down to the computer’s graphics processing unit (GPU), where its extra muscle could handle color correction and clean up the video. He could offload the work of the camera sensor to the computer, basically.
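
  To make that offloading idea concrete: color correction is, at heart, a per-pixel matrix multiply, exactly the kind of uniform, parallel arithmetic a GPU chews through easily. The sketch below is a generic NumPy illustration under that assumption, not Bilbrey’s actual pipeline, and the correction-matrix values are made up.

```python
import numpy as np

# Hypothetical 3x3 color-correction matrix: each row maps raw (R, G, B) values
# from the sensor to one corrected output channel.
CORRECTION = np.array([
    [1.10, -0.05, -0.05],
    [-0.03, 1.08, -0.05],
    [-0.02, -0.06, 1.08],
])


def color_correct(frame):
    """Apply the 3x3 correction matrix to every pixel of an RGB frame (H x W x 3)."""
    h, w, _ = frame.shape
    pixels = frame.reshape(-1, 3).astype(float)   # flatten to N x 3
    corrected = pixels @ CORRECTION.T             # one big matrix multiply for all pixels
    return np.clip(corrected, 0, 255).reshape(h, w, 3).astype(np.uint8)
```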

  So his team got to work rerouting the iSight for a demo that was now mere days away. “I developed a bunch of video algorithms for enhancement, cleanup, and filtering, and we employed many of those to create the demo,” he says. One of his crack engineers started building the hardware for the unit. But the hardest part of the process wasn’t the engineering—it was the politics. Building the demo meant messing with how other parts of the computer worked—and that meant messing with other teams’ stuff.

  “The politics of this was a nightmare,” he says. “No one wanted to change the architecture that drastically. The way I got that to happen is I put it before Steve before anyone could stop me. Once Steve blesses something, no one’s going to stand in the way. If you ever wanted to get something done, you just say, ‘Oh, well, Steve wants to do this’ and you had carte blanche, because no one was going to check with Steve to see if [he] actually said that, and no one was going to question you. So if you really wanted to win an argument in a meeting, you’d just go, ‘Steve!’ And then everyone would go, ‘Crap.’”

  With the new algorithms in place and the new hardware ready—the camera was built into the laptop lid; no more obtrusive wart-cam—Bilbrey’s team headed into the boardroom the night before the demonstration. They tested it, and the much more compact CMOS-powered system seemed to be flying. Seamless video in a tiny module that could fit in a laptop.

  “We said, ‘Okay, no one touch it—let’s go home.’ We’re all set. It was set up in the boardroom. We’ll come back tomorrow, Steve will see it, and everything’s fine,” Bilbrey says.

  The team showed up the next day shortly before the meeting and turned the iSight on. There were two displays; the one on the left was showing the CCD, and the one on the right was showing the new-and-improved internal-ready CMOS. And, well, that image was purple. Bilbrey was flummoxed and, suddenly, terrified. “We’re like, What happened?”

  Just then, Jobs walked in. He looked at it and got right to the point.

  “The one on the right looks purple,” he said.

  “Yeah, we don’t know what happened,” Bilbrey said.

  One of the software guys chimed in: “Yeah, I updated the software last night and I didn’t see it then.”

  Bilbrey groaned. “I was like, You did what?” he says. To this day, he sounds a little incredulous. “He updated the software! I know he was just trying to do a good thing. But when everything works, you leave it alone.”

  Steve looked at him “with kind of this smirk” and said, simply, “Fix it. And then show it to me again.” At least he wasn’t going to be fired. They worked out the glitch and showed it to Jobs the next day; he signed off on it as tersely as he’d dismissed it the day before: “It looks great.”

  That was that. It was, Bilbrey says, one of the first moves in the industry toward the now-ubiquitous internal webcam.

  “We got the patents for the internal camera,” he says. And then, the much smaller internal iPhone cam. Bilbrey would go on to advise the engineer in charge of the first iPhone’s camera, Paul Alioshin. (Alioshin, by all accounts a good and well-liked engineer, sadly passed away in a car crash in 2013.) To this day, the camera is still called the iSight. “As far as I’m aware, they’re still doing it the same way. The architecture that we created to make this work is still the architecture in place.”

  The CMOS sensor, meanwhile, made it into the iPhone and has beaten out the CCD as the go-to technology for phone cameras today.

  You can’t talk about iPhone cameras without talking about selfies. FaceTime video streaming, which Bilbrey’s algorithms still help de-clutter, launched as a key feature of the iPhone 4, joining Skype and Google Hangouts among the burgeoning videoconferencing apps. Apple placed the FaceTime camera on the front side of the phone, pointed toward the user, to enable the feature, which had the added effect of making it well suited to taking selfies.

  Selfies are as old as cameras themselves (even older, if you count painted self-portraits). In 1839, Robert Cornelius was working on a new way to create daguerreotypes, an early form of photography. The process was achingly slow, so, naturally, he indulged the urge to uncover the lens, run into the shot, wait ten minutes, and replace the lens cap. He wrote on the back, The first light picture ever taken, 1839. The first teenager to take a photo of herself in the mirror was apparently the thirteen-year-old Russian grand duchess Anastasia Nikolaevna, who snapped a selfie to share with her friends in 1914. The 2000s gave rise to the Myspace photo, and more mobile phones, even dumb phones, began to ship with front-facing cameras. The word selfie first appeared in 2002 on an Australian internet forum, but selfies really exploded with the iPhone, which, with the addition of the FaceTime cam in 2010, gave people, for better or worse, an easy way to snap photos of themselves and then filter the results.

  The deluge of integrated cameras hasn’t led solely to narcissistic indulgence, of course. Most of the time, people use the iSight to snap photos of their food or their babies or some striking-at-the-moment-not-so-much-at-home landscape shot. But it’s also given us all the ability to document a lot more when the need comes. The immensely portable high-quality camera has given rise to citizen journalism on an unprecedented scale.

  Documentation of police brutality, criminal behavior, systematic oppression, and political misconduct has ramped up in the smartphone era. Mobile video like that of Eric Garner getting choked by police, for instance, helped ignite the Black Lives Matter movement, and it has in other cases provided crucial evidence of officer wrongdoing. And protesters from Tahrir to Istanbul to Occupy have used iPhones to take video of forceful suppression, generating sympathy, support, and sometimes useful legal evidence in the process.

  “Nobody even talks about that at Apple,” says one Apple insider who’s worked on the iPhone since the beginning. “But it’s one of the things I’m most proud of being involved in. The way we can document things like that has totally changed. But I’ll go into the office after Eric Garner or something like that, and nobody will ever say anything.”

  It goes both ways, however, and those in power can use the tool to maintain said power, as when Turkey’s authoritarian leader Recep Tayyip Erdoğan used FaceTime to rally supporters amid an attempted coup.

  When my wife called me, in overwhelmed ecstatic tears, to tell me that she was pregnant—a total surprise to both of us—we immediately FaceTimed. I snapped screenshots of our conversation without thinking as we tried to process this incredible development. I just did it. They’re some of the most amazing images I’ve ever captured, full of adrenaline and love and fear and artifacts.

  Before 2007, we expected to pay hundreds of dollars if we wanted to take solid digital photos. We brought digital cameras on trips, to events; we didn’t expect to bring them everywhere.

  Today, smartphone camera quality is close enough to that of digital point-and-shoots that the iPhone is destroying large swaths of the industry. Giants like Nikon, Panasonic, and Canon are losing market share, fast. And, in a small twist of irony, Apple is using technology that those companies pioneered in order to push them out.

  One of the features that routinely gets top billing in Apple’s ever-improving iPhone cameras is optical image stabilization. It’s a key component; without it, the ultralight iPhone, influenced by every tiny movement of your hands, would produce impossibly blurry photos.

  It was developed by a man whom you’ve almost certainly never heard of: Dr. Mitsuaki Oshima. And thanks to Oshima, and a vacation he took to Hawaii in the 1980s, every single photo and video we’ve described above has come out less blurry.

  A researcher with Panasonic, Oshima was working on vibrating gyroscopes for early car-navigation systems. The project ended abruptly, right before he took a fortuitous holiday in Hawaii in the summer of 1982.

  “I was on vacation in Hawaii and out driving with a friend who was filming the local landscape from the car,” Oshima tells me. “My friend was complaining about the difficulty of handling a huge video camera in the moving car; he couldn’t stop it from shaking.” Shoulder-mounted video cameras like the one his friend was using were heavy, cumbersome, and expensive, yet they still couldn’t keep the picture from blurring. He made the connection between the jittery camera and his vibrating gyroscope: It occurred to him that he could eliminate blur by measuring the rotation angle of a camera with a vibrating gyro, and then correct the image accordingly. That, in the simplest terms, is what image stabilizers do today. “As soon as I returned to Japan, I started researching the possibilities for image stabilization using the gyro sensor.”
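
  Here is a toy sketch of the idea as Oshima describes it, assuming a single-axis gyro and a purely digital correction; real optical stabilizers move a lens element or the sensor itself, and this is certainly not Panasonic’s implementation. The gyro’s angular-rate reading is integrated to estimate how far the camera rotated since the last frame, and the image is then shifted the opposite way so the scene holds still.

```python
import numpy as np


def stabilize(frame, gyro_rate_deg_s, dt_s, pixels_per_degree):
    """Counter-shift one video frame based on a single-axis gyro reading."""
    angle_deg = gyro_rate_deg_s * dt_s                      # rotation since the last frame
    shift_px = int(round(-angle_deg * pixels_per_degree))   # shift the opposite way
    return np.roll(frame, shift_px, axis=1)                 # crude horizontal correction


# Example with made-up numbers: the camera yawed at 3 degrees/second over one
# 30 fps frame interval, and one degree of yaw moves the scene by 40 pixels.
steady = stabilize(np.zeros((480, 640), dtype=np.uint8), 3.0, 1 / 30, 40)
```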

  Alas, his superiors at Panasonic weren’t interested, and he couldn’t secure a budget. But Oshima was so confident he could prove the merits of image stabilization that he took to working nights, building a prototype with a laser display-mirror device. “I remember the moment when I first turned on the prototype camera with nervous excitement. Even with a shake of the camera, the image did not blur at all. It was too good to be true! That was the most wonderful moment in my life.”

  Oshima chartered a helicopter to fly low around Osaka castle—one of Japan’s iconic landmarks—and shot the scenery with and without his stabilization technology. The results impressed his bosses, who funded the project. Even so, after years of work, with a commercial prototype at the ready, the brass was still reluctant to bring it to market.

  “There was some opposition to the commercialization of the product,” Oshima says. “The Japanese market was focused on the miniaturization of the video camera, a craze that had not yet caught on in the U.S.” So he turned to his counterparts in North America. “In 1988, the PV-460 video camera became the first image stabilization–equipped video camera in the world. It was a big hit in the U.S., even though it was $2,000.” More expensive than the competition, sure—but the allure of steadying blurry shots proved powerful enough to warrant the extra cost.

  The technology migrated to Nikon’s and Canon’s cameras in 1994 and 1995, respectively. “After that the invention immediately spread worldwide, and, consequently, my invention is employed in all digital camera image stabilizers.” Over the years, he continued to work to bring the technology to more and smaller devices, and he seems awed by the ubiquity of the technology he helped pioneer.

  “It is true and unbelievable that this technology is still used in almost every camera thirty-four years after the first invention,” he tells me. “Now, almost every device that has a camera, including iPhones and Androids, has this image stabilization technology. My dream of equipping this technology on all cameras has finally come true.”

  To Oshima, innovation is the act of creating new networks between ideas—or new routes through old networks. “Inspiration is, in my understanding, a phenomenon in which one idea in the brain is stimulated to be unexpectedly associated with, and linked to, a totally different idea.” It’s the work of an expanded ecosystem.

  Looking back through the photos Luraschi took with my phone in Paris, I’m struck by how evocative they are: an elderly woman engrossed in her book in the park; the dancing woman cutting her own path through the crowded piazza; a man standing in the open cage of a suspension bridge. Each taken quickly and seamlessly, each crystal clear and vivid.

  My favorite shot is of a little girl climbing carelessly on an iron fence built over a retaining wall; it frames her lithe figure in an orderly web that stretches to a point on the horizon. The photo took seconds to capture and a few more to edit and share.

  CHAPTER 7

  Sensing Motion

  From gyroscopes to GPS, the iPhone finds its place

  Underneath hulking stone columns and arches, I’m standing next to Foucault’s pendulum, which swings hundreds of feet down from the ceiling of this capacious, cathedral-quiet room. The pointed tip of the lead-coated bob slowly grazes a round glass table, with morning light filtering in through stained-glass windows. This is probably the closest thing to a religious experience you can have over a science experiment.

  Maybe it’s the stained glass. Maybe I’m still jet-lagged. Or maybe it’s because the century-and-a-half-old pendulum is still a little humbling to take in. But seeing it feels a bit like wandering into St. Peter’s Basilica for the first time, or peering into the Grand Canyon. There are, after all, few better ways to be viscerally reminded that you are standing on the surface of an incomprehensibly massive rock that is spinning through the void of space than staring at undeniable proof that said rock is in fact spinning.

  This is the Musée des Arts et Métiers, founded in 1794, one of the oldest science and technology museums in the world. A former church abbey tucked in the middle of Paris’s third arrondissement, it’s at once sprawling and unassuming. A mold of the Statue of Liberty greets visitors in the stone courtyard. You’ll find some of the most important precursors to the modern computer here, from Pascal’s calculator (the first automatic calculator) to the Jacquard loom (which inspired Charles Babbage to automate his Analytical Engine). And you’ll find the pendulum.

  Jean-Bernard-Léon Foucault—not to be confused with the more strictly philosophical Foucault, Michel—had set out to prove that the Earth rotated on its axis. In 1851, he suspended a bob from a wire attached to the ceiling of the Paris Observatory to show that the free-swinging pendulum would slowly change direction over the course of the day, thus demonstrating what we now call the Coriolis effect. A mass moving in a rotating system experiences a force perpendicular to its direction of motion and to the axis of rotation; in the Earth’s Northern Hemisphere, that force deflects moving objects to the right. The experiment drew the attention of Napoléon III, who instructed him to do it again, with a bigger pendulum, at the Paris Panthéon. So Foucault built a pendulum with a wire that stretched sixty-seven meters, which impressed the emperor (and the public; Foucault’s pendulum remains one of the most popular exhibits at science centers around the world). The bob that Foucault used for Napoléon III swings in the Musée des Arts et Métiers today.
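
  For reference, the deflection that both the pendulum and the gyroscopes described below respond to comes from the standard Coriolis term, stated here in its textbook form rather than anything specific to Foucault’s setup:

```latex
% Coriolis acceleration on a mass moving with velocity v in a frame rotating
% with angular velocity Omega:
\[
  \vec{a}_{\mathrm{Coriolis}} = -2\,\vec{\Omega} \times \vec{v}
\]
% In the Northern Hemisphere the vertical component of Omega points up out of
% the ground, so the cross product nudges horizontal motion to the right,
% which is why the pendulum's swing plane slowly turns.
```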

  For his next experiment, Foucault used a gyroscope—essentially a spinning top with a structure that maintains its orientation—to more precisely demonstrate the same effect. At its fundamental level, it’s not so different from the gyroscope that’s lodged in your iPhone—which also relies on the Coriolis effect to keep the iPhone’s screen properly oriented. It’s just that today, it takes the form of a MEMS—a microelectromechanical system—crammed onto a tiny and, frankly, beautiful chip. The minuscule MEMS architectures look like blueprints for futuristic and symmetrical sci-fi temples.

  The gyroscope in your phone is a vibrating structure gyroscope (VSG). It is—you guessed it—a gyroscope that uses a vibrating structure to determine the rate at which something is rotating. Here’s how it works: A vibrating object tends to continue vibrating in the same plane even as its support rotates. So the Coriolis effect—the result of the same force that makes Foucault’s pendulum drift to the right in Paris—makes the object exert a force on its support. By measuring that force, the sensor can determine the rate of rotation. Today, the machine that does this can fit on your thumbnail. VSGs are everywhere; in addition to your iPhone, they’re in cars and gaming platforms. MEMS have actually been used in cars for decades; along with accelerometers, they help determine when airbags need to be deployed.
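
  A back-of-the-envelope sketch of the measurement such a sensor makes, with entirely made-up numbers rather than figures from any real MEMS part: the proof mass vibrating with velocity v inside a frame rotating at rate ω feels a Coriolis force F = 2mωv, so measuring F (via the tiny displacement it causes) gives ω.

```python
def rotation_rate(coriolis_force_n, proof_mass_kg, vibration_velocity_m_s):
    """Solve F = 2 * m * omega * v for the rotation rate omega, in radians/second."""
    return coriolis_force_n / (2.0 * proof_mass_kg * vibration_velocity_m_s)


# Illustrative MEMS-scale values (not from any datasheet): a 1-microgram proof
# mass vibrating at 0.1 m/s that feels a 1-nanonewton Coriolis force is
# rotating at 5 radians per second.
omega = rotation_rate(coriolis_force_n=1e-9, proof_mass_kg=1e-9, vibration_velocity_m_s=0.1)
print(f"Estimated rotation rate: {omega:.2f} rad/s")
```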

 
