Dogfight: How Apple and Google Went to War and Started a Revolution
* * *
The similarities between Singer’s and Howe’s patent fight and the smartphone wars of today are striking. It’s tempting to distinguish today’s fights by arguing that software is so much harder to understand. But judges and juries have always struggled to understand new technologies. In 1911, Judge Learned Hand was overseeing a patent case in the biomedical industry challenging whether adrenaline could be patented. He ruled that it could be, but he also wondered why he was being asked to decide at all. “I cannot stop without calling attention to the extraordinary condition of the law which makes it possible for a man without any knowledge of even the rudiments of chemistry to pass upon such questions as these. The inordinate expense of time is the least of the resulting evils, for only a trained chemist is really capable of passing upon such facts.”
What is different with software patents today is that despite thirty years of trying, the legal precedents governing what makes a good or a bad patent remain in dispute. In the early days of the computer industry the answer to this question was easy: software wasn’t patentable. It wasn’t viewed as a product separate from the computer itself. Courts didn’t think software did much anyway beyond telling a machine to do mathematical calculations faster. Math, being a part of nature, was unpatentable.
But by 1981, with the PC gaining traction in businesses and an entire industry of software entrepreneurs emerging, the U.S. Supreme Court changed that in Diamond v. Diehr. It said that a computer program used to calculate how long a machine would heat and cure rubber was patentable. The software was more than just a series of mathematical equations, the court ruled. It was a unique process for determining the best way to mold rubber. The patent on molding rubber without software had long expired. But the addition of software had created a new, unique, and patentable way of doing it.
This decision became critical to Silicon Valley entrepreneurs in the 1990s. Up until then the legal convention had been to protect software under copyright law, given courts’ previous resistance to permitting software patents. Writing software is creative just as writing books or music is, so it should be protected the same way, attorneys thought. With the English language, letters are used to form words and express ideas. With the language of music, notes are used to tell musicians what sounds to play with their instruments. With computer language, software code is written to tell machines what to do.
But in 1987 Quattro, a spreadsheet program from Borland, pushed the limits of copyright protection for software and, in the process, rendered it all but useless. Back then there were many spreadsheet programs for the PC, with Lotus 1-2-3 being the most dominant and successful. Borland, in an effort to make Quattro easier to use, copied the words and menu hierarchy of Lotus. It didn’t want customers to be confused switching back and forth between Lotus and Quattro. It didn’t use any of Lotus’s underlying code; it just provided users with a “Lotus Emulation Interface,” which allowed them to switch between the Quattro look and the Lotus look.
Lotus sued, saying its menu system was protected under copyright law. But, to the surprise of all of Silicon Valley, it lost. “In many ways, the Lotus menu command hierarchy is like the buttons used to control, say, a video cassette recorder (‘VCR’),” said Judge Norman Stahl of the U.S. Court of Appeals for the First Circuit in 1995. “A VCR is a machine that enables one to watch and record video tapes. Users operate VCRs by pressing a series of buttons that are typically labeled ‘Record, Play, Reverse, Fast Forward, Pause, Stop/Eject.’ That the buttons are arranged and labeled does not make them a ‘literary work,’ nor does it make them an ‘expression’ of the abstract ‘method of operating’ a VCR via a set of labeled buttons. Instead, the buttons are themselves the ‘method of operating’ the VCR.”
The repercussions were huge. Apple’s lawsuits against Microsoft, which continued into the 1990s—long after Jobs had left the company—would run afoul of this ruling, for example. Left with no other tool to protect entrepreneurs’ creations, attorneys turned to the Diamond v. Diehr case and started using patents for protection instead.
Using patents to protect software has proven only slightly more effective than using copyright, however. One big problem is simply technological: the patent office database is searchable, but its search is nowhere near as sophisticated as Google’s. Google’s search engine finds not only the items you are looking for but also similar items it thinks you might be looking for, based on your search history. The patent office’s search has no such smarts, which means that when the office tries to find previously issued patents on an idea, it often misses relevant filings.
Two years before Apple even started working on the iPhone—in 2003—a company called Neonode received a patent for activating its handheld device by swiping a finger across the screen. Apple later received a patent for exactly the same thing, known to many as the “slide to unlock” feature on the iPhone and the iPad. The patent office didn’t know it had already issued a patent on the idea because Apple and Neonode described the same behavior slightly differently. Neonode’s patent called the process “gliding the object along the touch sensitive area from left to right” instead of “slide to unlock.” Though the Apple patent has been challenged in Europe, it is still valid in the United States. And Apple maintains that its and Neonode’s patents describe different things. “Apple’s lawyers claim that continuously moving the finger isn’t specified in other prior art,” said Boston University economist James Bessen at a Santa Clara conference on patent reform. “We’re into a world of magic words, of word games. Courts and patent drafters play word games.”
Mark Lemley, director of Stanford University’s program in law, science, and technology, whom many consider the leading guru of software patent reform, says the fault lies mostly with the patent office: it hasn’t stopped thinking about software the way it was originally conceived—as a set of processes run by a computer. No one else thinks of software that way anymore, he said. People categorize software innovation by the solution it provides to some problem. At issue is not whether the code itself or the process it runs is unique but whether the software taken as a whole does something unique.
“We let people get away with claiming the invention in terms of the problem they solved, not the solution they provided. We don’t allow that in anything else,” he said. “We don’t allow people to claim a configuration of atoms to cure cancer. Your solution is a particular chemical.”
9
Remember Convergence? It’s Happening
Within a year of the iPad’s release it seemed remarkable that Jobs had spent a moment worrying about Android’s rise in 2009 and 2010—or at all. Android continued its astonishing growth, but iPhone sales accelerated just as fast. Quarterly sales of the iPhone 4, unveiled in 2010, doubled those of the iPhone 3GS. Sales of the iPhone 4S, unveiled in 2011, doubled those of the iPhone 4. By the fall of 2011, Apple was selling nearly 40 million iPhones a quarter. Google was doing well. It said Android was profitable. But it was still hard to see Android’s financial impact on the company. Meanwhile, at Apple, the iPhone and the app store were propelling the company to record profits. In 2011 Apple made $33 billion in profit, as much as Google and Microsoft combined. In 2010 it had passed Microsoft to become the biggest technology company by stock market valuation. In 2011 it had passed Exxon to become the biggest company, period, by stock market valuation. By the end of 2011 it was sitting on so much cash—$100 billion—that if it had wanted to use that money to become a bank, it would have ranked among the top ten in the world.
Most notably, by the middle of 2011, the iPad was proving to be a more revolutionary product than even the iPhone—and certainly the iPod. The iPod and iTunes changed the way people bought and listened to music. The iPhone changed what people could expect from their cell phones. But the iPad was turning five industries upside down. It was changing the way consumers bought and read books, newspapers, and magazines. And it was changing the way they watched movies and television. Revenues from these businesses totaled about $250 billion, or about 2 percent of the GDP.
The iPad wouldn’t have been possible without the iPhone. It would have been too expensive to build and sell for $600 in 2007. The required low-power ARM chips weren’t fast enough to run something with a screen that big. And without all the content in the app store, consumers would not have known what to do with it. At least that’s what Apple thought. But by 2011, with the app store in place and the learning curve for using Apple’s touchscreen eliminated, the iPad was spawning seemingly endless new ways to consume and interact with content.
On top of all that, the iPad was also upending the personal computer business. It was eating into PC sales the same way that in the 1980s PCs ate into sales of minicomputers and mainframes from such companies as Digital Equipment and IBM. Some iPad buyers did indeed make the iPad their third device, as Jobs had predicted. But many others decided they now needed only two, and they started ditching their Microsoft-run Dell, HP, Toshiba, Acer, and Lenovo laptops at an accelerating clip. The shift hit Dell so hard that by the beginning of 2013 it was trying to take itself private to retrench.
Jobs was particularly satisfied with this development, a confidant said—even though in the context of the other upheavals the iPad was unleashing it was almost a footnote. Thirty-five years after starting Apple with Steve Wozniak, Jobs was finally doing what he had set out to do all along: he was transforming what consumers and businesses expected from their computers. The Macintosh in 1984—the first mainstream machine to use a mouse—was supposed to have been the machine that did this. It was supposed to have taken a complicated device—the PC—and made it a consumer product that anyone could use. That failed. As everyone knows, Macs didn’t go away, but Microsoft Windows and Office get the credit for making the PC mainstream.
Yet by 2011 the world had come full circle. If you counted desktop and mobile operating systems together, Apple’s computing platform was now about as big as Microsoft Windows and Windows Mobile. And it was fitting that Dell had been hit among the hardest. When Jobs returned to Apple in 1997, Michael Dell had declared he had so little faith in an Apple recovery that if he were Jobs, he’d “shut Apple down and give the money back to the shareholders.” “Steve hated the fact that the Macintosh wasn’t mainstream right away—that everyone wasn’t just fucking sweating to get one,” the Jobs confidant said. “So we talked a lot about how we could make sure the iPad caught on right away.”
Andy Rubin and the Android team at Google scrambled to keep up with the relentless pace of Apple’s innovations. But in 2011 they were being outflanked on almost every front. Yes, there were more Android devices in use than iPhones and iPads combined. But platform size was turning out to be just one measure of dominance in the Apple/Google fight, not the only one. With the iPhone and the iPad, Apple still had the coolest, most cutting-edge devices. It had the best content for those devices. It had the easiest-to-use software. And it had the best platform for making content owners and software developers money. What Jobs understood—and what Google executives were furiously trying to get their heads around—was that this was more than just a fight over which company would dominate the future of technology. It was a battle over which would control the future of media too. The iPod was a great-looking device, but what made it popular was all the music consumers could easily buy for it. iPhone sales didn’t really take off until Jobs introduced the app store. And the iPad became mainstream only when Jobs convinced big media companies to let consumers shop for an endless supply of books, newspapers, magazines, movies, and TV shows.
Indeed, the more successful Apple became, the more Google and Android hewed to Apple’s “we control everything” approach. To make Android software look cooler and be easier to use, Rubin hired the designer Matias Duarte away from Palm in the middle of 2010. And to make Android phones and tablets sell better, he started dictating how certain Android phones were designed. While these so-called Nexus devices are built by manufacturers such as Samsung, LG, or HTC, they are largely designed and sometimes even marketed by Google.
But this was not an easy adjustment for Google’s and Android’s very engineering-driven culture. It wasn’t until Google released the Nexus S at the end of 2010 that it had a top-selling phone made this way. And it wasn’t until it released the Nexus 7 in 2012 that it had a top-selling tablet. Google didn’t have a meaningful competitor to the iTunes store until it released Google Play in 2012, which combined its Android app store with its own efforts at distributing movies, books, games, and TV shows.
You’d think that over the years Google would have developed some of the sales and marketing skills necessary to make a dent in the media business. Virtually all its revenue came from advertising. It owned YouTube—arguably the biggest distributor of video in the world. But Google had succeeded precisely because it rejected the conventions of those businesses. It had used technology to take most of the sales and marketing out of advertising and distribution and turn them into giant number-crunching exercises.
Google is trying to develop these sales and marketing skills today, but throughout 2012 and 2013 it continued to demonstrate that it has a long way to go. In 2012, when Google showed off an orblike device called the Nexus Q—which wirelessly streamed music, TV shows, and movies to any device in the house—the public response was so negative that Google decided to scrap the project entirely and not even offer it for sale. The Nexus Q was supposed to challenge the dominant streaming-media devices made by Apple and Roku. But Google said the Nexus Q would cost three times as much as its competitors’ boxes, and it would work only with consumers’ existing entertainment libraries or with Google-supplied content from its store. Consumers couldn’t, for example, watch Netflix or Hulu Plus on it. In mid-2013 Google tacked the opposite way with Chromecast, a $35 TV dongle that turns any smartphone into a remote.
In 2013 it also offered the Chromebook Pixel, a touchscreen laptop with one of the sharpest displays ever made. But it seemed more like an experiment than a real product that anyone would buy. Conceptually it worked like a smartphone or tablet—that is, with most of consumers’ information stored not on the machine but in the cloud. It came with just 64 GB of storage, no DVD drive, and not an operating system from Microsoft or Apple but Google’s own browser-based operating system, Chrome OS. It wouldn’t run Microsoft Office.
Consumers might have made that adjustment if the Pixel had been lighter than a typical laptop, had a cooler, more functional design, or offered better battery life. Google’s Office substitutes have become very competitive. But the Pixel wasn’t lighter, better looking, or less power hungry. And it cost $1,300, twice the price of an iPad with a similar screen.
* * *
In retrospect it’s odd that it took the iPad, not the iPhone, to help the media business see a future it wanted to be part of rather than fight. One of the industry’s holy grails has always been to reach customers wherever they might be. Nothing was better at that than a smartphone connected to the Internet. No other device could consistently reach customers everywhere—not only when they planned on reading a book or watching a movie, but also during their many in-between moments: while they were standing in line, using the restroom, or sitting through a dull stretch of a meeting or a show. But back then content executives thought the smartphone’s screen was too small—they couldn’t imagine their customers watching a movie or reading a book on it. And advertisers couldn’t imagine running flashy, high-budget campaigns on it.
The iPad, on the other hand, with a screen nearly the size of some magazines, offered all manner of possibilities. Could publishers offer digital subscriptions that consumers would actually pay for and wean them off the expectation that their content would always be free? Could publishers sell advertising at the same price as in their printed publications? Could Hollywood change the way it charged cable and satellite distributors for its content by making it more mobile and offering new interactive features?
The answer to most of these questions turned out to be yes. By the time Jobs died, in October 2011, users could read or watch virtually anything on an iPad. Fueled with books, magazines, newspapers, movies, and TV shows from iTunes and the app store, live TV from cable, plus content from other online services such as Amazon, Netflix, Hulu, and HBO, the iPad had become the most important new media-consumption device since television. Subscriptions to hundreds of magazines were available through iTunes. More than 1 million e-books were available for instant download through Amazon’s Kindle app or the iTunes bookstore. Almost any movie or TV show you could think of could be found on one of the streaming services.
Media executives’ negotiations with Apple and one another were bumpy at first. Newspaper and magazine publishers were worried that selling their content through iTunes would give Apple ownership of their subscriber lists, perhaps their most critical asset. Television studios such as Viacom and News Corp. worried that the cable companies would use the iPad to massively expand their audience and ad revenues but not pay them a cent.
Indeed, for about eighteen months it looked as if few of the issues would be resolved. In 2010 and 2011 separate teams of executives from Condé Nast and Time Inc. made almost monthly pilgrimages to Apple’s headquarters in California to explain why they would never negotiate the rights to their subscriber lists. But after roughly a dozen meetings, it still seemed as if Apple didn’t understand. Apple’s only big concession was agreeing to put up an “opt-in” notice each time someone subscribed through iTunes. Apple would ask subscribers, in effect, “Would it be okay if we shared the name, address, and contact information you’ve just given us with the publisher?” Condé Nast and Time Inc. executives were convinced this was just a mealymouthed way of turning them down. Their research showed that subscribers almost always responded no when presented with questions like this.