I Live in the Future & Here's How It Works: Why Your World, Work, and Brain Are Being Creatively Disrupted


by Nick Bilton


  One reason for increased digital book sales is the price, but the other factor provides more evidence that people pay for experiences, not just content. The purchasing experience on the Kindle is seamless, simple, and instantaneous. Let’s say you hear about a new book from a friend. You can navigate to the Kindle bookstore through the device, and your new book is in your hands minutes later.

  But Amazon doesn’t sell just books in its online store. It also sells magazines and newspapers, yet the number of subscribers is surprisingly small. The exact numbers of magazine and newspaper subscribers for the Kindle are not made public, but as Time magazine reporter Josh Quittner wrote in May 2009, “The Wall Street Journal is the second best-read newspaper and has sold a mere 5,000 subscriptions to date.” An internal New York Times memo leaked on the Web said that the Times, the number one selling newspaper on the Kindle, had 10,000-plus subscribers. Although I was unable to find exact numbers, a source at Amazon told me that the highest subscription count for any newspaper or magazine across all three Kindle devices combined is in the mid-tens of thousands. So why are book sales so high and sales of other publications so low? Because the newspaper and magazine experience is terrible; publishers are selling only the content. Amazon allows them to distribute their content only in a booklike layout. There are no images, no typography, no consistency with the brands, just text on a page.

  Other players in the early e-reader space included the consumer technology giant Sony, which tried to jump into the e-reader business by promoting comfort and ease of use but fell short on both. The Sony Reader never managed to make the inroads that Amazon has because the entire experience was flawed. The first generations of the product required a USB cable to move books to the device, and the company didn’t announce digital newspapers until December 2009. I owned first- and second-generation Sony Readers, and getting books out of the proprietary Sony bookstore was utterly painful.

  Now Amazon, Sony, and other players may fall behind Apple’s iPad and a wave of e-readers built on Google’s Android mobile operating system, both of which offer book reading as just one of dozens of applications.

  Apple, mimicking its iPod experience, has aimed for the high end, assuming that a color screen, a very fast response time, and the “cool” factor will help it grab market share in books the way it has in music. The iPad, at least initially, sold for twice the price of a Kindle, and electronic books in Apple’s iBookstore sell mostly for $14.99, a price that pleases many publishers but sets up a duel with Amazon.

  When Steve Jobs, Apple’s chief executive officer, presented the iPad to an audience of six hundred geeks in January 2010, he talked about consistency, simplicity, and a uniform interface. Jobs walked his audience through the simple experience of buying and then reading a book and, while doing so, talked about those three points in detail. He explained: “We’ve created the new iBookstore, fully integrated with the iBooks apps, to allow you to discover and purchase and download e-Books.” As Jobs sat in a black chair on stage, he ushered the audience through the iBooks application and navigated through the store features. He explained that “if you’ve used iTunes or the App Store, you’re already familiar with this [interface].” Jobs then purchased a book that downloaded onto the device instantly. The entire four-minute demo probably doesn’t sound that exciting, but for Jobs and Apple, that’s the point. They don’t want people to have to think about anything other than the decision to make a purchase. The rest should be a seamless and simple experience.

  The search engine giant Google, which has been busily scanning millions of books over the last few years while also navigating a copyright lawsuit with authors, joined the scrum. It is selling works through a Google electronic store called the E-book Marketplace; the books will be readable on any device, including e-readers and mobile phones, and can also be sold through bookstores. A Google executive I spoke with for a Times article said that the company hopes to use its prowess in search to create a flawless experience for customers.

  The shopping experience on the e-reader device is only one of the challenges booksellers need to grapple with. There’s also the way the story is told. Consumers yearn for more interactivity and for the richer kinds of storytelling afforded by color screens, multitouch input, and social interaction with friends. In some instances, content creators will have to experiment and engage readers, or viewers, in new ways.

  In the end, all these companies are going to be on the same footing. Google, Apple, Sony, Amazon, Barnes & Noble, small digital booksellers, and even some publishers will all offer books directly to consumers and will all be selling the same content. Both booksellers and book creators will have to figure out how to offer a better experience that entices consumers to come to their stores—either with a different kind of adventure around the purchase of a new book or with the supplemental storytelling these newfangled devices make possible.

  It’s impossible to predict what will be the draw for an individual customer. Some may base a purchasing decision on price alone. Others will be drawn by the ease of the purchasing experience, the level of interactivity within the story, or the extended life of a narrative. Still others may base their decisions on immediacy alone. But one thing’s for sure: The content is only one tiny piece of the puzzle.

  What the Future Will Look Like: A Storytelling World, with Participation

  As the big guys wrestle with which products will produce the best and most significant experience, people like me will keep experimenting with what already has been created, demanding immediacy, personalization, networking, and easy access. I’m an early adopter who excitedly embraces and tries any new technology I can get my hands on. For some, it may seem like I live in the future. Before too long, though, you will be there with me. Or as the science fiction writer William Gibson once said: “The future is already here—it is just unevenly distributed.”

  I realize that although these technologies are creating some amazing changes in the way we live and work, they’ve also upended entire industries and generated a great deal of fear and anxiety. Among the points I hope you will take away from this journey into what’s ahead is that such fears are a normal part of adapting to radical changes in the way we live. It’s understandable to feel unsettled and disoriented. But again and again in our history, we have adapted and moved forward, and in doing so we’ve learned to tell better stories than previous technologies allowed. We survived and flourished as trains replaced wagons and cars replaced horses; as radio and then television brought information directly into our homes and then fought for ownership of the living room; as comic books, video games, and iPods provided new forms of entertainment. As a society and as an economy, we will survive and then flourish amid this new influx of fast-moving information as well.

  In addition, be reassured that storytelling will remain a central part of our lives. We may tweet in 140 characters, but more and more of those missives already are delivered with links to photos, videos, and stories. In other words, they’ve become essentially personal headlines with more detailed information attached. And even when we all transition from paper to pixels, we will still read book-length content and consume news articles written by people who are paid a salary to share a 1,000-, 5,000-, or 70,000-word story.

  Long-form content will not disappear even if we consume it in forms different from paper, even if it comes with embedded videos or with sensors and augmentation as part of the narrative. And people will still pay for all these forms, with significant and meaningful content as a crucial part of that experience.

  Working in the newspaper industry, I’m acutely aware of the constant anxiety among both my colleagues and my readers over the fate of news. The uneasiness is apparent and real; newspapers have been going out of business at alarming rates, leaving a question hanging in the balance: What is the future of news, and does it exist?

  I believe that there will still be a number of news outlets and a news business in the future—though they will look drastically different from the way they look today. Some of these organizations might be increasingly specific or personal, serving the needs of a relatively small number of readers rather than a mass market, sort of like in the olden days. Before newspapers and journalists as we know them, individuals were paid to be professional correspondents for rich merchants and powerful clergy. In the sixteenth century, those correspondents were sent to other cities to gather information and send letters back to their benefactors detailing shipping news and prices. The first newspapers, then, were private intelligence for individuals.

  As the first rough newspapers started to take shape, people were still part of the conversation. Some historians believe that several of the first newspapers in England encouraged readers to write their thoughts on the pages before passing the paper along to another reader. It wasn’t until the eighteenth century that publishers began to sell news to the broad public.

  For news to remain relevant to consumers in the future, many newspapers and news organizations are going to have to adapt and change. The theories behind the role of news shifted dramatically in the 1920s as two writers and thinkers, Walter Lippmann and John Dewey, entered a growing public debate about the role of the newspaper in society.6 Lippmann argued that the public was unable to govern itself properly. Instead, he believed, experts, or journalists and those in government, were needed to tell the people what they had to know. It was their job to explain science and politics to the masses. He argued that workers had plenty to worry about trying to pay their bills and put food on the table. And most important, they didn’t have the time or even the knowledge to ask informed questions about government or society. Lippmann essentially argued that the role of a journalist was to tell people what they needed to know and what to think about it.

  Dewey, in contrast, argued that the person who wears the shoe knows where it hurts. He believed democracy worked only if people understood the problems their country faced—and that newspapers and journalism were a perfect vehicle for that conversation. Even if the masses were limited in what they understood, he thought the job of educated people, communicators, and journalists was to use their best tools to engage people in the news as participants. Essentially he said, Let’s allow the people to work with journalists and tell them what to report.

  For the most part, Lippmann’s theories won, mostly because the people who owned the newspapers and the printing presses concluded that the role of journalists was to tell people what they needed to know, not to hold a conversation. But today, the pendulum is swinging back in the other direction. With the advent of social technologies such as blogs, comment opportunities, Twitter, Facebook, YouTube, and other simple sharing tools, the masses have gained a collective voice on an unprecedented scale. The public now has an equal voice with the printing press and no longer wants to sit idle as mainstream media dictate the day’s news. The result may be a change in the way news is reported and told in the twenty-first century, a process that may become more conversational and more personal for those who want to participate in the experience. That’s an evolution that would be perfectly consistent with the history of newspapers.

  Other types of news will come in the form of bytes and bits. As more information becomes available from our digital devices and sensors, we will see reporters emerge from sensors and algorithms. Open government initiatives and the creation of websites such as data.gov are forming hubs for the government to share information and data to be used in stories and information gathering. We are entering an era of news reporting that will blur the line between algorithmic news gathering and storytelling with human curation and explanation. Stir in social aspects and shared voices and you’ve got a perfect mix of Lippmann, Dewey, computing, and the general public.

  What the Future Will Look Like: Undercutting Yourself

  For newspapers and other media businesses, the changes have been wrenching, and some news outlets have lost ground to technology companies that aggregate news, such as Google and Yahoo!, which are more nimble in posting news as it happens. Responding quickly to the changes can be at odds with responding thoughtfully, and some companies end up paralyzed by the challenge. But with tastes and technology evolving rapidly, the ones that hesitate may truly be lost and the ones that move aggressively may win the game.

  Look at Apple, the early computer company that has moved into music, music players, cell phones, and new electronic readers. In 2007 Steve Jobs, Apple’s chief executive officer, had to decide whether the company should introduce a new product that could drastically hurt the sales of a successful current product.

  For almost thirty years, Apple’s bread and butter was selling personal computers, related software, and peripherals. But in 2001 Apple introduced the iPod, a small music player that eventually would change the entire shape of the music industry. By 2006, the iPod accounted for the majority of its core business. In late 2006, Apple reported that it had sold an astounding 21 million iPods in the last quarter of the year. The iPod business and iTunes combined brought in $4 billion of the company’s $7.1 billion in revenue for the quarter. In comparison, its Mac computer sales accounted for $2.4 billion in revenue. You would think that Apple would do everything it could to keep those iPod revenues. But the company had other plans.

  Apple recognized that music players were eventually going to simply be additional pieces of software built right into a phone or another device. So in 2007, Jobs stood on the stage at the Macworld developer conference in San Francisco and made two announcements: First, the company was changing its name from Apple Computer to simply Apple—a clear recognition of the company’s metamorphosis. Second, Apple was introducing a new product line: the iPhone.

  Jobs explained to the crowd of wide-eyed geeks that this new sleek shiny gadget wasn’t just a phone. Sure, it would make phone calls (though not especially well, thanks to the AT&T network). It was also designed for e-mail and surfing the Web and had a mapping application and a calendar, and by the way, it had a free iPod stuffed inside.

  This was a risky move. Customers who purchased an iPhone surely wouldn’t need an iPod, too, and the phone would definitely cannibalize the sales of the company’s core business. But Jobs knew that if he didn’t move ahead from the iPod, another company would.

  The move paid off. In the first quarter of 2010, Apple announced that its revenue had jumped to $13.4 billion, nearly twice the total from late 2006. Apple grew dramatically after the iPhone launch. In early 2010 its market capitalization was a staggering $222 billion, surpassing its biggest rival, Microsoft, as the world’s biggest technology company. Although the company sold 10.9 million iPods, half the number it sold three years earlier, it also sold 8.75 million iPhones.

  Jobs knew that if he didn’t undercut himself, someone else would. There was an incredible amount of risk in subverting his core business with a new product, but this is a philosophy Jobs learned in the early days of computing, when Apple lost the computer wars to Microsoft, then the dominant force in the computing world. Innovation has clearly played a huge role in the company’s rise since Jobs returned in 1996. But that innovation has been coupled with a willingness to make a popular product obsolete, creating one of the most profitable and successful technology companies in the world.

  The same challenge applies to other industries too. Some newspapers, magazines, book publishers, and music houses are trying to hold on to their cash cows—their paper or plastic products. In the interim, digital-only companies are springing up out of nowhere to compete without the same infrastructure, expenses, or business traditions.

  What the Future Will Look Like: TMI?

  Vinton Cerf, considered by many to be the “father of the Internet” and now the chief Internet evangelist at Google, has a message about your socks.

  During a presentation at Google several years ago, Cerf explained that some day, everything will be connected to the Internet. That includes a person’s socks, so if one falls behind the washing machine, it will be able to notify him, or the other sock, of its new location.

  In Cerf’s vision—“the Internet of Things”—sensors eventually will be everywhere, embedded in our T-shirts and the medicine we take, and will be able to deliver real-time information and analysis to our persons.

  In a blog post I wrote about this topic for the Times, I explained that we’re already seeing the beginnings of this: “Doctors are using tiny cameras, about the size of a pill, to look at the digestive tract and send back information and pictures. Farming equipment can collect data from remote satellites and sensors in the ground, anticipate weather, and adapt the fertilizer to be used. And billboards in Asia can change displays based on the preferences of passers-by.”

  Understandably, the Internet of Things, as it is called, scares some people. Embedding the Internet into everything could make us reliant on technology that could crash at any moment. But even more, it means that ever greater masses of information will be created, much of it increasingly personal and unique. These technologies raise new and difficult questions about privacy and the appropriate use of what we know. Some of the folks who live much further into the future than I do highlight this challenge.

  For example, if you bumped into Steve Mann at any point in the past few decades, you would definitely remember him: He looks like a cross between a computer and a human. Mann is considered one of the first digital cyborgs and has been experimenting with wearable computing for the last thirty years.7 He invented a patented system he calls EyeTap, which he says should be used “for electronic newsgathering, documentary video, photojournalism and personal safety,” where the wearer becomes part of an “intelligence network.”

 
