
Breakpoint: Why the Web will Implode, Search will be Obsolete, and Everything Else You Need to Know about Technology is in Your Brain


by Jeff Stibel


  Image 4.1: Will the Web Collapse?

  For the web, collapse won’t be as spectacular as the mass extinction of reindeer on St. Matthew Island or cannibalism on Easter Island. Collapse on the web means that we simply won’t use it anymore. We’re already seeing signs of this, in large part as a result of the rise of mobile applications, or apps. If the World Wide Web is a Swiss Army knife, an app is the fish-hook disgorger or the wood chisel. It performs one specific task. There’s no temptation to unnecessarily fiddle with the scissors or the can opener because those are completely different apps that you must make a conscious effort to use. Note that while many apps pull content from the web and reformat it (The Economist or Yelp), most apps bypass the web completely (Taxi Magic or Uber). They use the internet to connect to content servers, but they do not actually download from, or upload to, the World Wide Web. The distinction isn’t always obvious, but apps are fundamentally different from the web, and apps are taking an increasingly large share of the internet’s growth away from the web.

  The average iPhone owner already has 108 apps and in 2012 spent 127 minutes a day using them (up from 94 minutes in 2011); the numbers for Android users are comparable. That is already almost double the time the average person spends on the web. Of course, part of the reason we’re increasingly using apps instead of the web is that we’re mobile, but that’s only part of the story, as anyone who prefers to use their iPad rather than their laptop knows. A weather app is not only more convenient than watching the evening news, it’s also more convenient than going to weather.com.

  Imagine if you had to choose between giving up your apps or your browser on your mobile phone or tablet. That is a no-brainer for me: I’d ditch the browser, which I use no more than any other app. Certainly when I’m on the go, and often when I’m just hanging out at home, my personal collection of apps is far more useful to me than the web. As tablets replace computers, the web will be relegated to a position no higher than that of a “super app.”

  The rise of mobile isn’t doing the web any favors, and by all accounts, mobile is skyrocketing. It more than doubled in 2011 for the fourth year in a row. Cisco predicts that mobile traffic will increase by a factor of 18 in the next five years. That growth will come at the expense of the web. It should be noted that this doesn’t necessarily mean that all that traffic will be apps; smartphones and tablets come equipped with web browsers, and almost all major websites (and many smaller ones) offer mobile versions of their content. Mobile versions are often simpler, pared down, and curated versions of the real thing—just like an app.

  When you click on the tech category on CNN using a mobile browser, you’re presented with a half dozen of the most popular CNN technology stories of the day. After you get to the bottom of one story, you’ll see another half dozen links to similar stories at the bottom of the page (if you use the CNN app, your only choice is to go back—there are no additional links at the end of each story). By contrast, if you read the same story on the regular CNN site, you’ll find about 30 links to other articles, a dozen sponsored ads, and a full navigation bar that will help you find even more content. It’s tough to properly surf on a phone browser, and that’s a problem for a network that depends upon its users’ gliding from link to link.

  Clearly, mobile sites must be small in order to accommodate the small physical size of a mobile phone. But that’s only part of the story. Mobile sites are small because there is now more value, more utility, in offering less. The web has hit its breakpoint under the weight of having too much of a good thing. The newer, more nimble mobile net is cutting through the clutter in the same way that search engines cut through the clutter of the early World Wide Web.

  VI

  For all of its weaknesses, the web is still pretty awesome. But it must collapse somewhat to find equilibrium. It must become smarter, denser, more relevant. But how? We need look no further than the brain for a roadmap. Both on the web and in the brain, links are key. And not just the number of links, but also the depth and dimensionality of those links. If we can mimic the structure of the brain on the web, we can make it more meaningful and, ultimately, more useful.

  Links are vital to the survival of a network, and the web is already sparse in these terms. Each website is connected to an average of 60 other websites. By contrast, each neuron in the brain links to thousands of other neurons in a tightly connected fashion. If certain neuronal links aren’t used regularly, the links disappear and the neuron dies. Not so with the web, and that’s part of the reason it is cluttered with things we don’t need.

  The brain has two types of links: inbound (dendrites) and outbound (axons), and sometimes two neurons are connected by both an inbound and an outbound link. Two-way links are obviously more meaningful than one-way links. The brain’s software networks have similar connections. Language, for instance, is stored in memory by linking relevant information with either one-way or two-way links. The idea that a Toyota is a car creates a one-way relationship in our minds because all Toyotas are cars but not all cars are Toyotas. The idea that a car is an auto creates a two-way relationship because cars and autos are synonymous. Information is retrieved in the brain by traversing these links in a manner in which one memory activates another until the right information is located.

  Adding this layer of meaning to the web will require a change in its underlying structure, but it’s not technically difficult. Currently, links are royal blue, indicating a connection from one page to another. It wouldn’t be hard to show a two-way link in a different color, or perhaps a different font size. The power of this minor change in structure would be immense: it would immediately give users the ability to know just how strong a link is across two websites or just how close a relationship is between two pieces of information.

  Think about it this way—if Joe’s Plumbing site is linked to the New York Times, that link is probably far more relevant if the New York Times also links to Joe’s Plumbing. In the latter case, it’s less likely that Joe’s Plumbing is trying to artificially strengthen its position by linking to the authority of the New York Times. In the social world, it’s like following someone on Twitter. I can follow Natalie Portman if I want to, and that link has some value, but not nearly as much as if Natalie (please) follows me as well. When it’s a two-way link, there’s more meaning. It’s an intimate connection, an actual relationship.
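To make the distinction concrete, here is a minimal sketch in Python of how one-way and two-way links could be told apart in a link graph. The site names and the graph itself are invented for illustration; this is not a real crawl or a real ranking system.

```python
# Hypothetical link graph: each site maps to the set of sites it links to.
# The names and edges are made up for illustration.
links = {
    "joesplumbing.com": {"nytimes.com"},
    "nytimes.com": {"cnn.com"},
    "cnn.com": {"nytimes.com"},
}

def link_type(graph, source, target):
    """Return 'two-way' if both sites link to each other, 'one-way' if only
    source links to target, or None if there is no link at all."""
    if target not in graph.get(source, set()):
        return None
    return "two-way" if source in graph.get(target, set()) else "one-way"

print(link_type(links, "joesplumbing.com", "nytimes.com"))  # one-way
print(link_type(links, "nytimes.com", "cnn.com"))           # two-way
```

A browser could run exactly this kind of check to decide whether to render a link in the "reciprocal" color or font size described above.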

  Neurons also have weight to their links. In other words, there are different values to thoughts and their relationship to other thoughts. This is reflected in the relative strengths of habits and memories. There isn’t a similar link-weighting system currently on the web, but there’s no reason we can’t eventually build this facet into the web’s very fabric.

  What dimensions should we consider for link weighting? Relevance, usefulness, significance, and prominence are some of the characteristics that should factor in. Besides whether a link is one-way or two-way, the relevance and importance of a link need to be demarcated. Again, it could be a color code, where blue might represent the best links on the page, green second best, yellow third, red fourth, and so on until a link is rendered irrelevant and automatically removed.

  In the beginning, site owners may choose their most important links, but ultimately the web should be allowed to evolve through natural selection. The web could integrate how many people click on a link, how much time is spent on that page, and whether users eventually return to the original site. It could consider a user’s demographics and history to make a personalized prediction of link relevance. For example, perhaps links to locations geographically close to a user should be weighted more heavily. Past history could also play a role: if a user has clicked on a link before, or if he has spent time on other sites that also connect to the link, those are important factors. Taken together, all of this data could allow for dynamic link-weighting based on relevance and utility.
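A minimal sketch of what such dynamic link-weighting might look like follows. The signals (clicks, dwell time, return rate, personal history, geography) come from the paragraph above, but the specific weights, caps, and multipliers are invented assumptions, not a real ranking formula.

```python
# Hypothetical link-weighting function: aggregate usage signals plus simple
# per-user signals combine into one relevance score. All constants are
# illustrative assumptions.
def link_weight(clicks, avg_dwell_seconds, return_rate,
                user_clicked_before=False, is_nearby=False):
    """Score a link in [0, ~1.8] from usage and personalization signals."""
    score = 0.0
    score += min(clicks / 1000.0, 1.0) * 0.4            # popularity, capped
    score += min(avg_dwell_seconds / 120.0, 1.0) * 0.3  # time spent on target
    score += return_rate * 0.3                          # users come back
    if user_clicked_before:   # personal history boost
        score *= 1.5
    if is_nearby:             # geographic proximity boost
        score *= 1.2
    return score

# A heavily used, nearby, previously visited link outranks an obscure one.
popular = link_weight(800, 90, 0.6, user_clicked_before=True, is_nearby=True)
obscure = link_weight(5, 10, 0.05)
print(popular > obscure)  # True
```

The point is not the particular numbers but the mechanism: once every link carries a score like this, the color codes described above fall out of a simple sort.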

  All of these innovations would make the web more meaningful, but the ultimate feat would be to also make it smaller. To do this, we could allow links to fade automatically and ultimately disappear if they aren’t used for a period of time. This could be true of unused websites as well. This is directly analogous to the process in the brain by which irrelevant neurons selflessly commit cellular suicide. We need websites to do the same. It’s what makes us smart, and it’s what will make the web smart.
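The fade-and-prune process can be sketched in a few lines of Python. The decay rate and removal threshold here are invented for illustration; the mechanism, decay by disuse, refresh by use, removal below a floor, is the analogue of the neural pruning described above.

```python
# Hypothetical link decay: unused links lose weight each time step, used
# links are refreshed, and links below a threshold are pruned. All numbers
# are illustrative assumptions.
DECAY = 0.9        # fraction of weight retained per step without use
THRESHOLD = 0.05   # below this, the link is removed entirely

def age_links(weights, used_links):
    """Decay unused links, refresh used ones, drop those that faded out."""
    survivors = {}
    for link, weight in weights.items():
        weight = 1.0 if link in used_links else weight * DECAY
        if weight >= THRESHOLD:
            survivors[link] = weight
    return survivors

weights = {"a->b": 1.0, "a->c": 0.06}
weights = age_links(weights, used_links={"a->b"})   # a->c decays, survives
weights = age_links(weights, used_links=set())      # a->c falls below floor
print("a->c" in weights)  # False: faded out and pruned
```

Run over the whole web, a rule like this would let the network shrink toward the smaller, denser equilibrium the chapter argues for.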

  Of course, much of this may be untenable, as websites are governed by companies, not nature. The greater good is not always a primary business concern. But we can still, at a minimum, quarantine or mark as offensive those websites that make up the clutter of the web, at least until they prove to do more good than harm.

  VII

  The web will continue to be useful despite having hit a breakpoint. Our goal now should be to shepherd it toward equilibrium so that it doesn’t implode. We must make it less distracting and more meaningful, but I personally don’t subscribe to the theory that it’s hurting our brains.

  Every new technology has its cynics. When Gutenberg invented the printing press in the fifteenth century, many believed that mass printing would alter our brains. According to Plato, even Socrates objected to the written word itself. Many warned that the telegraph, telephone, radio, movies, and, of course, television would rot our brains.

  The journal Nature published an article in 1889 titled “Nature’s Revenge on Genius,” arguing that each new technological invention causes us to grow increasingly dim. A predecessor to Nicholas Carr, the author cited a newfangled technology called electricity as public enemy number one. It is worth reading a few sentences to see just how much fear surrounded something so commonplace to us today, and how similar it seems to the way some people describe the web now:

  At present our most dangerous pet is electricity—in the telegraph, the street lamp and the telephone. We have introduced electric power into our simplest domestic industries, and we have woven this most subtile of agents, once active only in the sublimest manifestations of Omnipotence, like a web about our dwellings, and filled our atmosphere with the filaments of death . . . It is urged that electric lighting is not essential to the public comfort. It is not a necessity but a luxury. By abolishing it we reduce our danger appreciably . . . The telephone is the most dangerous of all because it enters into every dwelling. Its interminable network of wires is a perpetual menace to life and property.

  Atomic bombs aside, history has generally shown that new technologies end up enabling us, not encasing us in “a web about our dwellings.” The medium is never the issue, though the content occasionally can be. The web itself isn’t broken; it’s up to us to separate the bad from the good in order to make it even better.

  Five

  Bread | Mobile | Social

  After the fall of the Soviet Union, an economist in London received a question from a colleague in Saint Petersburg, Russia: “Who is in charge of the supply of bread to the population of London?” For someone living in a democratic country, that question is almost impossible to understand. To say it sounds strange understates the point. But the British economist bit his lip and gave a serious reply: what do you mean, who’s in charge? “Nobody is in charge.”

  Accustomed to an environment in which every detail of life was centrally planned, officials in the newly capitalist Russia found it unbelievable that such a thriving supply-chain network could be self-organized. When you think about it, it is truly incredible and counterintuitive that elaborate tasks can be accomplished without anyone in charge.

  But that’s the way free markets work: they thrive by having limited central command, limited bureaucracy, limited regulation. In a free market, networks of companies seamlessly provide services that rely on complex chains of events. For something as simple as bread, there are dough manufacturers, bakers, storefronts, and a host of other businesses that must work seamlessly without central leadership to deliver a product to consumers. Even the British economist, after thinking about the question, retreated from the obviousness of his answer: “the answer, when one thinks carefully about it, is astonishingly hard to believe.”

  Perhaps this is why it took scientists until the twentieth century to figure out that the brain has no central command, and that neither the queen nor any other ant directs the colony. Perhaps the US government understood this when it relinquished control of the internet in 1995. Perhaps it was also clear to the European Organization for Nuclear Research (CERN), where the World Wide Web was created, when they wisely announced in 1993 that the web would be free for anyone to use.

  So all of our favorite networks—the brain, ant colonies, the internet, and the web—organize themselves. Just like London’s network of wheat growers, bakers, and grocery stores, which ensure that any Brit with a few pounds can buy a loaf of bread, no one is in charge.

  I

  Having no one in charge is great for bread distribution, great for Mother Nature, even great for the internet. But companies need to have someone in charge. Companies are more like dictatorships than democracies, and this is generally a good thing. Top-down leadership tends to work for businesses, as centralized decision making is often critical to success. But when a business runs a network, central command produces results no better than those we saw in Russia before the fall of Communism: long bread lines and systemic failures that led to collapse. Businesses are focused on growth and profitability, goals that are often at odds with efficiency and stability. Inevitably, business leaders will push a network beyond its breakpoint and just keep on pushing.

  It is ironic that one of the first people to be in charge of a major social network was Tom Anderson. As a high school freshman in 1985, he figured out how to hack into the computing system of Chase Manhattan Bank, which should have been impenetrable to a high school kid. Anderson shared his newfound knowledge with his friends, ultimately prompting a major FBI raid in which agents confiscated the computers of Anderson and 24 of his buddies, thereby shutting down the hacking scheme.

  The loss of his beloved computer didn’t slow Anderson down, and after working at several successful technology companies, he cofounded MySpace in 2003. Anderson and his team propelled the network into hypergrowth, leading it from a handful of users to several hundred million accounts by 2006, the same year it became the most visited website on earth, with more monthly visitors than Google.

  Back in 2007, I wrote a prediction for my book Wired for Thought that MySpace would soon be overtaken by a lesser-known network called Facebook. Most people, including my Harvard Press editors, thought I was delusional. MySpace was all the rage, and experts were predicting it would overtake Google, Yahoo!, the written word, even communication itself. But just like every social network before it, MySpace flamed out.

  Fast forward to the writing of this book in 2012, and it’s Facebook’s turn. By the time this book is edited, published, distributed, and in your hands, we will have some irrefutable stats about Facebook’s usage decline. This is a company that went public in May 2012 at $38 per share and was trading at less than $19 two months later. Why the huge drop? Pundits suggested many reasons, including poor investor relations, lack of revenue, and expiring lockups. Others repeated far too often that perhaps CEO Mark Zuckerberg was “in over his hoodie.” Now this is a guy who is a revered prodigy, who built Facebook from the ground up. (Criminal record aside, MySpace’s Tom Anderson was no slouch himself.)

  Despite his intelligence, Zuckerberg may in fact be part of the problem. Facebook, like all social networks before it, is a manmade network controlled by one company, and that company is controlled by one man. It could be argued that it wasn’t created with the explicit goal of making money, but it is a company with stakeholders and—as of 2012—public shareholders. Growth and profitability are necessary to the survival of the company, but that could mean the demise of the network itself.

  When networks are free to grow unencumbered, like bread distribution or the internet, they naturally hit a point of equilibrium. That is why they work: efficient networks yield to the environment’s limits. But when a network is run for profit, it is only natural to want to push beyond the breakpoint. The danger is that unnatural, forced growth can cause a network to collapse. This is what happened to MySpace, and it is now happening to Facebook.

  This situation raises several questions. Is it possible to balance the needs of a for-profit company with the natural network stages of growth, breakpoint, and equilibrium? Is it possible for someone to be in charge of such a network, or does centralized control automatically kill it? Is it inevitable that controlled networks will collapse, or can the smartest among us, the Zuckerbergs of the world, shepherd them through their natural stages and help the networks reach intelligent equilibria?

  II

  Clearly, central command hasn’t limited the first stage of networks, exponential growth. We’ve witnessed the incredible growth of social network after social network. Not just Facebook and MySpace, but also Classmates.com and Friendster. Each in its time looked as if it could grow forever. But Classmates.com, Friendster, and MySpace each imploded after hitting their breakpoints.

  Like all software, social networks are limited in terms of their ability to engage and satisfy their users: their survival depends upon their usefulness. Users hope to accomplish specific tasks when they log on to a social network to, well . . . socialize. Having too few users creates an obvious problem: there aren’t enough people to interact with. The growth phase is necessary to build critical mass, and for every MySpace there are at least a dozen other networks that never got remotely close to the right size.

 
