Ultimately, the internet will need to evolve into using different energy sources. The internet is slow and inefficient compared to the way the brain processes information, primarily because the brain’s communication system uses chemical and electrical currents whereas the internet currently uses electricity alone. At some point, we will likely create a chemical system to increase the amount of information that can move across the internet. That insight will probably come from research on the chemical communications of the brain, or perhaps even from research on ant communication.
VI
The internet continues to evolve, grow, and increase its overall carrying capacity, but eventually we will run out of virtual lichen on our island. When that happens, it will not necessarily be a bad thing. Just as the brain gains intelligence as it overshoots and collapses, so too may the internet. The brain can be our guide to the internet because the two are so similar. We have substituted hardware for wetware, but the fundamental structures are the same: they are both complex networks capable of calculating, remembering, and communicating. Carrying capacity is never infinite, so we will eventually hit a breakpoint. But when that happens, the results will be exciting to see and will likely yield a smaller, yet more efficient, nimble, and—dare I say it—intelligent internet.
Four
Slaves | Neurons | The Web
For all their simplicity, neurons do some pretty amazing things. They are autonomous cells that aren’t physically connected, yet they communicate with one another. They are plastic, in the sense that they are able to switch tasks when called upon. The same neurons can be used for language, hearing, decision making, and virtually every other function in the brain. It is the humble neuron that allows us to think and act.
Despite that remarkable power, neurons are largely listless. Neurons turn on and off, nothing more. They aren’t aggressive, they don’t fight to survive, and they generally perform the same task over and over. They are selfless; their goals and objectives are those of the larger whole. Neurons act as a network, not as individuals.
The modest ant does some amazing things as well. Ants defend their territories against predators; they form complex social structures; they use tools and create meals from non-food substances. One species even invented air-conditioning for its nests, based on a system of pushing out warm air and pulling in fresh air, millions of years before we humans thought of it.
Unlike neurons, ants are not inert. Some ants are downright aggressive, going to surprising lengths to claw themselves to the top of the heap. There is one group that is especially so—the roughly 100 species of ants collectively known as slave-making ants.
Slave-making ants don’t clean house, cook food, or take care of their babies. They actually don’t even know how to do any of those things. They’re pretty much good at only one thing: finding others to do their work. Slave-makers raid the nests of other ant colonies and steal all their eggs. The ants that hatch from those eggs grow up as slaves, and they do pretty much everything for their masters: groom them, feed them, defend them from bigger insects, you name it. If the colony moves to a new nest, the slaves will even carry their masters to their new abode.
In ethical terms, stealing babies and making them into slaves is pretty bad. But murder is even worse. In order to steal the eggs of another colony, the slave-makers must first go to war—these prodigious ants ruthlessly kill any ant that gets in the way. Opportunistic slave-maker queens follow these raiders into a colony and take advantage of the chaos created by the raid. The young aspiring queen slips into the nest, finds the queen ant, and literally chokes her to death. Then she eats the old queen so that she smells like the queen’s pheromones. The rest of the ants never know the difference, giving the young slave-maker queen an instant colony of her own.
The peaceful, simple neuron has no direct parallel to a slave-making ant. But within the brain is another novelty that is very much a slave-maker: the idea. Ideas sit on top of neurons, riding them in much the way that slave-makers ride their slaves. But more than that, ideas propagate: they jump from brain to brain as they spread with a fervor no less intense than that of slave-maker ants attacking a colony.
Ideas can be good or bad. Some, like the cure for polio, are positive; others, like fascism, have evil consequences. But ideas tend to spread and infect the minds of others. In many ways, they are like diseases in that they enter our minds without warning and cannot be stopped. If an idea is contagious, it is committed to memory and spread to others; otherwise, it is relegated to the periphery of our unconscious. Ideas are more powerful than any physical force: they alter people’s minds, making them do things they wouldn’t otherwise do. In this way, ideas, perhaps even more than actions, change the course of history.
I
Slave-making ants are the Napoleons of the ant kingdom. For these mini-conquistadors, it seems that the sky’s the limit. So what’s stopping them from uniting all ants into a massive supercolony and taking over the world?
It is a pretty frightening thought given the sheer number of ants on the planet. There are more ants than mammals; most chillingly, the total weight of all the ants on earth exceeds that of humans. If ants could organize themselves into a supercolony, they could conceivably be the most powerful species alive.
Fortunately for us, ant colonies are networks; they only grow until they reach a breakpoint. Once they’ve reached that point, even slave-makers stop raiding to grow their ranks. As we’ve seen, carrying capacity is non-negotiable.
The carrying capacity of an ant colony is bound by physical factors including the abundance of food and the availability of materials to build nests. But some ant colonies, such as Deborah Gordon’s harvester ants, have plenty of both. They live underground in vast wastelands—they could conceivably expand their nests exponentially. There are hundreds of colonies in a small area, so clearly there is plenty of food and water. So why do harvester ant colonies top out at around 10,000 ants and maintain that population for years? Why don’t slave-making ants just steal the carrying capacity of another colony and thus increase their own?
It turns out that physical capacity is a necessary condition for a network, but it is not sufficient by itself. The carrying capacity of a network is limited not just by physical size but also by utility. Each ant colony reaches a certain population that maximizes its ability to meet its goals: find food, remain healthy, and reproduce. Above that breakpoint, there is no additional value in adding more ants to the colony.
Not only is there no additional value, but adding more ants is counterproductive. Ants communicate mainly through scents. If you’ve ever tried to cover a bad odor with a combination of bleach, 409, and Febreze, you know that piling too many scents on top of each other is a bad thing.
Remember, ants respond to patterns of interactions. Deborah Gordon played a critical role in this discovery, and she explains it this way: “An ant uses its recent experience of interactions to decide what to do. The pattern of interaction itself, rather than any signal transferred, acts as the message. What matters is not what one ant tells another when they meet, but simply that they meet. An ant operates according to a rule such as, ‘If I meet another ant with odor A about three times in the next 30 seconds, I will go out to forage; if not, I will stay here.’” Ants are not the best at counting and have short memories, so you can imagine that too many ants make it simply too distracting for an individual ant to focus on the task at hand. Instead of growing in numbers, it makes more sense for a mature colony to form a stable population.
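Gordon’s rule is concrete enough to sketch in code. What follows is a minimal toy simulation in Python; the 30-second window and three-encounter threshold come from her example above, while the random-encounter model, the encounter rates, and the function names are illustrative assumptions, not anything taken from her research:

```python
import random

# Toy version of the interaction rule described above: an ant goes out to
# forage only if it has met enough nestmates with a given odor recently.
# The window and threshold come from Gordon's example; the rest is assumed.

WINDOW_SECONDS = 30
ENCOUNTERS_NEEDED = 3

def should_forage(encounter_times, now):
    """True if at least ENCOUNTERS_NEEDED encounters fall inside the window."""
    recent = [t for t in encounter_times if now - t <= WINDOW_SECONDS]
    return len(recent) >= ENCOUNTERS_NEEDED

def simulate(encounter_rate, duration=60, seed=0):
    """Simulate one ant for `duration` seconds; encounters arrive at random."""
    random.seed(seed)
    encounters = [t for t in range(duration) if random.random() < encounter_rate]
    return should_forage(encounters, now=duration)

print(simulate(0.02))  # sparse colony: the threshold is rarely reached
print(simulate(0.50))  # crowded colony: the threshold trips constantly
```

The point of the sketch is the crowding problem: pack in too many ants and every ant’s threshold trips all the time, so the pattern of meetings stops carrying any information, the chemical equivalent of layering bleach, 409, and Febreze on top of one another.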
II
The important thing to remember about carrying capacity is that just as a network can be made of neurons, ants, or computers, carrying capacity can differ as well. We have seen how energy is a critical factor in the carrying capacity for most hardware and biological networks. That is because physical stuff requires energy. The brain, being a physical network, is a big energy hog, which is why we see many animals consumed by the task of energy consumption.
When software systems are built on physical networks, their carrying capacity depends not only upon energy but also upon utility. Survival for this type of network means staying useful and relevant.
Think of how an idea remains top of mind: through memory. Memories are the software layer of the brain. The brain’s networks of neurons form a semantic network of memories, which carry ideas. In theory, we could hold an unlimited number of ideas in our heads, but our brains would become quite cluttered. Consider Jill Price, a woman who can remember every single day of her life since she was 14 years old. “Starting on February 5th, 1980, I remember everything. That was a Tuesday.” Price remembers literally everything, even random news events that aren’t directly relevant but that happened during her lifetime. Ask Price about Bing Crosby’s death, and she’ll respond, “Oh yes, he died on a golf course in Spain,” and proceed to give you the time of death and other arbitrary details. Price has perfect memory, what in clinical terms is called hyperthymesia.
There are only about a dozen documented (i.e., real) cases of hyperthymesia. There are no tricks, gimmicks, or strategies to these perfect memories. This condition is also not like autism or savant syndrome; hyperthymesiacs are otherwise normal. Given that, it seems pretty powerful to have a perfect memory. So why haven’t we evolved as a species to remember everything if our brains have the capacity to handle it? It turns out that human memory has a breakpoint as well. Too much of a good thing, it seems, isn’t so good.
Price has battled with depression, anxiety, and migraines, the latter of which required her to take five aspirins a day starting in early childhood. Price describes her experience as “nonstop, uncontrollable and exhausting . . . I run my entire life through my head every day, and it drives me crazy.” One would think that surely a perfect memory has an upside, possibly in learning. Not for Price and other hyperthymesiacs. Despite her “gift,” Price performed unremarkably in school, often earning Bs, Cs, and Ds. “I had to study hard. I’m not a genius,” she explains.
Worse yet, hyperthymesic memory often gets in the way of other higher-level functions, such as decision making. This makes sense: if we all had perfect memories, we would likely be very slow to remember, process information, and make decisions. Too much information would clog our brains and make it hard to sift the gold from the dirt. There is a reason our brains discard most information they process and only keep that which is most useful. Irrelevant ideas need to die, just like irrelevant neurons.
The theme at this point should be clear. All hardware networks grow only to the extent that there is physical carrying capacity—energy—available. This is true for ants, deer, neurons, and the internet. Software resides on hardware, so it is naturally bound by the hardware’s capacity. But in addition to physical carrying capacity, software networks must also yield to a utility breakpoint. For memories and ideas, going beyond that breakpoint is really bad.
III
On the internet, websites are the parallel to memories. Websites are the software of the internet, just as memories are the software of the mind. While the key to the survival of the internet is its physical carrying capacity (i.e., the size and energy consumption of its tubes, wires, processors, switches, and routers), the World Wide Web is something different. It is the usable layer of the internet—the websites and programs that allow us to communicate, store memory, and transmit ideas over the physical internet.
The World Wide Web was invented a few years earlier by Tim Berners-Lee, but it was in 1993, when CERN released the web into the public domain and the Mosaic browser arrived, that it changed the internet overnight. Prior to that, the internet was a cool idea; with the web, it became an indispensable phenomenon. The World Wide Web is home to websites that carry ideas—they store, transport, and propagate them in a way that was never possible before. A single website can hold a virtually infinite amount of information, instantly accessible to the world’s population. Websites transformed the internet from lackluster to blockbuster.
The web, along with the ideas it spreads, has grown enormously. There were virtually no websites in 1993, 20 million websites in 2002, and 600 million sites by 2012. That is astounding growth: a person growing that large from birth would be able to touch the moon by the time he was ten. In fact, we’ve had to add new words to our vocabulary to describe the size of the web. Computers first held megabytes, then gigabytes, and we struggled to grasp the sheer magnitude of those numbers. The web is now described in petabytes and exabytes. But even those numbers are too small, as it is predicted that the web will grow to a couple of zettabytes (10²¹ bytes) by 2016.
Zettabyte? Just how big is a zettabyte? It’s the equivalent of all the information contained in every movie ever made traveling across the internet every three minutes for an entire year. The web, just like a mind, is a slave to the huge quantities of ideas it shares every day.
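To put those prefixes on one scale: each is a thousand times the last, so 1 gigabyte is 10⁹ bytes, 1 terabyte is 10¹² bytes, 1 petabyte is 10¹⁵ bytes, 1 exabyte is 10¹⁸ bytes, and 1 zettabyte is 10²¹ bytes, or a billion terabytes.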
Because it is software, the web isn’t limited by any physical maximum size. But just because the web can hold an infinite number of websites doesn’t mean that it has infinite carrying capacity. The web is more than just a big collection of websites with individual addresses. The web is so named because it’s a network, like a spider’s web. As such, it is subject to the limits of carrying capacity. Much more important than the sheer number of websites is the amount of useful and accessible information.
Clearly, we find the web useful. Each of us views an estimated 2,600 webpages across almost 90 unique sites per month. We spent an average of 70 minutes a day on the web in 2012, up from only 46 minutes in 2002. When you consider that we’re also surfing, sending, and downloading at lightning-fast speeds compared to ten years ago, it’s clear that we’re getting more out of the web than ever before. But it’s getting noisy and congested, and some would say there are simply too many ants.
IV
Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think . . . what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.
Thus begins Nicholas Carr’s widely circulated 2008 article in The Atlantic, “Is Google Making Us Stupid?” (Carr expanded his point in his bestselling book The Shallows: What the Internet Is Doing to Our Brains, a Pulitzer Prize finalist.) Carr isn’t alone. His contemporaries, including Larry Rosen (iDisorder: Understanding Our Obsession with Technology and Overcoming Its Hold on Us) and Daniel Sieberg (The Digital Diet: The 4-Step Plan to Break Your Tech Addiction and Regain Balance in Your Life), agree that our dependence on the web is dangerous and that it’s changing us for the worse. Their colleague Dr. Kimberly Young (author of both Caught in the Net and Tangled in the Web) runs the Center for Internet Addiction to help the afflicted recognize and treat their high-tech dependencies. This is now treated as a serious problem: in 2013 the American Psychiatric Association classified “Internet-use disorder” as a condition “recommended for further study,” the same year a 16-year-old girl made headlines for drugging her parents with sleeping pills in order to use the internet past her curfew.
I’m not convinced that the web is damaging our brains, but it’s clearly a messy place. The web browser has become the Swiss Army knife of tools—it easily takes the place of an encyclopedia, a stack of newspapers, the dictionary, the thesaurus, the calculator, the clock, the television, and the shopping mall.
The world at our fingertips clearly comes with a price. It’s hard to use the web for even the most basic task without being distracted by links, ads, emails, tweets, alerts, and headlines. The ones we vilify and avoid are the ones there to sell us something, but even the good diversions diffuse our attention and concentration. These distractions reduce the usefulness of the web.
V
The value of the web has been questioned before. Almost from its infancy, the web was too big for us to wrap our heads around. During its first ten years, the web massively exceeded its carrying capacity, and its usefulness dropped significantly. People just weren’t finding what they needed. Google and other search engines were created to act as gateways, shrinking the web down to a manageable size. The engines drastically increased the utility of the web by pointing us toward what they consider to be the most valuable sites and allowing us to completely ignore the rest. And still, as USA Today reported in 2007, “the Web is just too big for any current organization scheme to handle.”
Despite the help from search engines, the World Wide Web has exceeded its breakpoint. The quantity of information available has outgrown what the network can usefully carry. The web continues to grow, but its utility is falling. It’s a classic case of too much of a good thing, like Price’s memory, and the whole network is set to collapse if we don’t cull it down.
Signs of the web’s breakpoint have been around for a few years. The most significant, though rarely reported, sign is that the growth of the web is actually slowing down. While growth in the number of websites was over 800 percent in the first ten years, it slowed to a paltry 19 percent in 2012 and is projected to be less than that for the next five years. The number of users is also declining; there were 4 percent fewer people using the web on their PCs in 2012 than the year before. In addition, the amount of time people are spending on the web is dropping: from 72 minutes per day in 2011 to 70 minutes in 2012. This is a small change, but the trend will continue.