
Breakpoint: Why the Web will Implode, Search will be Obsolete, and Everything Else You Need to Know about Technology is in Your Brain


by Jeff Stibel


  The age of robotics is already upon us. Robots are now being used in virtually every industry. In manufacturing, robots such as the ARC Mate are on the assembly line; in offices, the HRP-4 is delivering mail and getting coffee; in the home, Roomba is doing the cleaning. Two hundred years ago, 70 percent of people were farmers. Now, all but 1 percent of those jobs have been replaced by machines. Wired magazine estimates that in another 100 years, 70 percent of today’s occupations will share that same fate.

  Robots are interesting, but they haven’t quite achieved the goal of artificial intelligence. We have outsourced many physical tasks and even a few mental ones, but creating something in our image has largely eluded us. Computers have always been potential candidates. Even back in 1987, biologist Richard Dawkins went so far as to call computers “honorary living things” in his book The Blind Watchmaker. Computers at the time lacked the ability to act as a network of selfless cells, but that has largely changed with the internet.

  We are not merely talking about creating a listless biological system; we are talking about intelligence. If we create a life online that rivals the humble sea slug, no one will care. But if we can create something more than the composite of its parts, something that drives us toward greater intelligence—that can reproduce, learn, and drive evolution forward in a way that Darwin couldn’t have imagined—then we will have created real intelligence.

  Scientists have been racing to build the first intelligent machine, but the pursuit of artificial intelligence has been plagued by problems. The term itself may be the biggest reason: as we create machine intelligence, there will be nothing artificial about it. The field of artificial intelligence, born in the 1950s, began by trying to leverage the strength of computers to overpower human intelligence. The thought was that, with enough speed and brute force, computers could do anything that brains could do. After all, an average laptop computer can calculate five million times faster than the human mind. This approach had some success creating artificial intelligence. But it was artificial. Gammonoid quickly became the world’s best backgammon player in the 1970s. In the 1990s, the computer Deep Blue crushed its human competition at the game of chess. In 2011, IBM’s Watson computer became the Jeopardy world champion. But all of these computers were horribly bad sports: they couldn’t say hello, shake hands, or make small talk of any kind. They were big, brooding machines with immense storage and calculating power but not much more. Artificial intelligence has proved woefully inept at creating real intelligence.

  The newest trend is to reverse engineer the brain. As the theory goes, once we understand all of the brain’s parts, we can re-create them to build an intelligent system. But there are two problems with this approach. First, we don’t yet actually understand the brain. The brain, especially in terms of its parts, is still largely a mystery. Neuroscience is making tremendous progress, but it is still early. New research constantly overwrites prior theory. That is a problem with science in general: science does not deal in facts, only theories. Scientists labor to prove things wrong, but they can never actually prove something right. Eventually, we can have a large degree of confidence, but perfect knowledge just doesn’t exist. The newer the field, the greater the likelihood that current theories will be undermined by future research.

  Even something as simple as the number of neurons in our brains is hotly debated. Through the 1970s, 1980s, and 1990s, the prevailing convention was that we had roughly 100 billion neurons; research in the late 2000s revised that figure down to about 86 billion. New research is again contesting that number. Neuroscientists’ estimates have been as low as 10 billion and as high as 1 trillion. And that’s not even considering the controversy surrounding our (supposed) 100 trillion neural connections or the trillions of surrounding glial cells.

  The second issue with reverse engineering the brain is more fundamental. Just as the Wright brothers didn’t learn to fly by dissecting birds, we will not learn to create intelligence by re-creating a brain. The Wright Flyer looked nothing like a bird, but it flew just the same. We can use the brain as a rough guide just as the Wrights used birds as a guide, but ultimately intelligence will emerge in its own way.

  To be sure, there is plenty to learn from biology. The Wrights took what concepts they could from the flight of birds and applied them: wingspans, velocity, aerodynamics. But they left most of the rest—the feathers, beak, and organs—for the birds. Gaining an understanding of something biological doesn’t mean you will be able to build or engineer it.

  The internet has a real shot at intelligence, but it is pretty clear that it will look nothing like a 3-pound wrinkly lump of clay, nor will it have cells or blood or fat. Those are all critical to brains but not to intelligence. In that way, we may never create an artificial brain, but the intelligence will be very much real. Dan Dennett, who was an advocate of reverse engineering at one point, put it this way: “I’m trying to undo a mistake I made some years ago, and rethink the idea that the way to understand the mind is to take it apart.”

  Dennett’s mistake was to reduce the brain to the neuron in an attempt to rebuild it. But that is reducing the brain one step too far, pushing us from the edge of the forest to deep into the trees. This is the danger in any kind of reverse engineering. Biologists reduced colonies down to ants, but we have now learned that the ant network, the colony, is the critical level. Reducing flight to the feathers of a bird would not have worked, but reducing it to the wingspan did the trick. Feathers are one step too far, just as are ants and neurons.

  Unfortunately, scientists have oversimplified the function of a neuron, treating it merely as a predictable switching device that fires on and off. That would be incredibly convenient if it were true. But neurons are only logical when they work; they are more fallible than they are predictable. Remember, a neuron misfires up to 90 percent of the time. Artificial intelligence almost universally ignores this fact. One can’t possibly build artificial intelligence by looking at a single, highly faulty neuron. So rather than shift its focus to something else—the network—the field simply assumes that neurons are predictable.

  Focusing on a single neuron’s on/off switch misses what is fundamentally happening with the network of neurons. The neuron is faulty but the network performs amazing feats. The faultiness of the individual neuron allows for the plasticity and adaptive nature of the network as a whole. Intelligence cannot be replicated by creating a bunch of switches, faulty or not. Instead, we must focus on the network.
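
  To see why a network of unreliable parts can still behave reliably, here is a minimal sketch in Python (not from the book; the 30 percent per-unit reliability, the population size, and the majority vote are all illustrative assumptions). Each simulated unit relays a binary signal correctly only some of the time, yet a vote across a large population recovers the signal almost every time:

```python
import random

def noisy_neuron(signal, reliability=0.3):
    """A toy 'neuron': relays the binary signal correctly only
    `reliability` of the time; otherwise it fires at random."""
    if random.random() < reliability:
        return signal
    return random.choice([0, 1])

def network_vote(signal, n_units=1000):
    """Majority vote across a population of unreliable units."""
    votes = sum(noisy_neuron(signal) for _ in range(n_units))
    return 1 if votes > n_units / 2 else 0

if __name__ == "__main__":
    random.seed(42)
    trials = 1000
    single = sum(noisy_neuron(1) == 1 for _ in range(trials)) / trials
    network = sum(network_vote(1) == 1 for _ in range(trials)) / trials
    print(f"single noisy unit correct: {single:.0%}")   # roughly 65%
    print(f"population vote correct:   {network:.0%}")  # essentially 100%
```

  The point of the sketch is narrow: reliability here is a property of the population, not of any individual switch, which is the argument for studying the network rather than the neuron.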

  Neurons may be good analogs for transistors and maybe even computer chips, but they’re not good building blocks of intelligence. The neural network is fundamental. The BrainGate technology works because the chip attaches not to a single neuron, but to a network of neurons. Reading the signals of a single neuron would tell us very little; it certainly wouldn’t allow BrainGate patients to move a robotic arm or a computer cursor. Scientists may never be able to reverse engineer the neuron, but they are increasingly able to interpret the communication of the network.
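
  To make the same point concretely, here is a toy population decoder on synthetic data (illustrative only, and not BrainGate’s actual method; the unit count, the tuning model, and the least-squares readout are assumptions made for the example). A linear readout of one weakly tuned, noisy unit explains almost none of the intended movement, while the same readout across the whole population tracks it well:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 96 recorded units, 2,000 time steps, 2-D intended cursor velocity.
n_units, n_steps = 96, 2000
velocity = rng.standard_normal((n_steps, 2))

# Each unit is weakly and noisily tuned to velocity; no single unit says much on its own.
tuning = 0.2 * rng.standard_normal((2, n_units))
activity = velocity @ tuning + rng.standard_normal((n_steps, n_units))

def r2(pred, target):
    """Fraction of variance in `target` explained by `pred`."""
    ss_res = ((target - pred) ** 2).sum()
    ss_tot = ((target - target.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Least-squares linear decoders: one unit versus the full population.
w_one, *_ = np.linalg.lstsq(activity[:, :1], velocity, rcond=None)
w_all, *_ = np.linalg.lstsq(activity, velocity, rcond=None)

print(f"R^2 from a single unit:  {r2(activity[:, :1] @ w_one, velocity):.2f}")  # close to 0
print(f"R^2 from the population: {r2(activity @ w_all, velocity):.2f}")         # far higher
```

  The design point mirrors the paragraph above: the decodable signal lives in the combined activity of the group, not in any single cell.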

  It is for this reason that the internet is a better candidate for intelligence than are computers. Computers are perfect calculators composed of perfect transistors; they are like the neurons we once thought we had in our brains. But the internet has all the quirkiness of the brain: it can work in parallel, it can communicate across broad distances, and it makes mistakes. The internet is at an early stage in its evolution, but it can leverage the brain that nature has given us. It took millions of years for humans to gain intelligence, but it may only take a century for the internet. The convergence of computer networks and neural networks is the key to creating real intelligence from artificial machines.

  VI

  In the 1950s, Princeton University mathematical physics professor John von Neumann coined the term “singularity” to refer to the time when machines gain human intelligence. It is a fascinating concept, the point at which we achieve real artificial intelligence. The idea of a singularity has been thoroughly examined by famed MIT inventor Ray Kurzweil, author of The Singularity Is Near, who has said he believes it will happen in 2045.

  But singularities don’t happen in nature. Evolution is a slow, laborious process. Our intelligence evolved over millions of years. Oftentimes we don’t even notice evolution when it happens: it took us 20,000 years to realize that our brains were shrinking. If a singularity exists, it is naive to think that we will recognize it when it happens. We will not pass through some event horizon that changes history overnight and gives us intelligent machines. Machine intelligence has been evolving and will continue to evolve with the insights of our scientists—the Donoghues and Dennetts—and our innovators—the Fetzes and Bergers. Von Neumann was well aware of this; he even noted that the singularity of which he spoke was really just a sign that something was on the horizon.

  To some extent, we’ve already reached a singularity. Robots, computers, and the internet all show intelligence. And with BrainGate, we have fused mind and machine: neurons providing intelligence to computers. Who needs computers that think when we can have people who think with computers? In that respect, we have already made it through the von Neumann singularity.

  In another way, however, we will never reach a singularity. In our quest to create intelligent machines, we keep changing the rules. In the 1960s, we said a computer that could beat a backgammon champion would surely be intelligent. But when Gammonoid beat Luigi Villa, the world champion backgammon player, by a score of 7–1, we decided to rethink our definition. We reasoned in hindsight that backgammon is relatively easy; it’s a game of straightforward calculations. We changed the rules to focus on games of sophisticated rules and strategies. Backgammon is easy by that definition, but chess is another story. Yet when IBM’s Deep Blue computer beat reigning chess champion Garry Kasparov in 1997, we changed the rules again. No longer were sophisticated calculations or logical decision making acts of intelligence. Perhaps when computers could answer human knowledge questions, then they’d be intelligent. Of course, we had to revise that theory in 2011 when IBM’s Watson computer soundly beat the best humans at Jeopardy.

  We have done the same thing in nature. It was previously thought that what sets us apart from other animals was our ability to use tools. Then we saw primates and crows using tools. So we changed our minds and said that what makes us intelligent is our ability to use language. Then biologists taught the first chimpanzee how to use sign language, and we decided that intelligence couldn’t be about language after all. Next came self-consciousness and awareness until experiments unequivocally proved that dolphins are self-aware.

  With animal intelligence as well as artificial intelligence, we keep moving the goalposts. We draw a line in the sand, we reach that line, and then we cross it out and draw a new line farther down. Events leading toward artificial intelligence have been happening for hundreds of years, but there is no one big event that will generate the headline “the singularity is here.” We have already reached a singularity and will never reach a singularity. The inevitable conclusion may elude us, but it is no less a fact: artificial intelligence is real, it’s here, and it will continue to evolve.

  Eleven

  Conclusion | Termites | Extinction

  In 1994, five biologists found three large, fully mature leaf-cutter ant nests in Botucatu, Brazil. As any good scientists would do, they set out to explore the nests. They poured over a ton of cement into one of the nests, waited days for it to harden, and then started digging.

  When fully excavated, the preserved nests were a sight to behold. A marvel of modern engineering, one mound covered an above-ground surface area of nearly 725 square feet. The largest nest had tunnels extending 229 feet outward from the center, making the entire structure as large as a skyscraper and as wide as a city block. Its construction required the ants to move untold tons of soil.

  The extensive labyrinths of the largest nest contained 7,863 chambers reaching as far down as 23 feet, each with a specific purpose: there were garden compartments, nurseries, even trash heaps. The tunnel system connecting the chambers looked like a superhighway system, complete with on-ramps, off-ramps, and local access roads. The structure itself looked as if it had been designed by an architect.

  Leaf-cutter ants are known to build some of the most elaborate homes across the entire animal kingdom. In many cases, they dig chambers directly into the water table, allowing for a natural source of hydration. At the surface of leaf-cutter mounds are hundreds of openings, allowing for ventilation. Openings in the center of the mound blow out hot air and carbon dioxide from within the nest, creating an inflow of outside air from the holes at the periphery of the nest mound. In this way, cool fresh air continuously circulates throughout the nest. The ants use principles of wind velocity and thermal convection to regulate gas exchange, and this advanced air-conditioning system is important for more than just ant comfort.

  Many people have witnessed ants carrying large objects, and those who live in Central America, South America, and the southern United States often see ants carrying leaves. They look as if they’re holding up tiny leaf-umbrellas, so much so that Texas and Louisiana residents call them “parasol ants.” Most people would be surprised to learn that leaf-cutter ants don’t actually eat the leaves they diligently cut down and haul back to the nest.

  Instead, leaf-cutter ants eat fungus that they nurture, fertilize, and harvest themselves. The fungus thrives on leaves, hence all the leaf cutting and transporting. The fungus also requires precise temperatures and humidity levels, and the ants regulate those levels by plugging up holes used for inflow and outflow, depending on whether the fungus needs more humidity or more cool air. If drastic changes are required, the ants will even transport the whole fungus crop to more hospitable chambers within the nest.

  Of course, as we have seen with other ant endeavors, leaf-cutter colonies differ in their proficiency levels. Smaller, younger colonies close up their nest entrances during rain to prevent flooding, but the sealed nest then traps carbon dioxide, creating suboptimal conditions for fungus growth. Larger, mature colonies—those that are past the stage of breakpoint—work around this problem. Their numerous nest openings and deeper chambers let carbon dioxide escape while keeping the risk of flooding low.

  Mature African termite colonies do something very similar. They build high-rises—mounds averaging six to ten feet tall—and they tend to their fungus gardens within. Like those of the leaf-cutter ants, the termites’ fungi are temperamental and can only survive within a narrow temperature range. But the outside temperatures vary drastically in some parts of Africa, sometimes dropping to 35 degrees Fahrenheit at night and heating to 104 degrees during the day. To compensate, the termites spend their days opening and closing existing vents, digging new vents, and plugging up old ones.

  Like ants, termites aren’t smart; an individual termite could not possibly have enough neural firepower to conceptualize a termite mound. But the termite colony, just like an ant colony, is something different. Once the termite colony hits its breakpoint, which varies with each of the roughly 2,800 species of termite, the colony gains intelligence.

  Luckily for us, the human brain is big enough to replicate the termite mounds. Zimbabwean architect Mick Pearce, long fascinated by termite colonies, designed a building in Harare, the Eastgate Centre, modeled on their mounds. The building is the country’s largest retail and office facility, and it uses no traditional air conditioning despite the African heat. Instead, it uses the ventilation systems long employed by termites and leaf-cutter ants. Hot air is drawn out through tall chimneys while cool air is sucked in from the building’s large open space, strategically located to collect natural breezes. The building uses a mere 10 percent of the energy consumed by similarly sized buildings in the area. Unsurprisingly, the biomimetic building has been hailed as revolutionary, and Pearce has won numerous awards.

  Pearce’s accomplishment was astounding, and his accolades are well deserved. But we should be equally impressed by the mimicked as by the mimicker. It’s far easier for us to recognize human genius than to acknowledge the smarts of a termite or an ant. Clearly, we are biased. Perhaps it’s not that we are terribly species-centric, but that we fail to recognize the impact of networks across the board. This includes all of the various networks of which we’re a part: our families, our schools, our cities, and the vast network of Homo sapiens on this planet.

  Pearce didn’t act alone to build the Eastgate Centre, and that’s not just because hundreds of designers, engineers, and construction workers were involved. Since birth, Pearce has benefited from mankind’s collective knowledge and experiences. If Pearce’s mother took prenatal vitamins or received other prenatal medical care, he benefited from the human network even in the womb. As a young child, he likely built his first building with toy blocks, perfected over time by toymakers for maximum stimulation and safety. He went to school and learned about mathematics, geology, and physics, subjects that even the wisest professors could not have understood were it not for their predecessors and contemporaries.

  None of this knowledge is genetic. Pearce’s DNA contains the instructions for lots of things: breathing, eating, vocalizing. Theoretically, Pearce would have done those things even with no guidance from other humans. Our DNA also contains the code that allows us to make calculations, store memories, and learn new things. But even the humans among us with the most genetically endowed brainpower would remain primitive if born on a deserted island, if they even survived at all.

  I

  We are born into rich, healthy networks, and these networks make us vastly more intelligent, efficient, and capable than our mere biology allows. It is a concept known as “emergence,” where complex systems emerge from simple parts. In their book Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives, Nicholas Christakis and James Fowler explain it this way: “The idea of emergence can be understood with an analogy: A cake has a taste not found in any one of its ingredients. Nor is its taste simply the average of the ingredients’ flavors—something, say, halfway between flour and eggs. It is much more than that. The taste of cake transcends the simple sum of its ingredients. Likewise, understanding social networks allows us to understand how indeed, in the case of humans, the whole comes to be greater than the sum of its parts.”

 
