But mirror neurons do even more than that. Found throughout the brain, they connect disparate themes with disparate pieces of information, allowing us to literally connect the dots. We see someone eating ice cream and motor mirror neurons fire. While that doesn’t activate the motor cortex or cause us to mimic the action, it does seem to elicit responses from other areas of the brain, effectively connecting someone else’s actions to our brains. Mirror neurons predict the actions of others, give us empathy, and place other people into the context of our lives.
Whereas WordNet demonstrated how we connect words to meanings, mirror neurons provided the necessary components to actually acquire language. It turns out that words are only a piece of the puzzle of language; the remainder comes from being able to make predictions. Scientists searching for insight into how we acquire language may have found their answer in mirror neurons.
Infants as young as a few weeks old begin mimicking the people around them. But so do birds, chimps, and even dogs. They move their mouths and tongues as others speak; they move their arms and legs as others walk; they move their heads and eyes in concert with others; they smile or frown in reaction to the expressions of others. This behavior is largely automatic; it’s pure mimicry, not backed by intention or meaning.
But eventually toddlers begin to seek purpose. They start with mindless mimicking, but as the brain develops, mimicking turns to understanding. In this way, they can learn a language. This likely happens as a result of the development of mirror neurons in early childhood, during the brain’s growth phase as it approaches its breakpoint. Once developed, mirror neurons fire in the brain when a complex pattern emerges. Mirror neurons connect language and motor areas of the brain, intricately linking actions (speech, writing, gestures, signing) to intentions.
It should be noted that mirror neurons were first found in monkeys, even though monkeys cannot speak. Because the mirror neurons of primates are primitive in comparison to those of humans, they cannot form as many meaningful connections. In turn, this limits their ability to develop complex language. Instead, primates communicate in rudimentary ways, using mirror neurons to make predictions about their environment rather than about the intentions of others.
V
Technology is devoid of mirrors. Robots, computers, and even the internet are stiff, predictable, and unemotional. The main reason for this is that we have thought of the job of machines as being mechanical, not emotional. In many ways, we have a dualistic approach: the computer can be a brain, but the mind is something different. As a result of this line of thinking, we have missed a wealth of opportunity.
Our attempts at reverse engineering the brain to build better computers led us to neurons, which we thought were logical. We now know that’s not an accurate description, as neurons are quite fallible. Mirror neurons, however, are something else entirely: there is no real logic to them, because they are constantly interpreting, guessing, and making predictions. We have always linked language to ideas and ideas to wisdom, but really it is our ability to predict that gives us our intelligence. Like language itself, that predictive ability comes from mirror neurons. Mirror neurons place our thoughts and actions into context and give us the ability to anticipate. It is that last ability that gives us wisdom.
It’s ironic that our somewhat erratic, illogical decision making actually makes us wise. In The Wisdom Paradox, neuroscientist Elkhonon Goldberg describes the feeling of decision making: “As I am trying to solve a thorny problem,” he writes, “a seemingly distant association often pops up like a deus ex machina, unrelated at first glance but in the end offering a marvelously effective solution to the problem at hand. Things that in the past were separate now reveal their connection. This, too, happens effortlessly, by itself, while I experience myself more as a passive recipient of a mental windfall than as an active straining agent of my mental life.” Goldberg calls this wisdom and finds that as he ages—to his delight—he has more of it than when he was young. “What I have lost with age in my capacity for hard mental work, I seem to have gained in my capacity for instantaneous, almost unfairly easy insight.”
Goldberg’s mind is not a calculating machine, but it has developed associations, memories, emotions, and a mechanism of anticipation that actually raises the total beyond the sum of its parts. We call this wisdom, and it’s not an out-of-the-brain phenomenon; it is a product of the brain. It is likely the work of the brain’s mirror neurons.
To figure out what to do at any given moment, the brain must gaze into the future and imagine. The brain studies its environment, watches what others are doing, and simulates possible future scenarios. Then the brain evaluates those scenarios to guess which are most likely. And then, to save energy (so that it doesn’t have to interpret, calculate, and guess again and again), it learns from those simulations. Ultimately, the thought surfaces, not in the past, not in the present, but in the future tense: “What next?”
Forward thinking is the brain’s way of chipping away at the edges of uncertainty. It makes bets based on past experiences. The human brain learns and remembers not only what happened, but also what didn’t happen. And it turns the sum of this disconnected, limited information into real insight. As Pinker notes, we make “fallible guesses from fragmentary information.”
VI
Some of the most innovative technology companies have found ways to leverage our prediction capabilities. Netflix, for example, created a technology called Cinematch that helps its customers find movies. Netflix takes a customer’s previously watched movies and, with some fancy algorithmic gymnastics, matches them against thousands of other possible films. (Netflix refers to what it does as “straightforward statistical linear modeling, with a lot of data conditioning.”) Thanks to the Cinematch algorithms, Netflix can even recommend the perfect “movie for two,” which, considering the Venus-Mars tastes of many couples, should be considered a minor miracle. The algorithm works because it takes vast amounts of information, makes predictions, and learns from those predictions.
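Netflix has never published Cinematch’s internals, but “straightforward statistical linear modeling” suggests something in the spirit of the toy sketch below: learn a set of weights over a handful of features describing a user-movie pair, then use those weights to predict ratings for pairs the user hasn’t seen. Every feature, number, and name here is invented for illustration; Netflix’s real inputs and data conditioning are far richer.

```python
import numpy as np

# Toy training data: one row per (user, movie) pair that was actually rated.
# The features are simple stand-ins: the user's average rating, the movie's
# average rating, and how many of the user's favorite genres the movie hits.
X = np.array([
    [3.2, 4.1, 2],
    [3.2, 2.9, 0],
    [4.5, 4.1, 1],
    [4.5, 3.5, 3],
    [2.8, 2.9, 1],
])
y = np.array([4.0, 2.5, 4.5, 5.0, 3.0])  # the star ratings actually given

# Fit a linear model (with an intercept term) by ordinary least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_rating(user_avg, movie_avg, genre_overlap):
    """Predict a star rating for a (user, movie) pair not yet rated."""
    return float(np.dot([user_avg, movie_avg, genre_overlap, 1.0], weights))

print(round(predict_rating(3.8, 4.1, 2), 2))  # a predicted rating for a new pair
```

The “learning” in such a system is simply the refitting: each new rating becomes another row of training data, so every prediction is eventually checked against reality and folded back into the model.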
But Cinematch is not without its problems. In the past, one of its largest weaknesses was that its algorithms tended to recommend only best sellers. Because of this, many outlying films, those that might really surprise and please particular viewers, were ignored. This was not a problem Netflix knew how to solve, so they offered a million dollars to anyone who could improve Cinematch by at least 10 percent.
With that, the crowdsourced Netflix Prize discussed in chapter 7 was born. Within months, some 25,000 teams and individuals applied for the Netflix Prize and were given a set of 100 million ratings of over 17,000 movies. After three years, two teams were finally able to make a 10 percent improvement in the recommendations. Admirable, but Netflix is still not as good as a friend at making recommendations. In fact, Netflix never actually used the winning algorithms. By the time the award was given, Netflix had realized that the logical approach was not good enough. Many experts, such as MIT’s Devavrat Shah, continue to criticize Netflix for its poor recommendations.
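For the curious, the contest’s yardstick was root-mean-square error (RMSE): the gap between predicted and actual ratings on a held-out set, squared, averaged, and square-rooted. A “10 percent improvement” meant a 10 percent drop in RMSE below Cinematch’s baseline of roughly 0.95 stars. A minimal sketch of the metric, with invented ratings:

```python
import numpy as np

def rmse(predicted, actual):
    """Root-mean-square error, the Netflix Prize's scoring metric."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return float(np.sqrt(np.mean((predicted - actual) ** 2)))

# Invented hold-out ratings, for illustration only.
actual     = [4, 3, 5, 2, 4, 1, 5, 3]
cinematch  = [3.8, 3.5, 4.1, 2.9, 3.6, 2.0, 4.2, 3.3]  # baseline predictions
challenger = [4.1, 3.2, 4.6, 2.4, 3.9, 1.5, 4.7, 3.1]  # a competing model

improvement = (rmse(cinematch, actual) - rmse(challenger, actual)) / rmse(cinematch, actual)
print(f"improvement: {improvement:.1%}")  # the prize required at least 10%
```

Of course, a lower error score is not the same thing as a recommendation that feels right, which is precisely the critics’ point.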
The problem stems from the fact that neither Netflix nor the competing teams were looking at the right data. In many ways, they were swimming in too much data and suffering from information overload. Netflix has several billion recommendations, with millions of new ones coming in every single day. Just like the field of artificial intelligence, Netflix’s approach was to take a mound of data and try to analyze it. The Netflix algorithm team truly believed that “more data availability enables better results.” They had all of the logical information they thought they needed, but logical information failed to provide perfect recommendations for illogical humans.
Netflix was missing the psychological data, what marketers call psychographics, the warm and fuzzy information that doesn’t fit well into a model. This type of data enables empathy, insight, and an understanding of individuals and their choices. In other words, it allows the system to act as a network of mirror neurons. No one was surprised when, after the announcement of the first winners, Netflix CEO Reed Hastings announced a new Netflix Prize, but this time Netflix made available much of the needed psychological and contextual data.
While Netflix has worked hard to create a savvy recommendation technology, Amazon.com holds the prize for the most sophisticated prediction system. They use this system broadly, making suggestions for products that are bought together, items that others purchase, and personalized deals they call “Quick Picks.” They even have a technology called a “Betterizer,” which gives users the ability to improve their own recommendations. Many analysts credit Amazon’s success—in 2012 they had over $61 billion in revenues—to these sophisticated predictions. Forrester Research estimated that as much as 60 percent of Amazon’s recommendations turn into sales.
In many ways, Amazon’s system is successful because it discards obvious logical data in favor of looking for behavioral patterns. In an approach that mirrors WordNet, Amazon uses something called “item-to-item collaborative filtering.” This type of collaborative filtering essentially means that for each product, Amazon builds a synset of related products. Whenever someone views a product or makes a purchase, Amazon can use spreading activation to recommend items from that product’s synset. Amazon’s engineers described it as follows: “Given a similar-items table, the algorithm finds items similar to each of the user’s purchases and ratings, aggregates those items, and then recommends the most popular or correlated items . . . Because the algorithm recommends highly correlated similar items, recommendation quality is excellent.”
But the twist is that Amazon creates its similar-items table not by looking at the characteristics of the products but by looking at purchase behavior. If you buy a food processor, Amazon doesn’t just recommend a blender because it’s similar. Instead, it compares your purchase to those of other people who also bought the food processor to see what they would recommend—possibly batteries, an extension cord, or even a sponge to clean up the inevitable mess.
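To make the mechanics concrete, here is a deliberately tiny sketch of item-to-item collaborative filtering in the spirit of Amazon’s published description. Raw co-purchase counts stand in for Amazon’s actual correlation measure, and all of the products and orders are invented. The similar-items table is built offline; recommending is then just a cheap lookup and aggregation.

```python
from collections import defaultdict
from itertools import combinations

# Toy purchase histories; a real catalog and traffic stream are vastly larger.
orders = [
    {"food processor", "batteries", "sponge"},
    {"food processor", "extension cord"},
    {"food processor", "sponge"},
    {"blender", "smoothie cups"},
]

# Offline step: build the similar-items table from co-purchase behavior,
# not from product attributes: what people buy together, not what the
# catalog says the products are.
co_bought = defaultdict(lambda: defaultdict(int))
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_bought[a][b] += 1
        co_bought[b][a] += 1

def recommend(purchased, top_n=3):
    """Aggregate each purchase's 'synset' of co-bought items and rank them."""
    scores = defaultdict(int)
    for item in purchased:
        for other, count in co_bought[item].items():
            if other not in purchased:
                scores[other] += count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"food processor"}))  # ['sponge', 'batteries', 'extension cord']
```

Because the table only has to capture which items travel together, it never needs to know, or care, why a sponge follows a food processor; the behavior of past buyers carries that meaning for it.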
This is virtually identical to WordNet, except that instead of synsets that represent meanings, Amazon looks at the relationships between people’s buying behavior, a type of psychographic. The formula relies on the overlap of different customers’ purchases and recommendations. The results are often remarkably good because of algorithms that use humanlike decision making based on only a few pieces of psychological data. Fortune magazine had this to say about Amazon in 2012: “Much is made of what the likes of Facebook, Google and Apple know about users. Truth is, Amazon may know more.” The irony is that Amazon knows more by using less. They take a simple approach, just like the brain, and ignore much of the extraneous information. Amazon’s recommendation technology is so good that YouTube recently switched algorithms to a variation of Amazon’s technology.
Often, Amazon’s prediction algorithms result in recommendations that appear random but are, in fact, incredibly prescient. As Amazon founder and CEO Jeff Bezos puts it, “I remember one of the first times this struck me. The main book on the page was on Zen. There were other suggestions for Zen books, and in the middle of those was a book on how to have a clutter-free desk.” Bezos goes on to say that this is not something a human would do.
But Bezos is mistaken: this is exactly the type of thing a human would do, and that is what makes it so powerful. Amazon was able to make the link between a Zen book, past behavior, and the fact that Bezos was looking to clean up his desk, just as a colleague might do after peering through a stack of papers only to find a hapless coworker feverishly purchasing a book on Zen.
Where Bezos is right, however, is that Amazon is missing an analogue to mirror neurons, and that limits its overall abilities. Most recommendation engines, including those of Amazon and Netflix, use something far more similar to WordNet, where they look for meaning and context by comparing one group to another. It works to some extent, but eventually a mirror equivalent will provide better, more nuanced predictions.
Mirror neurons are on the way for both companies. Netflix has recently started using data from social networks. Leveraging a network of friends to make recommendations is the perfect way to gain insight through mirrors. Imagine the power of having movie recommendations from friends on Facebook or the people you follow on Twitter. For its part, Amazon has been quietly using mirror neurons for years. Their email system has a unique recommendation engine that dates back 200,000 years: humans. Marketing emails don’t need to be quick or automated, so Amazon has its employees personally make recommendations to targeted groups. Perhaps this is cheating in technological terms, but it works wonders. Both Netflix and Amazon have realized that to provide the most valuable recommendations and true personalization, it is critical to understand intent and speak the language of their users.
VII
We still haven’t solved the fundamental problem of true language understanding. While we’ve broken down numerous barriers with previous technological revolutions, the seemingly simple task of language translation still eludes us. We have made progress, but language is still a core problem on the internet. Many scientists, scholars, and entrepreneurs are working to bridge this gap.
Scientists have attempted to fix the language problem in three general ways. The first proposed solution is the creation of a single universal language. But this solution hasn’t gained much traction because it’s politically untenable. Americans aren’t going to give up English, let alone the Brits, and there would be mutiny altogether in France over an attempt to abolish French. Equally important, it is difficult for people to learn new languages after the critical language period, so we would strand most adults. As eloquently simple as it may sound, creating a new universal language is impractical.
The second approach is one that many people in the field of linguistics are working on: language translation. Linguistics labs all over the world are abuzz trying to figure out how to translate and map disparate languages. The problem, though, is that translation is much more difficult than it seems. Language is a dynamic system, evolving as we evolve. Even if you could create a perfect translation system, by the time it was complete, the languages would have changed. This is not unlike the creation of dictionaries, which tend to be outdated mere weeks after publication. The difference is that with translation, the problem is compounded because there are multiple languages involved. There are also cultural dialect issues and more difficult problems like dealing with slang.
A third approach, however, shows real promise. No surprise, it stems from the brain. Communication within the mind uses electricity, specifically electrical neuronal spikes. That electrical language underlies every spoken language, and we know what it sounds like: neuronal spikes sound like crackling static, similar to what you hear when a radio searches for a signal. This is the fundamental building block of language. If we can take this neuronal spike and map it to the fundamental building block of how the internet and computers communicate, we will have an opportunity to make translations at the root level of thought. And there is reason to believe that this is possible. Computers and transistors communicate using the same electrical currents that neurons do. If you listen to them, they have identical voices. This raises an important question: If the internet and the brain are functionally the same, and if they communicate in the exact same way, why can’t they speak to one another? When we are able to solve this problem, there will be a new network revolution afoot.
Ten
EEG | ESP | AI
Over the past two million years, the human brain has been growing steadily. But something has recently changed. In a surprising reversal, human brains have been shrinking for the last 20,000 years or so. We have lost nearly a baseball-sized amount of matter from a brain that isn’t any larger than a football. The descent is rapid and pronounced. Anthropologist John Hawks describes it as a “major downsizing in an evolutionary eyeblink.” If this pace is maintained, scientists predict that our brains will be no larger than those of our forebears, Homo erectus, within another 2,000 years.
This finding is very different from what happens at the brain’s breakpoint, where it loses some neurons and connections. In this case, the brain loss is an overall shrinking of our species’ collective brains, and it means that individual brains have less to work with from the beginning. This isn’t an evolutionary equilibrium, and it’s not an efficiency trick to enable greater intelligence.
One reason that our brains are shrinking is that we are physically smaller than our burly ape ancestors. Remember, brain size is directly proportional to body mass—bigger bodies generally need bigger brains for movement. But that only accounts for a tiny amount of the brain loss, maybe the size of a pea. Brain scientist David Geary has a more alarming answer: “You may not want to hear this, but I think the best explanation for the decline in our brain size is the idiocracy theory.” In other words, we are getting dumber.
Many of the evolutionary traits we think of as beneficial are not really that critical. We assume that everywhere in the animal kingdom, the evolution of intelligence is important. It is not. Evolution is not concerned with progress, only with survival. Certainly, intelligence is hugely important to some species. But brains are costly in terms of weight and energy, so greater intelligence is often more dangerous than beneficial. For birds, for instance, flight would not be possible with larger, heavier brains.
The reason that our brains are shrinking is simple: our biology is focused on survival, not intelligence. Larger brains were necessary to allow us to learn to use language, tools, and all of the innovations that allowed our species to thrive. But now that we have become civilized—domesticated, if you will—our brains are less necessary. This is actually true of all animals: domesticated animals, including dogs, cats, hamsters, and birds, have 10 to 15 percent smaller brains than their counterparts in the wild. Because brains are so expensive to maintain, large brain sizes are selected out when Mother Nature sees no direct survival benefit. It is a true and inevitable fact of life.