Psychedelic Apes


by Alex Boese


  This is a common objection to the idiocracy theory. How can we be getting dumber if, all around, people are doing increasingly smart things? The answer to this seeming paradox comes from the humanities, and it forms the final leg of the idiocracy theory. It’s called the theory of collective learning. The researchers who formulated it, including the historian David Christian, never intended it to be used to support a theory of our declining intellect, but it could provide an explanation.

  Collective learning describes the ability of humans to accumulate and share information as a group, rather than just individually. As soon as one person learns something, everyone in the group can learn and benefit from that information.

  Christian has argued that collective learning is the defining feature of our species, since humans alone have mastered this trick. Early humans acquired the ability when they first invented symbolic language, tens of thousands of years ago. This allowed them to share complex, abstract ideas with each other, and then to transmit that knowledge to future generations. In this way, as a species, we began to accumulate more and more information, and the more we had, the easier it became to acquire still more. The process took on a momentum of its own.

  Soon, we invented technologies that amplified our ability to share and accumulate information. We were, in effect, able to outsource some of the functions of our brain, such as memory. Writing was the most powerful of these technologies, followed by printing and now computers. Thanks to these technological innovations, we’re amassing information at an almost exponentially increasing rate.

  The thing about collective learning, however, is that it’s a group phenomenon. As individuals, we may or may not be smart, but when we network together we become very intelligent as a collective entity. In other words, we shouldn’t look at all the advanced technology invented in the past century and conclude that we must be the smartest humans ever to have lived. Our achievements are only possible because we’re the beneficiaries of knowledge amassed through generations of collective learning.

  In fact, we very well could be getting dumber as individuals, and yet the force of collective learning would continue to drive our civilization onward to more innovation and complexity.

  The idiocracy theory has a certain gloomy logic to it. If you’re feeling pessimistic about the current state of the world, you may even feel it has the self-evident ring of truth. But, if you’re more optimistic about the state of humanity, then rest assured that mainstream science doesn’t put much credence in the notion that our mental powers are on the wane.

  There are, for instance, other plausible explanations for why our brains have got smaller since the Stone Age. Harvard primatologist Richard Wrangham argues that it may simply be a symptom of self-domestication. Animal researchers have discovered that domesticated breeds always have smaller brains than non-domesticated or wild breeds. Dogs, for example, have smaller brains than wolves.

  Researchers believe that the link between domestication and small brains comes about because domestication selects for less aggressive individuals. Breeders favour individuals that are friendly and easy to get along with, and, as it turns out, being cooperative is a juvenile trait associated with young brains. Aggression in wild species emerges with adulthood. By selecting for friendliness, therefore, breeders are inadvertently selecting for individuals who retain a juvenile, smaller brain as adults.

  When applied to human brain size, the argument goes that, as population density increased, it became more important for people to get along with each other. Overt aggression undermined the stability of large groups, and so the most combative individuals were systematically eliminated, often by execution. In effect, the human species domesticated itself, and as a result our brains got smaller. This doesn’t mean, though, that we got more stupid.

  As for Crabtree’s ‘fragile intellect’ argument, his fellow geneticists panned it. The general theme of the counterargument was to deny that the transition to agriculture had relaxed selective pressure for intelligence. Kevin Mitchell of Trinity College Dublin argued that higher intelligence is associated with a lower risk of death from a wide range of causes, including cardiovascular disease, suicide, homicide and accidents. So, smarter individuals continue to enjoy greater reproductive success. Furthermore, the complexity of social interactions in modern society may place a higher selective pressure on intelligence because it serves as an indicator of general fitness.

  A group of researchers from the Max Planck Institute of Molecular Cell Biology and Genetics echoed these criticisms, adding that intelligence isn’t a fragile trait, as Crabtree feared. In fact, it seems to be quite robust, genetically speaking – the reason being that the large number of genes involved in intelligence aren’t devoted solely to that trait. Instead, many of them are also associated with other vital functions, such as cell division and glucose transport. Strong selective pressure continues to preserve them.

  A larger reason lurks behind the scientific distaste for the idiocracy theory. Many fear it raises the disturbing spectre of eugenics – the idea that scientific or political committees should decide who gets to breed in order to ensure that only the ‘best’ people pass on their genes. There was a time in the nineteenth and early twentieth century when many leading scientists were advocates of eugenics. It was a dark time in the history of science, and no one wants to revisit it.

  Crabtree, for his part, insisted he was in no way an advocate of this. He raised the issue of our possibly fragile intellect, he said, not to make a case for social change, but simply as a matter of academic curiosity. In fact, he noted that, with our current state of knowledge, there’s nothing obvious we could do about the problem if it does exist. We simply have to accept the situation.

  Or do we? We may already be doing something to reverse the trend. It turns out that throughout the history of our species, one of the great limiting factors on the size of the human head, and therefore of the brain, has been the size of women’s pelvises. Very large-headed babies couldn’t fit through their mother’s pelvis during birth. In the past, this meant that large-headed babies would often die during childbirth, but now, thanks to the ability of doctors to safely perform Caesarean sections, they no longer face that risk. We’ve removed the ancient limit on head size, and researchers suspect that this has already had an impact on our evolution. In the past 150 years, there’s been a measurable increase in the average head size.

  This means that, if the human head can now grow as big as it wants, a larger brain size might follow. Just by dumb luck, we may have saved ourselves from idiocracy.

  CHAPTER FIVE

  Mushroom Gods and Phantom Time

  During the past two million years, human species of all shapes and sizes could have been encountered throughout the world. For most of this time, the long-legged Homo erectus roamed in Eurasia, and the brawny Neanderthals, more recently, occupied the same region. Further away, on the Indonesian island of Flores, the small, hobbit-like Homo floresiensis made their home.

  None of these species still survive. It’s not clear what became of them. All we know for sure is that only one human species remains: ourselves, Homo sapiens. Our ancestors first emerged from Africa around 100,000 years ago – at which time, Homo erectus had already vanished, although both the Neanderthals and Homo floresiensis were still around – and they rapidly migrated around the world.

  A distinct cultural change became apparent among our ancestors some 50,000 years ago. No longer did they just make stone axes and spears. Suddenly, they began carving intricate tools out of bone, covering the interiors of caves with magnificent paintings, and even engaging in long-distance trade. Still, they lived a nomadic life, hunting and foraging for food. Then, around 12,000 years ago, some of them decided to start planting crops. Soon, they had settled down to live beside these crops, and they became farmers. This marked a pivotal moment in our history because it eventually led to the rise of civilization, which is the era we’ll examine in this final section.


  The term ‘civilization’ is related to the Latin word civitas, meaning ‘city’. The first cities appeared around 5,500 years ago, developing out of the earlier agrarian villages. The first forms of writing emerged in Mesopotamia and Egypt just a few hundred years after the rise of cities. It was a technology that allowed people to keep track of the resources they were accumulating, as well as to record their beliefs and memorialize important events.

  As a result of the existence of these written records, the researchers who study this era – social scientists such as archaeologists, anthropologists, philologists, historians and psychologists – don’t have to rely solely on teasing out clues from fossils and other physical remains to figure out what happened. They can read first-hand accounts. Unfortunately, this doesn’t translate into greater certainty. Humans are notoriously unreliable as sources of evidence – we lie, exaggerate, embellish, misremember and misinterpret events – so disagreements abound about the historical record, and strange theories proliferate, dreamed up by those who are convinced that, to get at the real truth of this era, we need to read deeply between the lines.

  What if ancient humans were directed by hallucinations?

  Imagine that engineers could build a society populated by sophisticated biological robots. From the outside, it would look exactly like our own. You would see people driving to work, doing their jobs, dining at restaurants, going home at night and falling asleep, but if you ever stopped one of these robots and asked it to explain why it had decided to do what it was doing, it would have no answer. It never decided to do anything. There was no conscious thought involved. It had simply been following what it had been programmed to do.

  In 1976, the Princeton psychologist Julian Jaynes introduced his bicameral-mind theory, which proposed that this is very much like the way humans functioned until quite recently in our history – about 3,000 years ago. Jaynes didn’t think our ancestors were robots, of course, but he did argue they had no self-awareness or capacity for introspection. They built cities, farmed fields and fought wars, but they did so without conscious planning. They acted like automatons. If asked why they behaved as they did, they wouldn’t have been able to answer. So, how did they make decisions? This was the strangest part of Jaynes’s theory. He maintained that they were guided by voices in their heads – auditory hallucinations – that told them what to do. They obediently followed these instructions, as they believed these were the voices of the gods.

  As a young researcher, Jaynes became interested in the mystery of human consciousness. It wasn’t consciousness in terms of being awake or aware of our surroundings that intrigued him. Instead, he was fascinated by the consciousness that forms the decision-making part of our brain. This could be described as our introspective self-awareness, or the train of thought that runs through our minds while we’re awake, dwelling upon things we did in the past, replaying scenes in our mind and anticipating events in the future.

  This kind of consciousness seems to be a uniquely human phenomenon. It’s not possible to know exactly how animals think, but they seem to live in the moment, relying on more instinctual behaviours to make decisions, whereas we have a layer of self-awareness that floats on top of our instincts. Jaynes wondered where this came from.

  To explore this question, he initially followed what was then the traditional research method. He studied animal behaviour, conducting maze experiments and other psychological tests on worms, reptiles and cats. But he soon grew frustrated. Consciousness, he decided, was such a complex topic that it couldn’t be fully illuminated in the confined setting of a laboratory. To understand it required an interdisciplinary approach. So, he gave up his animal experiments and immersed himself instead in a broad array of subjects: linguistics, theology, anthropology, neurology, archaeology and literature.

  It was from this wide-ranging self-education that he arrived at a revelation. Our consciousness, he realized, must have had an evolutionary history. At some point in the past, our ancestors must have lived immersed in the moment, just like animals, and between then and now our self-awareness developed. Given this, there must have been intermediary steps in its evolution. But what might such a step have looked like? The answer he came up with was that, before our consciousness developed into full-blown self-awareness, it went through a stage in which it took the form of ‘voices in the head’. Our ancestors experienced auditory hallucinations that gave them directions.

  The way he imagined it was that for most of our evolutionary history, when early humans had lived in hunter-gatherer groups, they got by on pure instinctual behaviours, inhabiting the here and now, focusing on whatever task was at hand. These groups were small enough that, one assumes, they were usually within earshot of each other, and during times of danger everyone could instantly respond to the verbal commands of the leader. As such, there was no need for them to have any kind of introspective self-awareness to regulate their behaviour.

  The crucial moment of change, Jaynes believed, occurred approximately 12,000 years ago, when our ancestors gave up the hunter-gatherer lifestyle and settled down to adopt agriculture. This led to the creation of larger communities, like towns and eventually cities, which triggered a problem of social control. People were no longer always within earshot of the leader and certain tasks required them to act on their own. The leader’s shouted commands could no longer organize the behaviour of the group.

  Our ancestors solved this problem, according to Jaynes, by imagining what the leader might tell them to do. They internalized his voice, and then heard his imagined commands as auditory hallucinations whenever an unaccustomed decision had to be made, or when they needed to be reminded to stay focused on a task. Jaynes offered the example of a man trying to set up a fishing weir on his own. Every now and then, the voice in his head would have urged him to keep working, rather than wandering off as he might otherwise have been inclined to do.

  Eventually, the identity of the voices separated from that of the group leader. People came to believe instead that they were hearing the voice of a dead ancestor, or a god. Jaynes theorized that, for thousands of years (until around 1000 BC), this was the way our ancestors experienced the world, relying on these inner voices for guidance.

  He detailed this theory in a book with the imposing title The Origin of Consciousness in the Breakdown of the Bicameral Mind. Despite sounding like the kind of work that only academics could possibly wade through, its sensational claim caught the public’s interest, and it soon climbed onto bestseller lists.

  But why voices in the head? Where did Jaynes get this idea? One source of inspiration for him was so-called split-brain studies. Our brains consist of a right and a left hemisphere, connected by a thick cord of tissue called the corpus callosum. During the 1960s, surgeons began performing a radical procedure in which they severed the corpus callosum as a way to treat extreme cases of epilepsy. This operation left the patients with what were essentially two unconnected brains in their head.

  The procedure did ease the epileptic seizures, and the patients appeared outwardly normal after the operation, but, as researchers studied them, they realized that the patients at times behaved in very strange ways, as if they had two separate brains that were at odds with each other. For instance, while getting dressed, one patient tried to button up a shirt with her right hand as her left hand simultaneously tried to unbutton it. Another would strenuously deny being able to see an object he was holding in his left hand.

  These results made researchers realize the extent to which the two hemispheres of our brain not only act independently of each other, but also focus on different aspects of the world. The left brain is detail-oriented, whereas the right takes in the bigger picture. The left could be described as more rational or logical, whereas the right is more artistic or spiritual.

  It was these split-brain studies that led Jaynes to theorize that the primitive consciousness of our ancestors might have been similarly split in two, just as their brains were. For them, the right hemisphere would have served as the executive decision-maker, ruminating upon long-term planning and strategy, while the left hemisphere would have been the doer, taking care of activities in the here and now. To return to the example of constructing a fishing weir, the left hemisphere would have handled the minute-by-minute details of the task, while the right hemisphere would have acted as the overall manager, making sure the job got done.

  So, most of the time, the left hemisphere would have been in charge, but occasionally a novel situation would have arisen and the left hemisphere would have hesitated, not sure of what to do. At which point, the right hemisphere would have issued a command to it, and the left brain would have experienced this as an auditory or visual hallucination.

  Jaynes called this hypothetical form of split-brain mental organization the ‘bicameral mind’, borrowing the term from politics, where it describes a legislative system consisting of two chambers, such as the House of Commons and the House of Lords.

  Of course, you and I also have brains with two hemispheres, but most of us don’t hear voices in our heads. Jaynes’s hypothesis was that our ancestors simply hadn’t yet learned to coordinate the two hemispheres in order to produce what we experience: a singular, unicameral consciousness.

  Split-brain studies weren’t Jaynes’s only source of inspiration. He claimed that we didn’t need to try to guess how our distant ancestors experienced reality because we actually had a surviving first-hand account of life during the era of the bicameral mind. By examining this source carefully, we could discern the character of their mental world.

 
