The Secret Life of the Mind


by Mariano Sigman


  I, me, mine and other permutations by George

  Long before becoming great jurists, philosophers, or noted economists, children–including the children that Aristotle, Plato and Piaget once were–already had intuitions about property and ownership. In fact, children use the pronouns my and mine before using I or their own names. This language progression reflects an extraordinary fact: the idea of ownership precedes the idea of identity, not the other way around.

  In early battles over property the principles of law are also rehearsed. The youngest children claim ownership of something based on the argument of their own desires: ‘It’s mine because I want it.’* Later, around two years of age, they begin to argue with an acknowledgement of others’ rights to claim the same property for themselves. Understanding others’ ownership is a way of discovering that there are other individuals. The first arguments outlined by children are usually: ‘I had it first’; ‘They gave it to me.’ This intuition that the first person to touch something wins indefinite rights to its usage does not disappear in adulthood. Heated discussions over a parking spot, a seat on a bus, or the ownership of an island by the first country to plant its flag there are private and institutional examples of these heuristics. Perhaps for that reason, it is unsurprising that large social conflicts, such as those in the Middle East, are perpetuated by arguments very similar to those deployed in a dispute between two-year-olds: ‘I got here first’; ‘They gave it to me.’

  Transactions in the playground, or the origin of commerce and theft

  On the local 5-a-side football pitch, the owner of the ball is, to a certain extent, also the owner of the game. Ownership gives them privileges, such as deciding the teams and declaring when the game ends. These advantages can also be used to negotiate. The philosopher Gustavo Faigenbaum, in Entre Ríos, Argentina, and the psychologist Philippe Rochat, in Atlanta, USA, set out to understand this world: basically, how the concept of owning and sharing is established in children, among intuitions, praxis and rules. Thus they invented the sociology of the playground. Faigenbaum and Rochat, in their voyage to the land of childhood,* researched swapping, gifts and other transactions that took place in a primary school playground. Studying the exchange of little figurines, they found that even in the supposedly naïve world of the playground, the economy is formalized. As children grow up, lending and the assignment of vague, future values give way to more precise exchanges, the notion of money, usefulness and the prices of things.

  As in the adult world, not all transactions in the country of childhood are licit. There are thefts, scams and betrayals. Rousseau’s conjecture is that the rules of citizenship are learned in discord. And it is the playground, which is more innocuous than real life, that becomes the breeding ground in which to play at the game of law.

  The contrasting observations of Wynn and her colleagues suggest that very young children should already be able to sketch out moral reasoning. On the other hand, the work of Piaget, an heir to Rousseau’s tradition, indicates that moral reasoning only begins at around six or seven years old. Gustavo Faigenbaum and I set out to reconcile these great thinkers in the history of psychology. And, along the way, to understand how children become citizens.

  We showed a group of children between four and eight years of age a video with three characters: one had chocolates, another asked to borrow them and the third stole them. Then we asked a series of questions to measure varying depths of moral comprehension: whether they preferred to be friends with the one who stole or the one who borrowed* (and why), and what the thief had to do to make things right with the victim. In this way we were able to investigate the notion of justice in playground transactions.

  Our hypothesis was that the preference for the borrower over the thief, an implicit manifestation of moral preferences–as in Wynn’s experiments–should already be established even in the youngest children. By contrast, the justification of these choices and the understanding of what had to be done to compensate for the damage caused–as in Piaget’s experiments–should develop at a later stage. That is exactly what we found. In the room with the four-year-olds, the children preferred to play with the borrower rather than with the thief. We also discovered that they preferred to play with someone who stole under extenuating circumstances rather than with someone who did so under aggravating ones.

  But our most interesting finding was this: when we asked four-year-old children why they chose the borrower over the thief, or the one who robbed under extenuating circumstances over the one who did so under aggravating ones, they gave responses like ‘Because he’s blond’ or ‘Because I want her to be my friend.’ Their moral criteria seemed completely blind to causes and reasons.

  Here we find again an idea which has appeared several times in this chapter. Children have very early (often innate) intuitions–what the developmental psychologists Liz Spelke and Susan Carey refer to as core knowledge. These intuitions are revealed in very specific experimental circumstances, in which children direct their gaze or are asked to choose between two alternatives. But core knowledge is not accessible on demand in most real-life situations where it may be needed. This is because at a younger age core knowledge cannot be accessed explicitly and represented in words or concrete symbols.

  Specifically, in the domain of morality, our results show that children have from a very young age intuitions about ownership which allow them to understand whether a transaction is licit or not. They understand the notion of theft, and they even comprehend subtle considerations which extenuate or aggravate it. These intuitions serve as a scaffold to forge, later in development, a formal and explicit understanding of justice.

  But every experiment comes with its own surprises, revealing unexpected aspects of reality. This one was no exception. Gustavo and I came up with the experiment to study the price of theft. Our intuition was that the children would respond that the chocolate thief should give back the two they stole plus a few more as compensation for the damages. But that didn’t happen. The vast majority of the children felt that the thief had to return exactly the two chocolates that had been stolen. What’s more, the older the kids, the higher the fraction of those who advocated an exact restitution. Our hypothesis was mistaken. Children are much more morally dignified than we had imagined. They understood that the thieves had done wrong, that they would have to make up for it by returning what they’d stolen along with an apology. But the moral cost of the theft could not be resolved in kind, with the stolen merchandise. In the children’s justice, there was no reparation that absolved the crime.

  If we think about the children’s transactions as a toy model of international law, this result, in hindsight, is extraordinary. An implicit, though not always respected, norm of international conflict resolution is that there should be no escalation in reprisal. And the reason is simple. If someone steals two and, in order to make peace, the victim demands four, the exponential growth of reprisals would be harmful for everyone. Children seem to understand that even in war there ought to be rules.

  Jacques, innatism, genes, biology, culture and an image

  Jacques Mehler is one of many Argentinian political and intellectual exiles. He studied with Noam Chomsky at the Massachusetts Institute of Technology (MIT) at the heart of the cognitive revolution. From there he went to Oxford and then France, where he was the founder of the extraordinary school of cognitive science in Paris. He was exiled not just as a person, but as a thinker. He was accused of being a reactionary for claiming that human thought had a biological foundation. It was the oft-mentioned divorce between human sciences and exact sciences, which in psychology was particularly marked. I like to think of this book as an ode to and an acknowledgment of Jacques’s career. A space of freedom earned by an effort that he began, swimming against the tide. An exercise in dialogue.

  In the epic task of understanding human thought, the division between biology, psychology and neuroscience is a mere declaration of caste. Nature doesn’t care a fig for such artificial barriers between types of knowledge. As a result, throughout this chapter, I have interspersed biological arguments, such as the development of the frontal cortex, with cognitive arguments, such as the early development of moral notions. In other examples, like that of bilingualism and attention, we’ve delved into how those arguments combine.

  Our brains today are practically identical to those of at least 60,000 years ago, when modern man migrated from Africa and culture was completely different. This shows that individuals’ paths and potential for expression are forged within their social niches, not dictated by changes in the brain itself. One of the arguments of this book is that it is also virtually impossible to understand human behaviour without taking into consideration the traits of the organ that produces it: the brain. The way in which social knowledge and biological knowledge interact and complement each other depends, obviously, on each case and its circumstances. There are some cases in which biological constitution is decisive. And others are determined primarily by culture and the social fabric. It is not very different from what happens with the rest of the body. Physiologists and coaches know that physical fitness can change enormously during our life while, on the other hand, our running speed, for example, doesn’t have such a wide range of variation.

  The biological and the cultural are always intrinsically related. And not in a linear manner. In fact, a completely unfounded intuition is that biology precedes behaviour, that there is an innate biological predisposition that can later follow, through the effect of culture, different trajectories. That is not true; the social fabric affects the very biology of the brain. This is clear in a dramatic example observed in the brains of two three-year-old children. One is raised with affection in a normal environment while the other lacks emotional, educational and social stability. The brain of the latter is not only abnormally small but its ventricles, the cavities through which cerebrospinal fluid flows, have an abnormal size as well.

  So different social experiences result in completely distinct brains. A caress, a word, an image–every life experience leaves a trace in the brain. These traces modify the brain and, with it, one’s way of responding to things, one’s predisposition to relating to someone, one’s desires, wishes and dreams. In other words, the social context changes the brain, and this in turn defines who we are as social beings.

  A second unfounded intuition is thinking that because something is biological it is unchangeable. Again, this is simply not true. For instance, the predisposition to music depends on the biological constitution of the auditory cortex. This is a causal relation between an organ and a cultural expression. However, this connection does not imply developmental determinism. The auditory cortex is not static; anyone can change it just by practising and exercising.

  Thus the social and the biological are intrinsically related in a network of networks. This categorical division is not a property of nature, but rather of our obtuse way of understanding it.

  CHAPTER TWO

  The fuzzy borders of identity

  What defines our choices and allows us to trust other people and our own decisions?

  Our choices define us. We choose to take risks or live conservatively, to lie when it seems convenient or to make the truth a priority, no matter what the cost. We choose to save up for a distant future or live in the moment. The vast sum of our actions comprises the outline of our identities. As José Saramago put it in his novel All the Names: ‘We don’t actually make decisions, the decisions make us.’ Or, in a more contemporary version, when Albus Dumbledore lectures Harry Potter: ‘It is our choices, Harry, that show what we truly are, far more than our abilities.’

  Almost all decisions are mundane, because the overwhelming majority of our lives are spent in the day-to-day. Deciding whether to visit a friend after work, whether to take the bus or the Underground, choosing between chips and a salad. Imperceptibly, we weigh the universe of possible options on a mental scale and, after thinking it over, we finally choose (chips, of course). When choosing between these alternatives, we activate the brain circuits that make up our mental decision-making machine.

  Our decisions are almost always made based on incomplete information and imprecise data. When a parent chooses which school to send their child to, or a Minister of Economics decides to change the tax policy, or a football player opts to shoot at goal instead of passing to a teammate in the penalty area–on each and every one of these occasions it is only possible to sketch an approximate idea of the consequences that will follow. Making decisions is a bit like predicting the future, and as such is inevitably imprecise. Eppur si muove (and yet it moves). The machine works. That is what’s most extraordinary.

  Churchill, Turing and his labyrinth

  On 14 November 1940, some 500 Luftwaffe planes flew, almost unchallenged, to Britain and bombed the industrial city of Coventry for seven hours. Many years after the war had ended, Captain Frederick William Winterbotham revealed that Winston Churchill* could have avoided the bombing and the destruction of the city if he had decided to use a secret weapon discovered by the young British mathematician Alan Turing.

  Turing had achieved a scientific feat that gave the Allies a strategic advantage that could decide the outcome of the Second World War. He had created an algorithm capable of deciphering Enigma, the sophisticated mechanical system made of circular pieces–like a combination lock–that allowed the Nazis to encode their military messages. Winterbotham explained that, with Enigma decoded, the secret service men had received the coordinates for the bombing of Coventry with enough warning to take preventive measures. Then, in the hours leading up to the bombing, Churchill had to decide between two options: one emotional and immediate–avoiding the horror of a civilian massacre–and the other rational and calculated–sacrificing Coventry, not revealing their discovery to the Nazis, and holding on to that card in order to use it in the future. Churchill decided, at a cost of 500 civilian lives, to keep Britain’s strategic advantage over his German enemies a secret.

  Turing’s algorithm evaluated all the configurations in unison–each one corresponding to a possible code–and updated each configuration’s probability according to its capacity to predict a series of likely messages. This procedure continued until the probability of one of the configurations reached a sufficiently high level. The discovery, in addition to precipitating the Allied victory, opened up a new window for science. Half a century after the war’s end it was discovered that the algorithm that Turing had come up with to decode Enigma was the same one that the human brain uses to make decisions. The great English mathematician, one of the founders of computation and artificial intelligence, created–in the urgency of wartime–the first, and still the most effective, model for understanding what happens in our brains when we make a decision.
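
  To make that logic concrete, here is a minimal sketch in Python of the accumulate-until-threshold idea: several candidate configurations compete, each new observation nudges their probabilities up or down according to how well they predict it, and the process stops once one of them becomes likely enough. The configurations, rates and stopping level are invented for illustration; this is a toy version of the general principle, not the actual wartime procedure.

    import math
    import random

    # Illustrative sketch: sequential updating of the probability of competing
    # "configurations" as evidence arrives. All rates and thresholds are invented.
    random.seed(1)

    # Each candidate configuration predicts a different rate for some observable pattern.
    hypotheses = {'config_A': 0.30, 'config_B': 0.50, 'config_C': 0.70}
    true_rate = 0.50                              # the data really come from config_B
    log_post = {h: math.log(1.0 / len(hypotheses)) for h in hypotheses}  # uniform prior
    decision_level = 0.99                         # stop when one configuration reaches 99%

    for step in range(1, 100000):
        observation = random.random() < true_rate     # one new piece of evidence
        for h, rate in hypotheses.items():
            likelihood = rate if observation else 1.0 - rate
            log_post[h] += math.log(likelihood)       # accumulate evidence for each option

        # Convert accumulated log-evidence into probabilities and look for a clear winner.
        peak = max(log_post.values())
        total = sum(math.exp(v - peak) for v in log_post.values())
        posteriors = {h: math.exp(v - peak) / total for h, v in log_post.items()}
        best, p_best = max(posteriors.items(), key=lambda kv: kv[1])
        if p_best >= decision_level:
            print('decided on', best, 'after', step, 'observations (p = %.3f)' % p_best)
            break

  Run repeatedly, the loop typically settles on config_B after a few dozen observations; the closer together the candidate rates, the more evidence it needs before committing. This accumulate-until-threshold logic is the same one that, as described below, the brain appears to use when it decides.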

  Turing’s brain

  As in the procedure sketched out by Turing, the cerebral mechanism for making decisions is built on an extremely simple principle: the brain elaborates a landscape of options and starts a winner-take-all race between them.

  The brain converts the information it has gathered from the senses into votes for one option or the other. The votes pile up in the form of ionic currents accumulated in a neuron until they reach a threshold where the brain deems there is sufficient evidence. These circuits that coordinate decision-making in the brain were discovered by a group of researchers headed by William Newsome and Michael Shadlen. Their challenge was to design an experiment simple enough to be able to isolate each element of the decision and, at the same time, sophisticated enough to represent decision-making in real life.

  This is how the experiment works: a cloud of dots moves on a screen. Many of the dots move in a chaotic, disorganized way. Others move coherently, in a single direction. A player (an adult, a child, a monkey and, sometimes, a computer) decides which way that cloud of dots is moving. It is the electronic version of a sailor lifting a finger to decide, in the midst of choppy waters, which way the wind is blowing. Naturally, the game becomes easier when more dots are moving in the same direction.

  Monkeys played this game thousands of times, while the researchers recorded their neuronal activity as reflected by the electrical currents produced in their brains. After studying this exercise for many years, and in many variations, they revealed the three principles of Turing’s algorithm for decision-making:

  (1) A group of neurons in the visual cortex receives information from the retina. These neurons’ current reflects the quantity and direction of movement at each moment, but does not accumulate a history of these observations.

  (2) The sensory neurons are connected to other neurons in the parietal cortex, which accumulate this information over time. The neuronal circuits of the parietal cortex thus codify how the predisposition towards each possible action evolves during the course of the decision.

  (3) As information favouring one option accumulates, the parietal neurons that codify this option increase their electrical activity. When the activity reaches a certain threshold, a circuit of neurons in structures deep in the brain–known as the basal ganglia–sets off the corresponding action and restarts the process to make way for the next decision.
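
  The three steps can be condensed into a minimal race-to-threshold sketch in Python for the moving-dots task: noisy momentary evidence stands in for the visual neurons, two running sums stand in for the parietal accumulators, and a threshold crossing stands in for the trigger in the basal ganglia. The coherence, noise and threshold values are invented for illustration; the real circuits are, of course, far richer than two running sums.

    import random

    def decide(coherence, threshold=30.0, noise=1.0, max_steps=10000):
        """Race between two accumulators; returns the choice and how long it took."""
        right, left = 0.0, 0.0                      # the two competing accumulators
        for t in range(1, max_steps + 1):
            # Step 1: momentary sensory evidence, with no memory of its own.
            sample = coherence + random.gauss(0.0, noise)
            # Step 2: each accumulator integrates the evidence for its option.
            right += max(0.0, sample)
            left += max(0.0, -sample)
            # Step 3: the first accumulator to reach threshold triggers the action.
            if right >= threshold:
                return 'right', t
            if left >= threshold:
                return 'left', t
        return 'undecided', max_steps

    random.seed(2)
    # Positive coherence means net rightward motion; weaker coherence gives slower, noisier choices.
    for coherence in (0.5, 0.1, 0.02):
        choice, steps = decide(coherence)
        print('coherence %+.2f: chose %s after %d steps' % (coherence, choice, steps))

  Lowering the coherence makes the race longer and its outcome less reliable, in line with the observation above that the game gets easier as more dots move together. Injecting extra current into one of the accumulators by hand (the manoeuvre described next) biases the choice regardless of what the dots are doing.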

  The best way to prove that the brain decides through a race in the parietal cortex is by showing that a monkey’s response can be conditioned by injecting a current into the neurons that codify evidence in favour of a certain option. Shadlen and Newsome did that experiment. While a monkey was watching a cloud of dots that moved completely randomly, they used an electrode to inject an electrical current into the parietal neurons that codify movement to the right. And, despite the sensory evidence being evenly balanced between the two directions, the monkey always responded that the dots were moving to the right. This is like emulating electoral fraud, manually inserting votes into the ballot box.

 
