The Idiot Brain

by Dean Burnett


  Regardless of this, the human brain still does an impressive job of translating vibrations in the air to the rich and complex auditory sensations we experience every day.

  So hearing is a mechanical sense that responds to vibration and physical pressure exerted by sound. Touch is the other mechanical sense. If pressure is applied to the skin, we can feel it. We can do this via dedicated mechanoreceptors that are located everywhere in our skin. The signals from the receptors are then conveyed via dedicated nerves to the spinal cord (unless the stimulation is applied to the head, which is dealt with by the cranial nerves), where they’re then relayed to the brain, arriving at the somatosensory cortex in the parietal lobe, which makes sense of where the signals come from and allows us to perceive them accordingly. It seems fairly straightforward, so obviously it isn’t.

  Firstly, what we call touch has several elements that contribute to the overall sensation. As well as physical pressure, there’s vibration and temperature, skin stretch and even pain in some circumstances, all of which have their own dedicated receptors in the skin, muscle, organ or bone. All of this is known as the somatosensory system (hence somatosensory cortex) and our whole body is innervated by the nerves that serve it. Pain, aka nociception, has its own dedicated receptors and nerve fibres throughout the body.

  Pretty much the only organ that doesn’t have pain receptors is the brain itself, and that’s because it’s responsible for receiving and processing the signals. You could argue that the brain feeling pain would be confusing, like trying to call your own number from your own phone and expecting someone to pick up.

  What is interesting is that touch sensitivity isn’t uniform; different parts of the body respond differently to the same contact. Like the motor cortex discussed in a previous chapter, the somatosensory cortex is laid out like a map of the body corresponding to the areas it’s receiving information from, with the foot region processing stimuli from feet, the arm region for the arm, and so on.

  However, it doesn’t use the same dimensions as the actual body. This means that the sensory information received doesn’t necessarily correspond with the size of the region the sensations are coming from. The chest and back areas take up quite a small amount of space in the somatosensory cortex, whereas the hands and lips take up a very large area. Some parts of the body are far more sensitive to touch than others; the soles of the feet aren’t especially sensitive, which makes sense as it wouldn’t be practical to feel exquisite pain whenever you step on a pebble or a twig. But the hands and lips occupy disproportionately large areas of the somatosensory cortex because we use them for very fine manipulation and sensations. Consequently, they are very sensitive. As are the genitals, but let’s not go into that.

  Scientists measure this sensitivity by simply prodding someone with a two-pronged instrument and seeing how close together these prongs can be and still be recognised as separate pressure points.6 The fingertips are especially sensitive, which is why braille was developed. However, there are some limitations: braille is a series of separate specific bumps because the fingertips aren’t sensitive enough to recognise the letters of the alphabet when they’re text sized.7

  Like hearing, the sense of touch can also be ‘fooled’. Part of our ability to identify things with touch is via the brain being aware of the arrangement of our fingers, so if you touch something small (for instance, a marble) with your index and middle finger, you’ll feel just the one object. But if you cross your fingers and close your eyes, it feels more like two separate objects. There’s been no direct communication between the touch-processing somatosensory cortex and the finger-moving motor cortex to flag this up, and the eyes are closed so aren’t able to provide any information to override the inaccurate conclusion of the brain. This is the Aristotle illusion.

  So there are more overlaps between touch and hearing than is immediately apparent, and recent studies have found evidence that the link between the two may be far more fundamental than previously thought. While we’ve always understood that certain genes were strongly linked to hearing abilities and increased risk of deafness, a 2012 study by Henning Frenzel and his team8 discovered that genes also influenced touch sensitivity, and interestingly that those with very sensitive hearing also showed a finer sense of touch too. Similarly, those with genes that resulted in poor hearing also had a much higher likelihood of showing poor touch sensitivity. A mutated gene was also discovered that resulted in both impaired hearing and touch.

  While there is still more work to be done in this area, this does strongly suggest that the human brain uses similar mechanisms to process both hearing and touch, so deep-seated issues that affect one can end up affecting the other. This is perhaps not the most logical arrangement, but it’s reasonably consistent with the taste–smell interaction we saw in the previous section. The brain does tend to group our senses together more often than seems practical. But on the other hand, it does suggest people can ‘feel the rhythm’ in a more literal manner than is generally assumed.

  Jesus has returned … as a piece of toast?

  (What you didn’t know about the visual system)

  What do toast, tacos, pizza, ice-cream, jars of spread, bananas, pretzels, crisps and nachos have in common? The image of Jesus has been found in all of them (seriously, look it up). It’s not always food though; Jesus often pops up in varnished wooden items. And it’s not always Jesus; sometimes it’s the Virgin Mary. Or Elvis Presley.

  What’s actually happening is that there are uncountable billions of objects in the world that have random patterns of colour or patches that are either light or dark, and by sheer chance these patterns sometimes resemble a well-known image or face. And if the face is that of a celebrated figure with metaphysical properties (Elvis falls into this category for many) then the image will have more resonance and get a lot of attention.

  The weird part (scientifically speaking) is that even those who are aware that it’s just a grilled snack and not the bread-based rebirth of the Messiah can still see it. Everyone can still recognise what is said to be there, even if they dispute the origins of it.

  The human brain prioritises vision over all other senses, and the visual system boasts an impressive array of oddities. As with the other senses, the idea that the eyes capture everything about our outside world and relay this information intact to the brain like two worryingly squishy video cameras is a far cry from how things really work.‡

  Many neuroscientists argue that the retina is part of the brain, as it develops from the same tissue and is directly linked to it. The eyes take in light through the pupils and lenses at the front, which lands on the retina at the back. The retina is a complex layer of photoreceptors, specialised neurons for detecting light, some of which can be activated by as little as half-a-dozen photons (the individual ‘bits’ of light). This is very impressive sensitivity, like a bank security system being triggered because someone had a thought about robbing the place. The photoreceptors that demonstrate such sensitivity are used primarily for seeing contrasts, light and dark, and are known as rods. These work in low-light conditions, such as at night. Bright daylight actually oversaturates them, rendering them useless; it’s like trying to pour a gallon of water into an egg cup. The other (daylight-friendly) photoreceptors detect photons of certain wavelengths, which is how we perceive colour. These are known as cones, and they give us a far more detailed view of the environment, but they require a lot more light to be activated, which is why we don’t see colours at low light levels.

  Photoreceptors aren’t spread uniformly across the retina. Some areas have different concentrations from others. We have one area in the centre of the retina that recognises fine detail, while much of the periphery gives only blurry outlines. This is due to the concentrations and connections of the photoreceptor types in these areas. Each photoreceptor is connected to other cells (a bipolar cell and a ganglion cell usually), which transmit the information from the photoreceptors to the brain. Each photoreceptor is part of a receptive field (which is made up of all the receptors connected to the same transmission cells) that covers a specific part of the retina. Think of it like a mobile-phone mast, which receives all the different information relayed from the phones within its coverage range and processes it. The bipolar and ganglion cells are the mast, the receptors are the phones; thus there is a specific receptive field. If light hits this field it will activate a specific bipolar or ganglion cell via the photoreceptors attached to it, and the brain recognises this.

  In the periphery of the retina, the receptive fields can be quite big, like a golf umbrella canvas around the central shaft. But this means precision suffers – it’s difficult to work out where a raindrop is falling on a golf umbrella; you just know it’s there. Luckily, towards the centre of the retina, the receptive fields are small and dense enough to provide sharp and precise images, enough for us to be able to see very fine details like small print.

  Bizarrely, only one part of the retina is able to recognise this fine detail. It is named the fovea, in the dead centre of the retina, and it makes up less than 1 per cent of the total retina. If the retina were a widescreen TV, the fovea would be a thumbprint in the middle. The rest of the eye gives us more blurry outlines, vague shapes and colours.

  You may think this makes no sense, because surely people see the world crisp and clear, give or take the odd cataract? The arrangement described here would be more like looking through the wrong end of a telescope made of Vaseline. But, worryingly, that is what we ‘see’, in the purest sense. It’s just that the brain does a sterling job of cleaning this image up before we consciously perceive it. The most convincing Photoshopped image is little more than a crude sketch in yellow crayon compared to the polishing the brain does with our visual information. But how does it do this?

  The eyes move around a lot, and much of this is due to the fovea being pointed at various things in our environment that we need to look at. In the old days, experiments tracking eyeball movements used specialised metal contact lenses. Just let that sink in, and appreciate how committed some people are to science.§

  Essentially, whatever we’re looking at, the fovea scans as much of it as possible, as quickly as possible. Think of a spotlight aimed at a football field operated by someone in the middle of a near-lethal caffeine overdose, and you’re sort of there. The visual information obtained via this process, coupled with the less-detailed but still-usable image of the rest of the retina, is enough for the brain to do some serious polishing and make a few ‘educated guesses’ about what things look like, and we see what we see.

  This seems a very inefficient system, relying on such a small area of retina to do so much. But considering how much of the brain is required to process this much visual information, even doubling the size of the fovea so it’s more than 1 per cent of the retina would require an increase in brain matter for visual processing to the point where our brains could end up the size of basketballs.

  But what of this processing? How does the brain render such detailed perception from such crude information? Well, photoreceptors convert light information to neuronal signals which are sent to the brain along the optic nerves (one from each eye).¶ The optic nerve relays visual information to several parts of the brain. Initially, the visual information is sent to the thalamus, the old central station of the brain, and from there it’s spread far and wide. Some of it ends up in the brain-stem, either in a spot called the pretectum, which dilates or contracts pupils in response to light intensity, or in the superior colliculus, which controls movement of the eyes in short jumps called saccades.

  If you concentrate on how your eyes move when you look from right to left or vice versa, you will notice that they don’t move in one smooth sweep but a series of short jerks (do it slowly to appreciate this properly). These movements are saccades, and they allow the brain to perceive a continuous image by piecing together a rapid series of ‘still’ images, which is what appears on the retina between each jerk. Technically, we don’t actually ‘see’ much of what’s happening between each jerk, but it’s so quick we don’t really notice, like the gap between the frames of an animation. (The saccade is one of the quickest movements the human body can make, along with blinking and closing a laptop as your mum walks into your bedroom unexpectedly.)

  We experience the jerky saccades whenever we move our eyes from one object to another, but if we’re visually following something in motion our eye movement is as smooth as a waxed bowling ball. This makes evolutionary sense; if you’re tracking a moving object in nature it’s usually prey or a threat, so you’d need to keep focused on it constantly. But we can do it only when there’s something moving that we can track. Once this object leaves our field of vision, our eyes jerk right back to where they were via saccades, a process termed the optokinetic reflex. Overall, it means the brain can move our eyes smoothly, it just often doesn’t.

  But why when we move our eyes do we not perceive the world around us as moving? After all, it all looks the same as far as images on the retina are concerned. Luckily, the brain has a quite ingenious system for dealing with this issue. The eye muscles receive regular inputs from the balance and motion systems in our ears, and use these to differentiate between eye motion and motion in or of the world around us. It means we can also maintain focus on an object when we’re in motion. It’s a system that can be confused though, as the motion-detection systems can sometimes end up sending signals to the eyes when we’re not moving, resulting in involuntary eye movements called nystagmus. Health professionals look out for these when assessing the health of the visual system, because when your eyes are twitching for no reason, that’s not great. It’s suggestive of something gone awry in the fundamental systems that control your eyes. Nystagmus is to doctors and optometrists what a rattling in the engine is to a mechanic; might be something fairly harmless, or it might not, but either way it’s not meant to be happening.

  This is what your brain does just working out where to point the eyes. We haven’t even started on how the visual information is processed.

  Visual information is mostly relayed to the visual cortex in the occipital lobe, at the back of the brain. Have you ever experienced the phenomenon of hitting your head and ‘seeing stars’? One explanation for this is that impact causes your brain to rattle around in your skull like a hideous bluebottle trapped in an egg cup, so the back of your brain bounces off your skull. This causes pressure and trauma to the visual processing areas, briefly scrambling them, and as a result we see sudden weird colours and images resembling stars, for want of a better description.

  The visual cortex itself is divided into several different layers, which are themselves often subdivided into further layers.

  The primary visual cortex, the first place the information from the eyes arrives in, is arranged in neat ‘columns’, like sliced bread. These columns are very sensitive to orientation, meaning they respond only to the sight of lines of a certain direction. In practical terms, this means we recognise edges. The importance of this can’t be overstressed: edges mean boundaries, which means we can recognise individual objects and focus on them, rather than on the uniform surface that makes up much of their form. And it means we can track their movements as different columns fire in response to changes. We can recognise individual objects and their movement, and dodge an oncoming football, rather than just wonder why the white blob is getting bigger. This orientation sensitivity is so integral that its discovery earned David Hubel and Torsten Wiesel a Nobel Prize in 1981.9

  The secondary visual cortex is responsible for recognising colours, and is extra impressive because it can work out colour constancy. A red object in bright light will look, on the retina, very different from a red object in dim light, but the secondary visual cortex can seemingly take the amount of light into account, and work out what colour the object is ‘meant’ to be. This is great, but it’s not 100 per cent reliable. If you’ve ever argued with someone over what colour something is (such as whether a car is dark blue or black) you’ve experienced first hand what happens when the secondary visual cortex gets confused.

  It goes on like this, the visual-processing areas spreading out further into the brain, and the further they spread from the primary visual cortex the more specific they get regarding what it is they process. The processing even crosses over into other lobes: the parietal lobe contains areas that handle spatial awareness, while the inferior temporal lobe processes recognition of specific objects and (going back to the start) faces. We have parts of the brain that are dedicated to recognising faces, so we see them everywhere. Even if they’re not there, because it’s just a piece of toast.

  These are just some of the impressive facets of the visual system. But perhaps the one that is most fundamental is the fact that we can see in three dimensions, or ‘3D’ as the kids are calling it. It’s a big ask, because the brain has to create a rich 3D impression of the environment from a patchy 2D image. The retina itself is technically a ‘flat’ surface, so it can’t support 3D images any more than a blackboard can. Luckily, the brain has a few tricks to get around this.

  Firstly, having two eyes helps. They may be close together on the face, but they’re far enough apart to supply subtly different images to the brain, and the brain uses this difference to work out depth and distance in the final image we end up perceiving.

  It doesn’t rely solely on the parallax resulting from ocular disparity (that’s the technical way of saying what I just said), though, as this requires two eyes working in unison, yet when you close or cover one eye the world doesn’t instantly convert to a flat image. This is because the brain can also use aspects of the image delivered by the retina to work out depth and distance: things like occlusion (objects covering other objects), texture (fine details visible in a surface if it’s close but not if it’s far away) and convergence (things up close tend to be much further apart than things in the distance; imagine a long road receding to a single point), among others. While having two eyes is the most beneficial and effective way to work out depth, the brain can get by fine with just one, and can even keep performing tasks that involve fine manipulation. I once knew a successful dentist who could see out of only one eye; if you can’t manage depth perception, you don’t last long in that job.

 
