by Ruby Wax
These days, we don’t need to upgrade genetically because we’ve got things like central heating and sunblock. If we need to travel longer distances, we don’t need to grow more or stronger legs, we just make faster planes or cars. Most of us don’t go outside much, so here in the West one of the big challenges is your foot falling asleep because you haven’t moved it while sitting at your computer, or getting a crick in your finger from texting.
Most natural selection today is driven by cultural change, not climate change. Now, with technology at our fingertips, and with the possibility of instant exposure to infinite environments and intellectual challenges, we just can’t adapt our genes fast enough to keep up with life in the digital jungle. The reason we’re burning out and sprouting new diseases is that we aren’t equipped to modify our genes that fast, and yet, if we don’t, our species will go kaput.
To keep up with technological evolution, we’d have to evolve genetically a few times a month. Humans can produce a new generation only every twenty-five to thirty years, and it can take thousands of years for a new, upgraded trait to spread throughout the population. Some animals replicate once a week. There are flies that live for only twenty-four hours; at the end of a day of existence, they’re replaced by a whole new generation. They’re just getting into the swing of things and, boom, they’re dead.
The New New Age
As many science-fiction books will tell you, it was probably all part of the ‘bigger plan’: we evolved to this moment in time to build computers that can take over from the old, worn-out models called ‘us’. What’s coming in the future is coming. You can’t stop evolution, and the technological add-ons that are here, or almost here, are becoming extensions of us.
One of the first marriages of humans and their technology will be repairing brain injuries or other cognitive dysfunctions. Good news for those of us with mental illness. Antidepressants, at present, splatter-gun the brain, randomly hitting any old receptor they come upon to alter your chemicals. Neural-stimulation techniques, coming soon to your future, will be able to focus on a single area.
Further down the line, there will probably be memory enhancement for the ageing. We already have deep-brain stimulation to alleviate symptoms of Parkinson’s Disease, cochlear implants to restore hearing, and prosthetic limbs for those with disabilities.
Recently, electrodes implanted in a quadriplegic woman’s motor cortex enabled her to fly an F-35 fighter jet in simulation. A monkey used its mind to ride around in a wheelchair. (I’m not sure what the point of that one was, but he did it.) The scientist and physician Miguel Nicolelis and his team made it possible for a paralysed man to make the opening kick of the World Cup (see his TED Talk).
The same technology that allows a quadriplegic to use their thoughts as a remote control to move a bionic limb will make it possible for anyone to use their thoughts as a ‘remote control’ for everything. All your online shopping could be done simply by imagining it. The remote is already here and is being used by people who are paralysed, who can move a cursor on a screen just by using their thoughts.
As we speak, or as you read, Elon Musk (the man behind Tesla and SpaceX; some say he’s brilliant, others not) and his team are working to create a brain–machine interface through which all the neurons in your brain will be able to communicate with the outside world. Elon says, ‘We already have a digital tertiary layer in a sense, in that you have your computer or your phone or your applications. You can ask a question via Google and get an answer instantly. You can access any book or any music. With a spreadsheet, you can do incredible calculations. You can video chat with someone in Timbuktu for free. This would’ve gotten you burnt for witchcraft in the old days.’
The New You
This century may be the one when we, as a species, manage to snatch the genetic code from the clutches of evolution and learn to reprogramme ourselves. People alive today could witness the moment when biotechnology frees the human lifespan from the will of nature and hands it over to the whim of each individual. So now a whole new set of questions comes into play, such as: would you actually want to be immortal? And, if you do, where will the next generation live? Maybe in jars.
We’ve neutralized the power of natural selection with modern-day medicine and technological innovations. We don’t have to wait around for new and improved humans any more, thanks to in vitro fertilization. Parents can choose which embryo to implant, like choosing a lobster at a seafood restaurant. Specialists use gene-editing tools to create new mutations, so parents will be able to design their own babies’ gender, hair or eye colour. One of our big aims as a species is to develop greater intelligence, since it’s what got us where we are today, and our genes have evolved to dedicate more and more resources to our brains. We won’t have to wait for evolution to up the ante; we’ll soon be able to enhance intelligence by choosing the most intelligent embryo. Also, we’ll be able to manipulate DNA to engineer cells to create the next Einstein, Rembrandt or Olympic champion. I’m not saying this is good or bad, I’m just the messenger.
Kevin Kelly, publisher of the Whole Earth Review, executive editor at WIRED, founder of visionary non-profit organizations and writer on biology and ‘cool tools’, agrees that we might now start using the machines we’ve created to take the next step in our evolution. We are already working on implants for the deaf; the next step just might be that they can hear things people with normal hearing can’t: the sound of a whale hundreds of miles away, for example, or even what someone might be thinking. Eventually, we won’t be able to tell our software from our biological brains.
We Have Always Upgraded
Before we came up with language, a mere fifty thousand years ago, we had no way of getting a thought from your brain into my brain. Then the technology of language was invented, transforming vocal cords and ears into the first communication devices. We used these devices for years, and they seemed to work well, until we started to spread over large distances, and then no amount of screaming would do the trick. Fortunately, phones came along to solve the problem, and everything since has been a technological upgrade on our mouths and ears, allowing us to stay connected to the world. I haven’t heard anyone complain in the hundred and forty-odd years since, or send Alexander Graham Bell hate mail. Or troll him.
Let’s face some reality at this point: twenty years ago, twenty thousand people had laser eye surgery to improve their vision; now, two million people a year get it done. It’s no big deal, and everyone’s happy when they can suddenly read stop signs. The same thing happened with pacemakers and then organ transplants. There are waiting lists for organs; that’s how widespread this technology is now. We have to accept that the brain–machine love match has already begun.
We’ve already got cyborgs roaming around (your next-door neighbour might be one). The definition of a cyborg in the dictionary is ‘a fictional or hypothetical person whose physical abilities are extended beyond normal human limitations by mechanical elements built into the body’. As I said earlier, there are thousands of people walking around right now with cochlear implants, retinal implants, pacemakers, deep-brain implants, and so on. The number of robotic procedures performed increases by about 30 per cent a year.
I made friends with a guy called Neil Harbisson, whom I met at TEDGlobal and immediately dragged to my home to meet the family. They thought he was the coolest person on earth. Neil was born unable to see colour; he saw only in black and white. So he built an electronic eye that detects colour frequencies and had it implanted in the back of his skull. Now, through bone conduction, he can hear colour as sound frequency. He hears the sound of red as C major. He says he can actually hear a Picasso or the sounds of a shopping mall and interpret these sounds into colours. Sometimes he puts certain colours of food on his plate so he can eat his favourite song. He has extended his senses by using technology as part of his body. One day, maybe we’ll all be able to buy an implant and extend our own senses, just like your average superhero.
Guess What’s Coming?
Ray Kurzweil is one of the world’s leading inventors, thinkers and futurists, someone who talks about technology and trends and looks at the bigger picture. He says, ‘Basically, thanks to the Human Genome Project, doctors are learning how to reprogramme the “outdated software” of our bodies.’ He predicts, ‘In the 2040s, humans will develop the means to instantly create new portions of ourselves, either biological or non-biological.’ This means, if you feel like it, you can sprout wings – or have a bigger penis, depending on your mood.
These are all speculations, not facts, but rumour has it that, by the late 2020s, we’ll be able to eat as much junk food as we want because we’ll all have nanobots injected into our bodies that will provide us with all the proper nutrients we need while also eliminating all the excess fat we’ll gain from eating twenty bags of Doritos and unlimited chocolate every day. Hurrah!
It’s reported that, at some point in the future, we’ll be able to beam ourselves into another person’s brain and experience the world as they see it, just as in the film Being John Malkovich, or that one where doctors shrink themselves down to nanometre size and go inside someone’s body, riding through the bloodstream, where they’re almost eaten, many times, by bacteria and foreign viruses. I saw it as a child and was convinced they were in me, so I sat on the loo for about a week, trying to squeeze them out.
It goes without saying, but I’m saying it: there’ll be a tsunami of digital sex. You’ll be able to create an avatar that’s Bluetoothed to your genitals. Voilà! You won’t even have to shave your legs or bother with makeup. You can be represented by Beyoncé while, in real life, you’re a slob.
Enter the Robots
The biggest demand for human-like robots is coming out of Japan; everyone’s living way too long and the older population isn’t getting any younger. Today, 25 per cent of Japan’s population is sixty-five or older and, by 2050, that’s going to increase to 39 per cent; their Ministry of Health says that, by 2025, Japan will need 4 million caregivers. The birth rate in Japan is low and they don’t like to let in too many immigrants, or give out work visas, so who’s going to take care of Grandma/pa? Well, Japan is leading the world in the creation of robots so maybe they’re looking for a solution there. Toyota has built one already, called Robina, named after Rosie, the cartoon robot in The Jetsons. Robina is just under four feet tall and can use words and gestures and wear a skirt. Her brother, Humanoid, can do the dishes, take care of Grandma/pa and even entertain, with specific talents. (I’m imagining the playing of spoons and clog dancing.)
Honda has created Asimo, a fully functioning humanoid, standing four foot tall with cameras for eyes. Asimo can answer questions, and, supposedly, interpret human emotions, movements and conversations.
My Conclusion
The drawback is that we might get so addicted to ‘the next big thing’ that we lose who we are and end up just being ‘out there’, communicating with virtual families and friends. I hope we don’t feel too lonely not being near real flesh-and-blood humans but I’m sure there will be some pill to take or some implant to implant if that happens.
A potential emotional sacrifice might be that if we ‘have it all’, we may lose out on any depth and just shallow out. Unless you experience a little bit of sadness or darkness, you won’t be able to feel compassion for anyone else. This also means there will be no literature, because only the dark stuff can be great. No one ever won a Booker Prize for a peppy novel where everyone ends up at a picnic. On the other hand, in the future, there might be a button on your keyboard to delete pain and one to hit for compassion, so – problem solved. Let’s also hope there’s a button marked ‘the present’, otherwise you’ll never be able to taste, smell, hear or see anything as it’s happening live. It will all be recorded on video, where nothing is ever as good as the real thing.
Also, I hope that, in the near future, there might be an automatic button on the computer that takes you offline and starts making all the decisions for you, like deciding which events, parties and meetings you really need to go to, which friends are worth seeing and which are draining you, and tells you honestly what your ‘look’ should be, taking into consideration your age, weight and personality. This means you’ll have time to have a life and declutter your brain. You can take it easy and it will do all the work. That’s what I’m looking forward to. Otherwise, we’ll continue to be slaves to the digital age.
One last thing, and this is the purpose and heart of mindfulness, I hope we’ll still have the facility to pay attention, to focus on things we choose to focus on. Once you lose attention, you’ll just get dragged from one thought to the next, which will scatter and rattle your mind. Unless we train ourselves to focus intentionally on what we choose to focus on – and, hopefully, it’s things that make us feel good – we’ll be in a constant state of distraction and dissatisfaction.
The Monk, the Neuroscientist and Me
Ruby: Ash, I was talking about Asimo, Honda’s human-like robot. Will robots like that be able to recognize emotions?
Neuroscientist: Yes, in a way, computers can already be trained to distinguish between facial expressions. You show them lots of photos of people smiling and they learn what a smile looks like. Then the computer can categorize a smiling face as happy, and now you start to have a system that can recognize emotions.
Ruby: Does the robot know the smile means there’s good news? Does it feel happy?
Neuroscientist: That’s the trick. Emotional expression can be complicated and it’s hard to cover all the possibilities with a robotic algorithm. With humans, we know that smiling might not always mean ‘happy’: there are genuine smiles, fake smiles and even angry smiles.
Ruby: How do you know a fake one when you see it? I’m smiling at you now, but can you tell underneath you’re really getting on my nerves?
Neuroscientist: Yes, I can tell, because we both feel that way. We just pick up a vibe from each other. We’re sensing many cues we’re unaware of. When we have our own emotions, we become aware of our behaviour and that lets us recognize what other people are doing. Computers don’t have that lived experience. They’re not aware, they just detect.
Ruby: And I can imagine, if you’re with a robot and it gets how you’re feeling wrong, that would really piss you off.
Neuroscientist: Yeah, exactly. My mother does that. She calls and tells me that I sound depressed but I wasn’t before I started speaking to her.
Ruby: Ed can’t read me either. He just stands there smiling and nodding but has no idea what I’m saying. I’m telling him I ran over his leaf-blower with the car and he’s still smiling and nodding.
Neuroscientist: It’s a hard problem, reading emotion, and it’s something that humans get wrong all the time.
Ruby: I was reading that the Japanese are also introducing robots for childrearing, to help get more women into the workplace. It turns out that Japanese women are the best educated in the world, but 70 per cent of them leave their jobs after their first child. The government and private funds are investing money in robots so the women can go to work. So, Ash, what are the chances that this chunk of metal can nurture a baby? What if we suddenly get robot babies? I wouldn’t want to breastfeed the Terminator.
Neuroscientist: Would you breastfeed R2D2?
Ruby: Would you? ’Cos, in the future, that’s what you’ll be doing.
Neuroscientist: You know, scientists have been trying to make artificial mothers for a long time. In the fifties and sixties, an American psychologist named Harry Harlow did some amazing work on artificial mothers for monkeys. Harlow’s mother monkey was a block of wood covered with sponge and an old towel, with a light bulb on the inside to make it warm. He thought that all a baby monkey needed was food, water, warmth and something to cling to.
Ruby: I think I would have been better off if my mother was made of an old towel. She did, however, carry sponges wherever she went, in case a microbe of dust was visible to the naked eye. What happened to the monkeys with the sponge mothers?
Neuroscientist: Harlow thought it was a big success because, when the babies were frightened, they would cling to the sponge mother for comfort. But when we watch his old videos now, it’s clear that the babies were so frightened they would cling to anything, even a large piece of lettuce.
Ruby: I know how they feel.
Monk: Yes, when I was brought up by wolves …
Ruby: Thubten’s just upset because we’ve cut him out of the conversation.
Neuroscientist: (ignoring Thubten) So, clearly, these artificial parents aren’t ideal. But it’s a tough one for mothers because, exactly at the time it’s most crucial to be there for the baby, it’s also likely to be a critical time in their career.
Ruby: That’s the bitch, that there’s always this choice between giving up your job or giving up time with your kid. It’s a lose–lose. What can the robot offer?
Neuroscientist: Robots can help with tasks around the house and maybe even with childrearing tasks like changing nappies, but the actual nurturing role would be very hard to mechanize. Caretakers mirror a baby’s emotions, they teach babies how to self-soothe and calm down. Robots can’t do that, not yet anyway.