Biomimicry


by Janine M. Benyus


  Native Americans had no trouble accepting biomimicry. Long ago they acknowledged that they were led to their medicines by animals, most notably the bear. Tribes in Africa also reportedly turned to animals (their livestock) to find out what to eat after their drought-stricken crops had failed. As tribal leaders told Donald Vermeer, “We ate the plants they ate, found we were OK, and now we eat those plants.” Even the U.S. Navy understands that animals may hold clues to our survival. In the U.S. Naval Institute’s 1943 book How to Survive on Land and Sea, authors John and Frank Craighead write, “In general it is safe to try foods that you observe being eaten by birds and mammals…. Food eaten by rodents or by monkeys, baboons, bears, raccoons, and various other omnivorous animals usually will be safe for you to try.”

  Why has it taken so long for the rest of us to come around to what is so obvious—that animals living in this world for millions of years might lead us to foods and drugs? Perhaps it’s the old specter of the belief that animals can’t teach us anything. When I ask Kenneth Glander, he frowns beneath his handlebar mustache and says, “It probably has something to do with the fact that we think we’re above animals. To say that we’ve learned something by watching a lower or nonhuman animal could be viewed as belittling to humans.” He hears himself and stops. “See? Even the terminology—‘lower animals’ and ‘nonhuman’—has a bias built in, and that bias is reflected in our reluctance to accept anything that is not human—and in some cases, even other humans’ knowledge.” He’s right. We’ve only recently expanded our kinship circle to include indigenous cultures, to accept the so-called primitives’ knowledge. It’s taken those of us in the Western culture too long to do that, and in the process we’ve lost the opportunity to learn from tribes now scattered. Finally, we’re beginning to include animals in our circle of consideration—hoping against hope that we are not too late.

  For 99 percent of the time that humans have been on Earth, we watched the ways of animals to ensure our own survival as hunters and gatherers. Now, in a strange repeat of history, we are once again watching what they eat and what they avoid, what leaves they swallow whole or rub into their fur, and we are making notes to pass on to our tribe, the scientific community.

  In some of the places we watch, the human connection to the Earth has been severed. There are no cooking fires to storytell around, no ceremonial dances to reenact the movement of the herds. Yet even in the most modern setting, there is indigenous knowledge in the collective wisdom of wild communities. Animals embody the same rootedness that made local people local experts—they are a living repository of habitat knowledge. This habitat knowledge gives animals the wherewithal to balance their diet, to incorporate new foods without poisoning themselves, to prevent and treat ailments, and perhaps even to influence their reproductive lives.

  Wild plant eaters have already filtered and screened, assayed and applied the kaleidoscope of compounds that make up their world and ours. It is through them that we can tap the enormous potential of plant chemicals. By accepting their expertise, we may be retrieving the lost thread to a world we once knew well.

  CHAPTER 6

  HOW WILL WE STORE WHAT WE LEARN?

  DANCES WITH MOLECULES: COMPUTING LIKE A CELL

  Nerve cells are the mysterious butterflies of the soul, the beating of whose wings may someday—who knows?—clarify the secret of mental life.

  —SANTIAGO RAMÓN Y CAJAL, father of modern brain science

  No one can possibly simulate you or me with a system that is less complex than you or me. The products that we produce may be viewed as a simulation, and while products can endure in ways that our bodies cannot, they can never capture the richness, complexity, or depth of purpose of their creator. Beethoven once remarked that the music he had written was nothing compared with the music he had heard.

  —HEINZ PAGELS, author of The Dreams of Reason

  I became curious about Jorge Luis Borges (1899–1986), the avant-garde Argentinean writer, after running across his quotes in so many of the mind/brain and computer books I was reading. His stories had made him a cult figure of sorts. When I read “The Library of Babel,” I began to understand why. In that story, Borges asks us to imagine a huge library that contains all possible books, that is, each and every combination of letters, punctuation marks, and spaces in the English language.

  Most of the books, of course, would be gibberish. But scattered throughout this vast library of possibilities would be books that made sense—all the books written, and all the books yet to be written. (At times, I agreed with Kevin Kelly, author of Out of Control, who wrote that it would be nice to visit Borges’s library and simply find his next book without having to write it.) Surrounding these readable books, and fanning out in all directions in bookcases shaped like honeycombs, would be thousands of “almost books,” books that were almost the same, except a word was transposed, a comma missing. The books closest to the real book would be only slightly changed, but as you got farther away, the books would degenerate into gibberish.

  You could work your way up to a readable book in the following manner. Pick a book and browse it. Gibberish, gibberish, gibber—now wait a minute, here’s one that has a whole word. You’d open a few more books, and if you found one that had two words and then three, you’d know you were on to something. The idea would be to walk in the direction of increasing order. If each book made more sense than the last, you would be getting warm. As long as you headed in the same direction, you would eventually come to the center of order—the complete book. Perhaps the book you are now holding in your hands.
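
  In programming terms, this walk is a hill climb. A minimal sketch, assuming a toy alphabet and a short target sentence of my own choosing, shows the “getting warmer” rule at work: start with gibberish and keep any one-letter change that reads at least as much like the real book.

    import random
    import string

    ALPHABET = string.ascii_lowercase + " ,."   # toy alphabet: letters, space, punctuation

    def order(candidate, target):
        """How much 'sense' a book makes: characters that match the real book."""
        return sum(a == b for a, b in zip(candidate, target))

    def walk_toward(target, seed=0):
        """Start at a gibberish book; keep any one-character change that does not
        lose order, and stop when the complete book appears."""
        rng = random.Random(seed)
        book = [rng.choice(ALPHABET) for _ in target]
        steps = 0
        while order(book, target) < len(target):
            neighbor = book[:]                                    # browse a nearby book
            neighbor[rng.randrange(len(target))] = rng.choice(ALPHABET)
            if order(neighbor, target) >= order(book, target):
                book = neighbor                                   # getting warmer
            steps += 1
        return "".join(book), steps

    text, steps = walk_toward("the complete book.")
    print(f"found {text!r} after browsing {steps} neighboring books")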

  Computer scientists call this library of all possible books a “space.” You can talk about the space of all possible anythings. All possible comic books, all possible paintings, all possible conversations, all possible mathematical formulas. Evolution is like a hike through the “space” of all carbon-based life-forms, an upward climb past the contour lines of the “almost survived” to the mountain peak of survivors.

  Engineering is also a form of bushwhacking through the space of all possible solutions to a problem, climbing toward better and better solutions until you reach the optimal peak. When we went looking for a machine that would represent, store, and manipulate information for us, we began the long trek toward modern-day computers.

  What’s humorous is that we forgot that we were not the only mountain climbers in the landscape of computing space. Information processing—computing—is the crux of all problem solving, whether it’s done by us or by the banana slug on the log on which we are about to sit. Like us, the slug takes in information, processes it, and passes it along to initiate an action. As it begins oozing out of our way, our eyes take in the flicker of movement and pass it to our brain, saying, “Wait, don’t sit.” Both are forms of computing/problem solving, and evolution has been at it a lot longer than we have.

  In fact, life has been wandering through the landscape of computing possibilities for 3.8 billion years. Life has a world of problems to solve—how to eat, survive capricious climates, find mates, escape from enemies, and more recently, choose the right stock in a fluctuating market. Deep inside multicellular organisms like ourselves, problem solving is occurring on a colossal scale. Embryonic cells are deciding to become liver cells, liver cells are deciding to release sugar, nerve cells are telling muscle cells to fire or be still, the immune system is deciding whether to zap a new foreign invader, and neurons are weighing incoming signals and churning out the message “Buy low, sell high.” With mind-boggling precision, each cell manufactures nearly 200,000 different chemicals, hundreds at any one time. In technical terms, a highly distributed, massively parallel computer is hacking a living for each of us.

  The problem is, we don’t always recognize nature’s computing styles because they are so different from our own. In the vast space of all possible computing styles, our engineers have climbed one particular mountain—that of digital silicon computing. We use a symbolic code of zeros and ones, processing in a linear sequence at great speeds. While we’ve been perfecting this one ascent, nature has already scaled numerous peaks in a whole different range.

  Michael Conrad is one of the few people in computing who has stood on our silicon digital peak and taken a look around. Far off in the distance, he has spied nature’s flags on other peaks and decided to climb toward them. Abandoning zeros and ones, Conrad is pursuing a totally new form of computing inspired by the lock-and-key interactions of proteins called enzymes. It’s called jigsaw computing, and it uses shape and touch to literally “feel” its way to a solution. I decided to hike out to find him.
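
  Purely as a toy illustration of the lock-and-key idea, and not Conrad’s actual scheme, one can sketch shape-based recognition in a few lines of Python: a “key” is accepted only by the “lock” whose bumps and hollows complement it, so the fit itself does the deciding. The receptor names and the fit rule are invented for the example.

    # Toy sketch of shape-based ("jigsaw") matching -- illustrative only, not
    # Michael Conrad's architecture. A shape is a tuple of surface heights;
    # a key fits a lock when every bump settles into a matching hollow.

    FIT_LEVEL = 4  # assumed: bump height + hollow depth must total this

    def fits(key, lock):
        """True if the two shapes are complementary along their whole length."""
        return len(key) == len(lock) and all(k + l == FIT_LEVEL for k, l in zip(key, lock))

    def recognize(signal, receptors):
        """Return the receptors whose shape complements the incoming signal."""
        return [name for name, shape in receptors.items() if fits(signal, shape)]

    receptors = {                       # hypothetical receptor pool
        "fire_muscle": (1, 3, 2, 0),
        "release_sugar": (2, 2, 1, 3),
    }
    print(recognize((3, 1, 2, 4), receptors))   # -> ['fire_muscle']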

  WHAT?! NO COMPUTER?

  After reading Conrad’s papers, I honestly didn’t know whether to look for him in the department of mathematics, quantum physics, molecular biology, or evolutionary biology. He has worked for a time in all of these disciplines (“I couldn’t stop myself,” he says), but these days, like a volunteer plant flourishing in a foreign ecosystem, Conrad brings his organic sensibilities to the most inorganic of sciences—computer science.

  I was excited about going to see him. Although I make my home on the edge of the largest wilderness in the lower forty-eight states and adore all things biological, I am a shameless technophile when it comes to computers. I wrote my first book on a begged, borrowed, and all but stolen Osborne that had a blurry amber screen the size of an oscilloscope. I graduated from that to a sewing-machine-style Zenith luggable with a slightly larger green screen and the original, hieroglyphic WordStar program. I wrote the next three books peering into the monochrome scuba mask of a Macintosh SE/30 circa 1986 (a very good year in Apple’s history). Finally, at the beginning of this book, I graduated to a Power Macintosh topped by a twenty-inch peacock of a monitor. I am completely smitten. To me, my computer is a semi-animated being, a connector to other inquiring minds on the Internet and a faithful recorder for every idea that stubs its toe on my receptors. In short, it’s a mind amplifier, letting me leap tall buildings of imagination.

  So, naturally, on my way over to Michael Conrad’s office at Wayne State University in Detroit, I began to wonder what I’d see. Since he is head of the cutting-edge BioComputing Group, I thought he might be a beta tester for Apple and I’d get to see the next PowerBook or the operating system code-named Gershwin. Maybe he had a whole wall full of those flat-panel screens, controlled via a console/dashboard at his fingertips. Or maybe the desk itself would be a computer, ergonomic and wraparound, with a monitor built into eyeglasses and a keyboard that you wear like a glove. This would be something to see, I thought. Luckily, I had a few minutes alone in Conrad’s office before he arrived—time to check out the gear.

  It’s strange. Here I am, in the lair of one of the most eminent minds in futuristic computing, and there isn’t a CPU (central processing unit—the guts of the computer) in sight. No SIMMS, RAM, ROM or LANS, either. Instead, there are paintings. Not computer-generated laser prints, but heavy oils and watercolors with Conrad’s signature on them. The largest, the size of a blackboard, looks like a fevered dream of the tropics spied through a lens that sees only greens, yellows, and blacks. It is disturbingly fecund—a hallucinogenic jungle of vines, heart-shaped leaves, and yellow blossoms, spiraling toward the viewer. A smaller painting of a painter—a Frenchman with beret and palette out by the docks somewhere—greets you as you come inside Conrad’s office, as if to say, to visitors and to himself returning, the mathematician is really a painter. There are also paintings by his daughter. One on his desk has Picasso-esque double faces and daisy-petal legs going round in a wheel. I later find out that she is five, the age Conrad was when he asked his parents for oil paints.

  Behind his desk sits an old Olympia typewriter (manual) and from the looks of fresh droppings of correction fluid, it still sees use. Finally I pick out the computer, nearly swallowed by a white whale of papers, journals, and notebooks. It’s a yellowing Mac Plus from the early eighties, now considered an antique. When you turn it on, a little bell rings Ta Da!, and a computer with a happy face pops on the screen and says WELCOME TO MACINTOSH. I am perplexed.

  When Conrad arrives, I recognize him from the French artist painting. He is without the palette but he does wear a maroon beret over a graying ponytail and zigzagging part. His eyes are so very alive that they almost tear up with emotion when he looks at you. He has caught me ogling his Mac Plus and he goes over to it. I expect him to throw an arm around it and tell me how important this machine was to the computer revolution. Instead he says, “This is the deadest thing in the universe.”

  PORT ME NOT: A COMPUTER IS NOT A GIANT BRAIN

  In the forties, the term “computers” referred to people, specifically mathematicians hired by the defense department to calculate the trajectories of armaments. In the fifties, these bipedal computers were replaced by computing machines known colloquially as giant brains. It was a tempting metaphor, but it was far from true. We now know that computers are nothing like our brains, or even like the brains of slugs or hamsters. For one thing, our thinking parts are made of carbon, and computers’ are made of silicon.

  “There’s a clear line in the sand between carbon and silicon,” says Conrad, and when he realizes his pun (silicon is sand) he breaks into a fit of laughter that springs loose a few tears. (I like this guy.) He wipes his eyes and begins to paint a picture of the differences between the human brain and a computer, the reasons he thinks a silk purse will never be made from this silicon ear.

  1. Brained beings can walk and chew gum and learn at the same time; silicon digital computers can’t.

  In the “space” of all possible problems, modern computers prove worthy steeds, doing a wonderful job of number crunching, data manipulation, even graphic manipulation tasks. They can mix, match, and sort bits and bytes with aplomb. They can even make dinosaurs from the Jurassic Era seem to come alive on the screen. But finally, our steeds stall when we ask them to do things that we take for granted, things we do without thinking. Remember negotiating the crowded dance floor at your twenty-year reunion? Scanning a few feet ahead, you recognized faces from the past, put names to them, spotted someone approaching you, and recalling “the incident,” you hid behind a tray of ham roll-ups. All in a split second. Ask a computer to do all this and you’d wait an ice age for a response.

  The fact is that humans and many so-called “lower” animals do a great job of interacting with a complex environment; computers don’t. We perceive situations, we recognize patterns quickly, and we learn, in real time, via hundreds of thousands of processors (neurons) working in parallel; computers don’t. They’ve got keyboards and mice, which, as input devices go, can’t hold a candle to ears, eyes, and taste buds.

  Engineers know this, and they would love to build computers that are more like us. Instead of typing into them, we would simply show them things, or they would notice for themselves. They would be able to answer not just yes or no, but maybe. Spotting someone who looks familiar, they would venture a fuzzy guess as to the person’s name, and if they were mobile (robotic), they would tap the person on the shoulder or wheel away, depending on what they had learned in the past. Like most of us, as our computers got older, they’d get wiser.

  But at this point, all these tasks—pattern recognition, parallel processing, and learning—are stuck on the drawing boards. They are, in the words of computer theorists, “recalcitrant problems with combinatorial explosions,” meaning that as the complexity of the problem grows (scanning a roomful of faces instead of just one), the amount of power and speed needed to crack the nut “explodes.” The already blinding speed of modern processors can’t touch the task. The question has become, how do we speed them up? Or more precisely, how do we speed them up if we’re still stuck on controlling them?
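
  The scale of that explosion is easy to see with a rough count, using made-up numbers rather than anything from the text: if a matcher must consider every way of pairing the features it sees with the features it remembers, the work grows factorially.

    import math

    # Illustrative only: pairing n scene features with n remembered features by
    # brute force means trying every one-to-one assignment -- n! of them.
    for n in (5, 10, 15, 20):
        print(f"{n:2d} features -> {math.factorial(n):,} possible pairings")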

  2. Brains are unpredictable, but conventional computing is obsessed with control.

  Today’s computer chip is essentially a switching network—a railyard of switches and wires—with electrons (the basic particles of electricity) instead of trains traveling to and fro. Everything is controlled via switches—tiny gates at intervals along the wires that either block the flow of electrons or let them pass through. By applying a voltage to these gates, we can open or close them to represent zeros or ones. In short, we can control them.
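
  A minimal sketch, with an assumed threshold voltage and Python standing in for hardware, captures the picture: each gate is a function that passes or blocks a signal depending on its control voltage, and small arrangements of such gates give you the zeros, ones, and logic.

    THRESHOLD = 0.5  # assumed control voltage needed to open a gate

    def switch(control_voltage, incoming):
        """One gate: pass the incoming signal only when the gate is open."""
        return incoming if control_voltage >= THRESHOLD else 0

    def and_gate(a, b):
        """Two switches in series: current flows only if both are open."""
        return switch(b, switch(a, 1))

    def or_gate(a, b):
        """Two switches in parallel: current flows if either is open."""
        return max(switch(a, 1), switch(b, 1))

    # 'Closed' and 'open' are the zeros and ones everything else is built from.
    print(and_gate(1, 1), and_gate(1, 0))   # -> 1 0
    print(or_gate(0, 1), or_gate(0, 0))     # -> 1 0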

  One way to speed computers up would be to shorten electrons’ commuting time by shrinking switches and packing them closer together. Knowing this, computer engineers have been “doing an Alice”—hanging out around the looking glass and itching to go smaller. Behind the mirror is a quantum world we can barely fathom, much less predict—a world of parallel universes, superposition principles, electron tunneling, and wayward thermal effects. As much as they’d like to cross that threshold, computer engineers acknowledge that there’s a limit to how small electronic components can be. It’s called Point One. Below .1 micron (the width of a DNA coil, or 1/500th the width of a human hair) electrons will laugh at a closed switch and tunnel right through. In a system built around control, this “jumping the tracks” would spell disaster.

  Another route to speedier and more powerful computers would be to keep the components we have now but just add more of them; instead of one processor, we’d have thousands working in parallel to solve a problem. At first blush, parallelism sounds good. The drawback is that we can’t be completely sure of what will happen when many programs are run concurrently. Programmers wouldn’t be able to look in the user’s manual to predict how programs would interact. Once again, control—the great idol of conventional computing—would do a faceplant.
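
  The unpredictability is easy to reproduce. In this sketch, a standard concurrency demonstration rather than anything specific to the text, several threads update a shared counter without coordination; because their read-modify-write steps interleave differently on every run, the final tally is anyone’s guess.

    import threading

    counter = 0

    def add_many(times):
        """Each worker does an uncoordinated read-modify-write on shared state."""
        global counter
        for _ in range(times):
            value = counter          # read
            counter = value + 1      # write; another thread may have run in between

    threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # With perfect coordination this would be 400000; in practice the threads'
    # updates collide, and the printed total changes from run to run.
    print(counter)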

  When you look under the hood, you realize that we didn’t build the “giant brain” in our image—we built it as a dependable, versatile appliance that we could control. The trick to predictable performance is conformity (as the military well knows). Standardized components must operate according to specs, so that any programmer in the world can consult the manual and write software that will control the computer’s operations. This conformity comes at a price, however, which is why our computers, unlike our individualized brains, can’t learn to learn.

 
