Standard links. Few kinds of logic gates. These principles already suffice to create chips that play chess more powerfully than mankind’s best, to find a single page in a million different books, or to “print” objects in three dimensions. The capabilities of real-world programmable circuits are so reminiscent of nature’s innovation powers that they suggest a profound question: Are entire libraries of digital circuits—huge circuit collections that can be created through recombining logic gates in every possible way—organized like the library of biological circuits? The answer can tell us whether the warp drives of biological innovation could be mounted on the spaceships of technological innovation.
Karthik Raman provided this answer. A graduate of the Indian Institute of Science, one of India’s top universities, Karthik came to my lab as a postdoctoral researcher. And he did not come alone. He also brought an effervescent enthusiasm for science, dogged tenacity in the face of failure—as inevitable in science as in evolution—and a wizardlike talent for analyzing complex data. When I invited him to map libraries of programmable circuits, he jumped right on it.
Although commercially available programmable chips have more than a million gates, some back-of-the-envelope calculations convinced us that we should study smaller circuits. A library of circuits with a mere sixteen logic gates contains 10⁴⁶ such circuits—a number already large beyond imagination—and this number increases exponentially with the number of gates, to 10¹⁰⁰ circuits with only thirty-six gates.46 Huge numbers like this also made it easy to decide whether to build circuits in hardware or to study them in the computer: Millions of circuits are most easily analyzed inside a computer.47
A sixteen-gate circuit could in principle compute 10¹⁹—ten million trillion—Boolean functions, but we didn’t know whether the library’s circuits encoded that many.48 Perhaps its circuits could compute only a few functions, like addition or multiplication. To find out, Karthik first cast a net wide enough to haul in as many volumes as possible from the circuit library. He created many circuits with random wirings, two million of them, and found that they computed more than 1.5 million logic functions, only a few of them as familiar as the AND function. Even though he had hauled in only a small fraction of circuits—there were still 10⁴⁰ times more circuits left, and 10¹² times more functions to explore—his enormous catch taught us that even simple circuits can compute numerous Boolean functions.
Because the library hosts many more circuits than there are functions—10²⁶ times more, to be precise—we knew that there must be multiple synonymous texts, circuits computing the same logic function, but we didn’t know how they were organized. To find out, Karthik started with a circuit that computed an arbitrary logic function and changed it to one of its neighbors in the library, for example by reconnecting the input of one gate to the output of another. If this “mutated” circuit still computed the same logic function, Karthik kept it. If not, he tried another rewiring, and repeated that until he had found a circuit with the same function. From that new circuit he took another step, and another, and so on, such that each step preserved the circuit’s function. Karthik started random walks like this from more than a thousand different circuits, each one computing a different function that needed to be preserved.
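Both halves of this procedure, the wide net of random circuits and the function-preserving random walk, are easy to express in code. What follows is a minimal sketch in Python with invented toy parameters (four inputs, six gates, one output read from the last gate, four gate types); the actual study used larger circuits and a different setup, so only the logic of the procedure carries over:

```python
import random

# Toy model of a circuit library (parameters invented for illustration).
# A circuit is a list of gates; each gate is a tuple (op, s1, s2), where
# a source is either one of the circuit inputs (0..NUM_INPUTS-1) or the
# output of an earlier gate, which keeps the circuit feed-forward.

NUM_INPUTS = 4
NUM_GATES = 6
OPS = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "XOR":  lambda a, b: a ^ b,
}

def random_circuit():
    """One random volume from the library: random gate types and wiring."""
    circuit = []
    for g in range(NUM_GATES):
        sources = NUM_INPUTS + g  # circuit inputs plus earlier gates
        op = random.choice(list(OPS))
        circuit.append((op, random.randrange(sources), random.randrange(sources)))
    return circuit

def function_of(circuit):
    """The circuit's logic function: its full truth table, reading the
    last gate as the single output."""
    table = []
    for x in range(2 ** NUM_INPUTS):
        signals = [(x >> i) & 1 for i in range(NUM_INPUTS)]
        for op, s1, s2 in circuit:
            signals.append(OPS[op](signals[s1], signals[s2]))
        table.append(signals[-1])
    return tuple(table)

def neutral_walk(circuit, steps):
    """A random walk through the library: each attempted step rewires one
    gate, and is kept only if the logic function is preserved."""
    target = function_of(circuit)
    circuit = list(circuit)
    for _ in range(steps):
        g = random.randrange(NUM_GATES)
        op, s1, s2 = circuit[g]
        sources = NUM_INPUTS + g
        rewired = (op, random.randrange(sources), random.randrange(sources))
        old, circuit[g] = circuit[g], rewired
        if function_of(circuit) != target:
            circuit[g] = old  # reject: the rewiring changed the function
    return circuit

# Cast the net: sample random circuits and count distinct functions.
caught = {function_of(random_circuit()) for _ in range(10_000)}
print(len(caught), "distinct logic functions among 10,000 random circuits")

# Walk: end far from the start, with the function intact.
start = random_circuit()
end = neutral_walk(start, steps=5_000)
assert function_of(end) == function_of(start)
```

Sampling a few thousand random circuits mimics Karthik’s net on a small scale; the walk shows how a circuit’s wiring can drift freely while its function stays fixed.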
The networks of circuits he found reached even farther through the library than the genotype networks from earlier chapters: From most circuits one could walk all the way through the circuit library without changing a circuit’s logic function. Two circuits may share nothing, not a single gate or wire, except the logic function they express, yet they can still be part of a huge network of circuits connectable through many small wiring changes. What’s more, we found that this holds for every single function we studied. It is a fundamental property of digital logic circuits.49 The library of digital electronics is like biology, only more so.
Karthik next turned to the neighborhoods of different circuits computing the same function, created all their neighbors, and listed the logic functions that each of them computed. He found that these neighborhoods are just as diverse as those in biology. More than 80 percent of functions are found near one circuit but not the other.50 This is good news for the same reason as in biology: One can explore ever more logic functions while rewiring a circuit without changing its capabilities. A circuit’s neighborhood contains circuits with some sixty new functions, but a mere ten rewiring steps make a hundred new functions accessible, a hundred rewirings put four hundred new functions within reach, and a thousand changes can access almost two thousand new functions.51
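In the toy model sketched above, this neighborhood census takes only a few more lines (reusing NUM_INPUTS, random_circuit, function_of, and neutral_walk from the earlier sketch; the percentages and counts in the text come from the real study, not from this toy):

```python
def neighbors(circuit):
    """All circuits one rewiring step away: every alternative source
    for each of the two inputs of every gate."""
    for g, (op, s1, s2) in enumerate(circuit):
        sources = NUM_INPUTS + g
        for s in range(sources):
            if s != s1:
                yield circuit[:g] + [(op, s, s2)] + circuit[g + 1:]
            if s != s2:
                yield circuit[:g] + [(op, s1, s)] + circuit[g + 1:]

def new_functions_nearby(circuit):
    """The set of logic functions, other than the circuit's own, found
    among its one-step neighbors."""
    own = function_of(circuit)
    return {f for f in map(function_of, neighbors(circuit)) if f != own}

a = random_circuit()
b = neutral_walk(a, steps=5_000)   # same function as a, different wiring
near_a, near_b = new_functions_nearby(a), new_functions_nearby(b)
print(len(near_a - near_b), "functions appear near circuit a but not near b")
```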
And the similarities continued. Earlier I mentioned the multidimensional fabric of biological innovability, the almost unimaginably complex, densely woven tissue of genotype networks. Karthik found that it has a counterpart in the circuit library, where a circuit with any function could be reached from any starting circuit by changing only a small percentage of wires. A fabric just like that of life’s innovability exists in digital electronics, and it can accelerate the search for a circuit best suited for any one task.52
Circuit networks thus have all it takes to become the warp drives of programmable hardware, in precisely the same way that genotype networks are the warp drives of evolution. They have the potential to help future generations of YaMoRs learn many new skills, from simple self-preservation like avoiding deadly staircases to complex skills like doing the dishes or playing ball with children. In this vision, their digital brains can rewire themselves step by little step, and explore many new behaviors, while being able to preserve old behavior—conserving the old while exploring the new.53 I wouldn’t even be surprised if our brains used a similar strategy to learn. We already know that they continually rewire the synaptic connections between our neurons, but perhaps our brains also explore new connections in the same way that organisms explore a genotype network. If so, the very same principle allowing biological innovation could be at work in the engines of human creativity.
Unfortunately, our ignorance in this area is still nearly absolute. We know next to nothing about the material basis of human creativity. We do know, however, that the kind of creativity we discovered is not free, because Karthik found its price tag. It was a familiar one.
When Karthik analyzed logic circuits that differed in their complexity—their number of logic gates—he found that the simplest circuits could not be rewired at all: change a single wire, and the circuit’s function is destroyed. Every gate and every wire matters. Such simple circuits have no innovability, because they cannot explore new configurations and computations. For rewiring, one needs more complex circuits, and the more complex they are, the more rewiring they tolerate. Their apparently superfluous gates and wires are like collections of spare parts—piles of Edison’s precious junk—that help compute new digital functions. Just as in biology, innovability comes from complexity, apparently unnecessary, but actually vital. This is one of nature’s lessons for innovable technologies: If we want to open nature’s black box of innovation, Ockham’s razor is much too dull. Like oil and water, simplicity and innovability don’t mix.
This doesn’t mean that simplicity and elegance are absent from powerful innovable technologies. Quite the opposite. But they hide beneath the visible world. The basic principle behind them is simplicity itself: With a limited number of building blocks connected in a limited number of ways, you can create an entire world. Out of such building blocks and standard links between them, nature has created a world of proteins, regulation circuits, and metabolisms that sustains life, that has brought forth simple viruses and complex humans, and ultimately, our culture and technology, from the Iliad to the iPad. The simplicity and the elegance of innovable technologies are hidden behind the visible world, just like nature’s libraries, whose faint reflection we see in the Tree of Life, like a shadow in Plato’s cave.
EPILOGUE
Plato’s Cave
In October 1970, the magazine Scientific American published a description of the Game of Life, a creation of the British mathematician John Conway, and a simplification of ideas on building self-replicating machines proposed by the polymath John von Neumann. Not requiring a human player, the “game” can unfold inside a computer on a two-dimensional grid of cells, each of which can be either “on” (alive) or “off” (dead). Each square on Conway’s grid has eight neighbors, and a very simple set of rules determines each cell’s fate. For example, if a cell has fewer than two live neighbors it turns off. In the game’s lingo it “dies.” The same happens if it has between four and eight live neighbors. However, if the cell has two or three live neighbors, it gets to live. The final rule: A dead cell with exactly three live neighbors is reanimated.
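The entire rule set fits comfortably in a dozen lines of code. Here is a minimal sketch in Python; the “glider” used to test it is a famous early discovery in the game, assumed here rather than taken from the text above:

```python
from collections import Counter

def life_step(live):
    """One generation of the Game of Life. `live` is the set of (x, y)
    coordinates of all live cells on an unbounded grid."""
    # Count the live neighbors of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A live cell survives with two or three live neighbors; a dead
    # cell with exactly three live neighbors is reanimated.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider," one of the game's simplest moving patterns:
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = life_step(cells)
# After four generations the same shape reappears, shifted one cell
# diagonally: cells is now {(2, 1), (3, 2), (1, 3), (2, 3), (3, 3)}.
```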
And that’s it. But depending on which cells are on and off when the game starts, what follows is anything but simple. Enormously complex patterns can emerge, a huge and unpredictable variety of forms, including “self-replicating” clusters of cells that spawn more of themselves. And from these simple beginnings, the Game of Life can go on indefinitely, creating complex patterns that never repeat or terminate.
Like life itself.
The game is a metaphor rather than a model for life, but it reflects a broader human aspiration: to understand life and its diversity through the language of mathematics and computation. This aspiration is much older than the game. Seventeen years after the publication of the Origin, Charles Darwin wrote in his autobiography, “I have deeply regretted that I did not proceed far enough at least to understand something of the great leading principles of mathematics, for men thus endowed seem to have an extra sense.”1
Like the zoopraxiscope that filmed Sallie Gardner four years before Darwin’s death, Darwin’s work sparked a revolution, but even if he had been a mathematician he would have been in the dark about the hidden architecture of life—he could not even know that it existed. To illuminate nature’s giant libraries, the flames of this revolution would require more fuel than Darwin’s theory.
Biology and mathematics first needed to become fully intertwined, which would take another century. It began with the mathematics of Sewall Wright and R. A. Fisher, which bridged the gap between traditional Darwinism and Mendelian genetics, and led to the modern synthesis that allowed the first accurate predictions of how fast natural selection can help innovations spread. Another half century had to elapse until systems biology taught us how molecules cooperate to produce the complex behavior and phenotypes of life. In doing so, it showed us that cells are vastly more complex than the simple elements of the Game of Life. Through regulatory circuits operating a bit like the neural networks of our brains, they perform sophisticated computations that regulate their own molecules and help them survive. And while these circuits are very different from digital computers—for one thing, they are self-assembled from organic molecules—they hint at a deep unity between the material world of biology and the conceptual world of mathematics and computation, a unity that Conway and Darwin could barely have guessed at.
The mathematical perspective of systems biology also allowed us to decipher the staggeringly complex phenotypic meaning of genotypic texts in nature’s libraries, which is crucial to understanding innovability. It led us to identify genotype networks, and to grasp that genotype networks are the common origin of the different kinds of innovations—in metabolism, regulation, and macromolecules—that created life as we know it. They propelled life from its very beginnings to single-celled organisms, from our bacterial and eukaryotic ancestors to primitive wormlike creatures, fish, amphibians, mammals, and all the way to humans, spanning billions of generations.
More than that, the mathematics of biology allowed us to see that these libraries organize themselves according to a principle as simple as the gravitation that helps mold diffuse matter into enormous galaxies. This principle—that organisms are robust, a consequence of the complexity that helps them survive in a changing world—brings forth the intricate organization of these vast libraries.
These libraries and their texts differ fundamentally from the muscles, nerves, and connective tissues that an anatomist dissects and that we can touch with our bare hands. They are not even like cellular organelles visible through a microscope, or the structure of DNA revealed by X-ray crystallography. They are concepts, mathematical concepts, touchable only by the mind’s eye.
Does that mean they exist only in our imagination? Did we discover them or invent them?
The question whether knowledge—especially mathematical knowledge—is created or discovered has occupied philosophers for more than twenty-five hundred years, at least since Pythagoras and certainly since Plato. Plato saw our visible world as a faint shadow cast by the light of a higher, timeless reality on the poorly lit walls of a cave we inhabit. Platonists posit that we discover truths, which come to us from a higher reality. They exist even if nobody is there to see them, like the dark side of the moon. Others, such as the Austrian philosopher Ludwig Wittgenstein, argue that mathematical truths are invented—in Wittgenstein’s words, “the mathematician is an inventor, not a discoverer.”2
Platonism has the upper hand in this debate, even though Plato himself was unaware of the best argument for it. It is the startling congruence between mathematical theorems and physical reality, encapsulated in a dictum often attributed to Galileo Galilei: “Mathematics is the language in which God wrote the universe.” (Words that should give any naïve creationist pause.) The Hungarian-born Nobel Prize–winning physicist and mathematician Eugene Wigner called it “the unreasonable effectiveness of mathematics in the natural sciences.”3
Unreasonable indeed: We know no reason why Newton’s laws should predict so much more than the speed of a falling apple, phenomena as different as the rotation of planets and the shaping of galaxies. Except they do. And so do countless other mathematical laws that explain phenomena so remote in space and time that we will never experience them directly. The nexus between math and reality is so tight, in fact, that the Swedish theoretical physicist Max Tegmark argues that the entire universe is mathematics.4
But the “unreasonable effectiveness” of math is not the only reason to believe in the reality of nature’s libraries and their genotype networks. Another is that the technology of the twenty-first century grants us unrestricted access to these libraries. In so doing it can shift the debate about discovery versus invention—uncomfortably abstract for millennia—from its traditional focus on languages like that of mathematics to incorporate experimental science. The reason is that we can now read individual volumes in nature’s libraries. We can, for example, manufacture any volume of the protein library—any amino acid sequence at all—and study its chemical meaning with the instruments of biochemistry. Many of these volumes were discovered by other organisms long before us, and their molecular meaning has surprised us greatly, as antifreeze proteins, crystallins, and Hox regulators testify. It’s a safe bet that nature’s libraries will continue to surprise us—more than anything we merely invented.
When we begin to study nature’s libraries we aren’t just investigating life’s innovability or that of technology. We are shedding new light on one of the most durable and fascinating subjects in all of philosophy. And we learn that life’s creativity draws from a source that is older than life, and perhaps older than time.
ACKNOWLEDGMENTS
Some key collaborators are mentioned by name in the text, but I am greatly indebted to numerous others, from graduate students to postdoctoral fellows and faculty colleagues at multiple universities. I am especially grateful for discussions with research associates in my laboratory, among them Aditya Barve, Sinisa Bratulic, Joshua Payne, José Aguilar-Rodríguez, and Kathleen Sprouffske. In many conversations superficially unrelated to this book, they have unknowingly sharpened my thinking about the material represented herein. My thanks also go to the numerous colleagues and fellow visitors whom I have encountered over the years at the Santa Fe Institute, which has remained a wellspring of new ideas and stimulation. Special thanks go to Jerry Sabloff and Doug Erwin, who provided feedback on an early draft of the manuscript. I am also indebted to Cormac McCarthy, who not only read this early draft but also provided many useful editorial comments. (Bowing to his avowed aversion to punctuation, this book is free of semicolons.) My faculty colleagues at the University of Zürich deserve thanks for helping create the kind of research environment in which projects like this can thrive.
Bill Rosen taught me that a good editor can turn caterpillars into butterflies. His guidance was instrumental at all stages of this project. He did an outstanding job and I cannot thank him enough. He and my agent, Lisa Adams, also helped me navigate the treacherous waters of the publishing industry. Lisa superbly handled all contractual matters. Furthermore, I am indebted to Niki Papadopoulos of Current for editorial support. Her incisive comments and questions have helped improve the manuscript greatly. She and her assistants, Kary Perez and Natalie Horbachevsky, have also promptly and patiently handled numerous queries. Last but not least, thanks go to my family for their benevolent tolerance of my moods when the roller coaster of the writing process went on one of its downturns.