Arrival of the Fittest: Solving Evolution's Greatest Puzzle


by Andreas Wagner


  The jet engine is scarcely unique as a combinatorial innovation. Decades ago, social scientists like the economist Joseph Schumpeter and the sociologist S. Colum Gilfillan argued that combining the old to make the new is essential for technological innovation. In his book The Nature of Technology the economist W. Brian Arthur goes further to say that even entire “technologies somehow must come into being as fresh combinations of what already exists.”22 So too in biology, as we heard in previous chapters: All evolutionary innovations, discovered as they are in searches through nearly infinite libraries, are combinatorial, just as new books combine old letters into new meanings.

  Trial and error. Populations. Multiple origination. Combination. With all these parallels between technology and nature, it is little surprise that technologists would try to mimic nature’s innovability. And I do not just mean biotechnologists, whose innovations are already legion, from the enzymes in our laundry detergents that turn my ten-year-old’s mud-caked pants spotless, to the engineered forms of insulin used by diabetics, to crops that have been genetically modified to produce a powerful bacterial toxin that kills insects preying on them. Because biotechnology uses biological materials, it already takes advantage of nature’s libraries. The bigger question is whether technologies built on man-made materials—glass, plastic, or silicon, like YaMoR—can do the same.

  Technologists took a big first step toward answering this question when they realized that evolution follows an algorithm, a recipe so simple and stereotypical that it could be executed by a machine.23 By altering DNA, mutations create organisms with new phenotypes, and selection allows some of them to survive and reproduce. Mutate. Select. Repeat over and over.24 Those technologists who know their automata—computer scientists—created an entirely new research field from this insight, one that revolves around evolutionary algorithms: recipes that use some form of mutation and selection to solve really difficult real-world problems, entirely inside a computer.25
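The mutate-select-repeat recipe fits in a few lines of code. The sketch below is an illustrative hill climber, not any particular researcher's program; the bit-string "genotype" and its fitness function (count the ones) are invented for the example.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def evolve(genotype, fitness, mutate, generations=1000):
    """Mutate. Select. Repeat: keep a mutation only if it does not hurt."""
    best = genotype
    for _ in range(generations):
        candidate = mutate(best)
        if fitness(candidate) >= fitness(best):   # select
            best = candidate                      # the mutant survives
    return best

# Toy problem: evolve a string of zeroes toward all ones.
def flip_one_bit(bits):
    mutant = bits[:]
    mutant[random.randrange(len(mutant))] ^= 1    # mutate one random bit
    return mutant

result = evolve([0] * 20, sum, flip_one_bit)      # fitness = number of ones
```

The loop knows nothing about the problem it is solving; swapping in a different genotype, mutation, and fitness function turns the same recipe loose on a different problem.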

  An especially famous and difficult such problem is known as the traveling salesman problem, a mathematical puzzle first formulated in the mid-nineteenth century by the Irish mathematician William Rowan Hamilton. The basics are simple: A salesman has to visit dozens of faraway clients on a regular basis, each of whom lives in a different city. The salesman spends a lot of time on the road and in the air. To spend more time with his family, he would like to keep each trip as short as possible and still visit all cities. The problem is to find a travel route that takes him to each city in turn, and then back home at the end of the trip, as quickly and efficiently as possible.

  This problem is a lot harder than it sounds. For a mere handful of cities, anybody could design the shortest possible route. But once the number of cities increases beyond a few dozen, the optimal route becomes surprisingly difficult to find. The traveling salesman problem is what computer scientists call an NP-hard (“nondeterministic polynomial-time hard”) problem.26 It is among the hardest of all puzzles that can be imagined, mainly because the number of potential routes grows faster than exponentially as new cities are added.

  Thousands of papers have been written about this problem, because it isn’t just for sales forces. Designers of computer chips face it too. Such chips contain thousands to millions of components that are wired together and exchange data. Because short wires between them can save energy and accelerate computations, designers of such chips need to find shortest possible routes that connect many circuit components (“cities”). Truckers who deliver goods to multiple department stores also know this problem, as does Federal Express, and so do school buses that pick up multiple children within a school district. Even bumblebees face it: A foraging bee might visit hundreds of flowers before returning to the hive, and cannot afford to waste energy on excessive travel.27

  It’s possible to calculate perfect solutions to the traveling salesman problem for up to thousands of “cities,” using sophisticated mathematical techniques with deceptively simple names like “cutting plane” and “branch-and-bound.” Such techniques can produce near-perfect solutions for up to millions of cities. But even an algorithm as blind and mindless as that used by biological evolution can tackle the problem, by instructing a computer to start with a completely arbitrary solution—any solution at all, no matter how bad. The computer’s program then mutates the solution randomly, changes the route between a few cities (or stores, or schools, or flowers), and checks to see whether the new route is shorter than the old one. If so, the program selects that mutant. Next, it tries a new mutation, and checks whether this mutation shortens the route. If not, it rejects the mutation, reverts to the original route, and begins again with a new mutation. Over many generations, this simple algorithm produces shorter and shorter routes, and eventually arrives at good if not perfect solutions.28
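In code, that blind mutate-and-select loop might look like the sketch below. The thirty random city coordinates are made up for the example, and the mutation, reversing a random stretch of the route (a move known in the trade as 2-opt), is just one of many ways to "change the route between a few cities."

```python
import math
import random

random.seed(1)

# Thirty hypothetical cities scattered on a 100 x 100 map.
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]

def tour_length(order):
    """Total round-trip distance visiting the cities in the given order."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def mutate(order):
    """Reverse a random segment of the route (a 2-opt move)."""
    i, j = sorted(random.sample(range(len(order)), 2))
    return order[:i] + order[i:j + 1][::-1] + order[j + 1:]

route = list(range(len(cities)))            # completely arbitrary starting route
best_len = tour_length(route)
for _ in range(20000):                      # mutate, select, repeat
    candidate = mutate(route)
    cand_len = tour_length(candidate)
    if cand_len < best_len:                 # keep only improvements
        route, best_len = candidate, cand_len
```

After twenty thousand generations the route is far shorter than the arbitrary starting route, though nothing guarantees it is the shortest possible one: good, if not perfect.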

  Evolutionary algorithms like this are also being put to work in some surprising places. Military planners use them to plot optimal routes for unmanned drones that patrol hostile territories.29 Cryptographers use them to encode sensitive data. Fund managers use them to predict the movements of financial markets. And automotive engineers use them to change how an engine operates, by optimizing the time or pressure at which fuel is injected into the engine. And those algorithms do indeed improve fuel efficiency.

  What they don’t do is revolutionize engine design.30

  The evolutionary algorithms that mimic biological evolution are powerful tools, but they’re still missing something. They are still deficient in the recombination department so central to biological innovations.31 Nature is better at recombination, much better, for one simple reason: standards.

  As we’ve seen in chapter 2, standards like the universal energy standard ATP and the universal genetic code are a sign of life’s common origin. The absence of such standards makes recombination more difficult in technology, which often uses ingenuity as a substitute. It took plenty of ingenuity to recombine a compressor, a combustion chamber, and a turbine to lift a plane’s many tons of metal into the air. The same holds for the combustion engines of today’s cars, whose parts are connected through one-of-a-kind links, like pistons and valves that need to be precision engineered to fit a cylinder. And likewise for signature inventions of the Industrial Revolution. The first practical steam engines combined steam-powered toys originally invented in first-century Alexandria with vacuum pumps from seventeenth-century Germany. Bench vises joined two of the simple machines of antiquity, a lever (the handle) and a screw. The first bicycles combined three: the wheel, the lever, and the pulley. All of these combinatorial innovations required ingenuity.

  It’s not that standards are absent from technology, far from it. Technology relies not only on universal laws of nature established by science, but also on standardized ways of measuring quantities like temperature, mass, or electric charge. But most technologies are deficient in a certain kind of standard, the one that allows nature to combine the old to make the new. Nature needs these standards, precisely because it does not have the ingenuity of human inventors.

  All the different things that proteins can do—catalyze reactions, transport molecules, support cells—emerge from strings of building blocks connected in the same way, through a standardized chemical connection called the peptide bond, where an atom of nitrogen from one amino acid bonds with an atom of carbon from its neighbor. Even though each amino acid has a different shape, they can all connect in the same way, because they share a universal interface. And this standard, used by all organisms, has made life as we know it possible. It allows nature to cobble together—blindly, without any ingenuity—the astronomical numbers of genotypes needed to find innovations.

  Standards that make recombination mindlessly easy do not just exist in proteins. RNA strings also use a standardized chemical bond to link their parts. Furthermore, life’s standard of information storage—DNA—allows bacteria to exchange genes and create new metabolisms from new combinations of the enzymes they encode. And finally, regulation circuits use a standardized way to regulate genes, based on the principle that regulator proteins can bind specific short words on DNA, allowing nature to combine old regulators into countless new circuits by changing these words.32 If we could take a small number of different objects, create a standard way to link them, and recombine them into every conceivable configuration—mindlessly—our powers to innovate could be just as immeasurable as those of nature.

  Such standardization is clearly not beyond human technology: Our hardworking Lego blocks hint at it, and so does a much older human technology.

  The sixteenth-century Venetian Andrea Palladio may be the most influential architect in Western history. Throughout a long and successful career, he conceived at least sixteen of the urban palazzos that housed Venice’s wealthiest families, built thirty country villas, and designed several churches.33 The floor plans of Palladio’s buildings are not identical. Far from it. They differ in a thousand ways. His buildings are different in size, shape, orientation, and the arrangement of their rooms. Yet they share an architectural essence, even if most of us would find that essence hard to pinpoint. More than twenty years ago, the art historian George Hersey and computer specialist Richard Freedman searched for this element, the secret behind Palladian floor plans.34 If rules behind these plans exist, they thought, then we must be able to find an algorithm to create any number of Palladian floor plans.

  To distill the essence of Palladio’s style, they analyzed dozens of Palladian villas: how their rooms were oriented, how their walls were placed, whether the lengths of adjacent rooms had certain proportions, and so on. And they succeeded. Their work culminated in a computer program that executes a recipe to create new Palladian floor plans. Its creations differ in many details, in the sizes, orientation, and arrangements of rooms, but all of the new designs are recognizably Palladian.35

  The algorithm behind this program starts from the outline of a building, usually rectangular in shape, and draws vertical or horizontal lines through it—walls that subdivide the building into rooms. One such line would split the building into two rooms, two parallel lines would split the building into three rooms, and so on. Each room could again be split by horizontal or vertical lines, and the resulting rooms can be split further, and so on, until only rooms with a desired size have been created. By splitting a rectangle multiple times, each time either vertically or horizontally, each time by one or more walls, one can create an infinite variety of floor plans.

  The rules underlying Palladian floor plans are neither arbitrary nor random. Nor are they complicated. For example, when a room is split in two with one line, it is usually split either in the middle or such that one of the resulting rooms is twice as long as the other. If it is split with two parallel lines, most splits create three rooms where the middle room is twice as long as the others. By combining these and a few other rules, this most celebrated style of Renaissance architecture can be re-created by a computer.36
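A toy version of such a recipe is easy to write down. The sketch below is not Hersey and Freedman's actual program; it only mimics the splitting idea with single-wall subdivisions, placing each new wall either at the midpoint or at a one-third point, so that one resulting room is twice as long as the other.

```python
import random

random.seed(2)

def split(rect, depth=0, max_depth=3):
    """Recursively subdivide a rectangle (x, y, w, h) into rooms.

    Simplified Palladian-style rule: each wall falls at the midpoint,
    or so that one of the two new rooms is twice as long as the other.
    """
    x, y, w, h = rect
    if depth == max_depth or min(w, h) < 2:
        return [rect]                      # small enough: a finished room
    frac = random.choice([1 / 2, 1 / 3, 2 / 3])
    if w >= h:                             # split the longer side: vertical wall
        a = (x, y, w * frac, h)
        b = (x + w * frac, y, w * (1 - frac), h)
    else:                                  # horizontal wall
        a = (x, y, w, h * frac)
        b = (x, y + h * frac, w, h * (1 - frac))
    return split(a, depth + 1, max_depth) + split(b, depth + 1, max_depth)

rooms = split((0, 0, 12, 9))               # a 12 x 9 building outline
```

Run it twice and you get two different floor plans; run it a thousand times and you get a thousand, all obeying the same few proportional rules.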

  Nature’s stereotypical recombination doesn’t work exactly the same way. Small amino acid parts are combined to make new proteins, while the outline of a Palladian building is subdivided to create a floor plan. But the similarities are more important: Both use a small number of standard building blocks and an even smaller number of rules to create enormously complex objects. And if these similarities exist already in preindustrial architecture, the odds that they exist in postindustrial technology—unappreciated perhaps—should be even better.37

  As in the technology behind robots like YaMoR: digital electronics.

  Each of the myriad transistors on an integrated circuit—the kind of computer chip guiding a robot—is really nothing more than a tiny electronic on-off switch that responds to electronic impulses: one denoted as a “0” turns the switch off, whereas the other, a “1,” turns it on. Together, these transistors transform an arriving stream of data into a departing stream that also contains only zeroes and ones, the familiar bits or binary digits from the simple two-letter alphabet that computers read. Mathematicians describe this process more precisely, by saying that a circuit calculates the values of a function, that it takes an input and computes an output.38 The kinds of functions that a digital circuit computes are named after the English mathematician and philosopher George Boole, who wrote about them in his 1854 book, An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities. Boole’s new branch of mathematics was an enormous advance, and Boolean logic functions, as they are called today, are still at the heart of all modern computers.

  FIGURE 20. Truth tables

  One of the simplest such functions is the AND function, which is needed whenever one searches, for example, an electronic database of sheet music for a specific composition, such as Mozart’s Magic Flute. The search engine would search for compositions containing the word “Mozart,” and it would report a yes or a no—encoded by the bits 1 and 0—for all compositions containing this word. It would do the same for the word “magic,” yielding two possible answers for each of these two partial questions, which makes four possible combinations of answers. Their digital one-zero version can be written in a form that mathematicians use to describe Boolean logic functions: a truth table. In the truth table of the AND function shown in figure 20a, the function’s inputs (the four possible combinations of partial answers) are written to the left of the vertical line, and the possible final answers (the output) to the right, again encoded as zeroes and ones. Merely one of the four lines contains a yes as the final answer—only if both partial answers are yes has a score for The Magic Flute been found.

  If you wanted to find compositions whose title contained the word “Mozart” or “magic” (or both), the search engine would have to compute another Boolean function: the OR function. Same input stream—everything with the word “Mozart” and “magic”—but a different rule. In this case, the output is a yes if at least one partial answer is a yes (figure 20b). As a result, the OR function would report not only The Magic Flute but also every other one of Mozart’s 626 pieces,39 as well as Santana’s “Black Magic Woman,” Stevie Wonder’s “If It’s Magic,” and hundreds more. An even simpler Boolean function, the NOT function (figure 20c) turns a yes into a no, and could help you find all compositions without the word “Mozart” in them.
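The three truth tables of figure 20 take only a few lines to express in code, where each function maps input bits to an output bit:

```python
def AND(a, b): return a & b      # 1 only if both inputs are 1 (figure 20a)
def OR(a, b):  return a | b      # 1 if at least one input is 1 (figure 20b)
def NOT(a):    return 1 - a      # turns a yes into a no (figure 20c)

# Print each two-input truth table: inputs left of the bar, output right.
for name, fn in (("AND", AND), ("OR", OR)):
    print(name)
    for a in (0, 1):
        for b in (0, 1):
            print(f"  {a} {b} | {fn(a, b)}")
```

The "Mozart" search is then just `AND(has_mozart, has_magic)` for each composition, and the broader search is `OR(has_mozart, has_magic)`.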

  These and other more exotic Boolean logic functions—XOR, XNOR, NAND, NOR—allow us to translate complex questions from natural languages into the strings of binary numbers that rule the world of computers. What’s more, binary numbers can encode any decimal number, and can be added, subtracted, multiplied, and divided just like decimal numbers. The integrated circuits of even the most sophisticated computers perform nothing but basic arithmetic and simpler Boolean logic functions like the AND function. With the simplest of all possible alphabets—zeroes and ones—and Boolean logic, digital computers can recognize images, encrypt information, deliver voice mail, and predict next Tuesday’s weather. Arithmetic, it turns out, is even more important than we learned in grade school.

  FIGURE 21. Logic gates

  Another remarkable fact about Boolean functions is that even the most complicated Boolean function can be computed by stringing together simpler functions, such that the output of one function becomes the input of another. It’s a bit like multiplication (3 × 4), which can be written as a series of additions (4 + 4 + 4). More than that, even though the number of possible Boolean functions is virtually infinite, each of them can be computed by stringing together only AND, OR, and NOT functions. This matters for computers, because in an integrated circuit, transistors are wired into units of computation that compute a simple Boolean function and that are known as logic gates. Figure 21 shows some of the symbols that chip designers use for the simple AND, OR, and NOT gates. Each gate has one or two wires to the left for its input bits, and one on the right for its output bit. In figure 22 a few gates are wired to perform the simplest possible arithmetic operation of adding two binary digits—this already requires six logic gates, each with multiple transistors.40 Today’s chips of course add, subtract, multiply, and divide much longer numbers of sixty-four binary digits, and require millions of gates.41

  FIGURE 22. A circuit adding two binary digits
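One way to wire gates into such a one-bit adder (a "half adder") is sketched below, using only the AND, OR, and NOT functions. Gate counts vary with the construction chosen, so this sketch need not match figure 22 gate for gate.

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a, b):
    """Add two binary digits; return (sum bit, carry bit)."""
    carry = AND(a, b)                  # 1 + 1 carries a 1 into the next column
    s = AND(OR(a, b), NOT(carry))      # sum bit is 1 if exactly one input is 1
    return s, carry
```

So `half_adder(1, 1)` yields a sum bit of 0 and a carry bit of 1: binary 10, which is 1 + 1 = 2. Chaining such adders, column by column with the carries rippling along, adds numbers of any length.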

  Most integrated circuits are hardwired in the factory, but robots like YaMoR are equipped with programmable hardware, chips that can be rewired to alter what some logic gates do—changing an AND gate into an OR gate, for example—and how these gates are wired. Some programmable chips can even be rewired while the chip is working.42 With more than a million logic gates, such chips are not just simple toys but powerful and flexible computing engines that could eventually help machines learn much as we do—by rewiring their own hardware—and create autonomous robots that can not just explore the world but also learn about its potholes and other pitfalls.43

  If this sounds familiar, it’s because such learning resembles evolution, which alters life’s genotypes one molecule at a time. A programmable circuit’s logic gates and wiring are an analog of a genotype that can be altered to explore new computations, the analog of new phenotypes. Like evolution, the learning process requires plenty of trial and error. It reinforces good behavior, and punishes bad behavior. (But not as severely as evolution does. If your—or a future robot’s—golf game is weak, you may need to improve your stance, your grip, or your swing, but you need not die.) What is more, such learning need not destroy old knowledge. As you learn to play golf, you can still sit, walk, run, or jump, even though the neural circuit responsible for these skills gets rewired. And the parallels to evolution don’t end here. The wires connecting logic gates are generic links, just as flexible as the peptide bonds of proteins, because the output of any one gate can be fed into any other gate.44 As with proteins, such links can be built, destroyed, and modified without much ingenuity to create electronic circuits that can be wired in astronomically many ways.45
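The analogy can even be run as a program. The sketch below is a made-up illustration, not any real chip's rewiring mechanism: it treats a list of gates and their input wires as a genotype, mutates one gate at a time, and selects circuits that better match a target function (here XOR), using a small gate repertoire chosen for the example.

```python
import random

random.seed(3)

GATES = {"AND": lambda a, b: a & b,
         "OR": lambda a, b: a | b,
         "NAND": lambda a, b: 1 - (a & b)}

def run(circuit, a, b):
    """Evaluate a feed-forward circuit. Each gate is (name, i, j), where i and
    j index earlier signals (0 = input a, 1 = input b, 2+ = earlier gates).
    The last gate's output is the circuit's output."""
    signals = [a, b]
    for name, i, j in circuit:
        signals.append(GATES[name](signals[i], signals[j]))
    return signals[-1]

def fitness(circuit, target):
    """How many of the four input pairs the circuit gets right."""
    return sum(run(circuit, a, b) == target(a, b) for a in (0, 1) for b in (0, 1))

def mutate(circuit):
    """Rewire one gate: change its type or one of its input wires."""
    new = [list(g) for g in circuit]
    k = random.randrange(len(new))
    field = random.randrange(3)
    if field == 0:
        new[k][0] = random.choice(list(GATES))
    else:
        new[k][field] = random.randrange(2 + k)   # any earlier signal
    return [tuple(g) for g in new]

target = lambda a, b: a ^ b                       # evolve toward XOR
circuit = [("AND", 0, 1)] * 4                     # arbitrary starting genotype
for _ in range(5000):
    candidate = mutate(circuit)
    if fitness(candidate, target) >= fitness(circuit, target):
        circuit = candidate
```

Because any gate's output can feed any later gate, every mutation yields another valid circuit: the generic wires play exactly the role of the peptide bond.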

 
