Life's Ratchet: How Molecular Machines Extract Order from Chaos


by Peter M. Hoffmann


  The very existence of molecular motors is also regulated by various molecular feedback loops and control molecules. For example, certain kinesins are needed only during a specific phase of cell division (mitosis). In this phase, these kinesins are manufactured at a higher rate. Once the next phase begins, other proteins direct the breakdown of these kinesins, which are then recycled. Mitosis is a complicated, highly choreographed process. Not only the presence of kinesins (which help to separate the chromosomes), but also their location must be regulated. Control proteins make sure the various kinesins do their work at the right place.

  Understanding the roles and regulation of molecular machines has been a boon for the pharmaceutical industry. Medical drugs, with few exceptions, work by inhibiting enzymes or molecular machines; in effect, they are artificial control molecules. The task of R&D personnel at major drug companies is to come up with chemicals that bind specifically to target proteins, blocking their activity, without binding to anything else, as stray binding would cause side effects. Drugs need to be specific.

  Kinesins are a target for cancer drugs. Cancer cells divide prodigiously, creating tumors. Eventually, the cells spread, causing metastasis, the main cause of cancer deaths. A drug called monastrol targets kinesin-5, which plays a major role during mitosis. As described in Chapter 7, kinesin-5 is a double motor that can bind to two microtubules at the same time. Kinesin-5 controls tension in the spindle, which separates chromosomes during cell division. When monastrol binds to kinesin-5, the drug causes a change in the structure of its ATP-binding site. The kinesin can still bind ATP, but it can no longer release ADP after hydrolysis. With ADP stuck in its ATPase pocket, the motor cannot obtain energy and falls dead. The spindle falls slack, and the cell cannot divide. The cancer cell, stuck in the middle of dividing, commits suicide.

  Side Effects

  At Wayne State University, the lab of my colleague Rafi Fridman studies the interaction of the collagen-eating, membrane-anchored enzyme MMP-14 (matrix metalloproteinase 14) and its inhibitor TIMP-2. My lab collaborates with Rafi, trying to measure the affinity of TIMP-2 for MMP-14 on living cells. This requires measurements at the single-molecule level. To do this, we attach TIMP-2 to an AFM tip and let it interact with MMP-14 on the surface of a living cell. Then we retract the cantilever to pull on the bond between MMP-14 and TIMP-2 and record the force needed to break the bond. After countless measurements and the proper statistics, we can determine the average lifetime of the bond between the two proteins.
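One common way to turn such pulling measurements into a bond lifetime is the Bell model, in which the lifetime falls off exponentially with the applied force. The sketch below is illustrative only: the parameters tau0_s and x_beta_nm are placeholder values, not the measured MMP-14/TIMP-2 numbers, and the lab's actual statistical analysis may differ.

```python
import math

def bond_lifetime(force_pN, tau0_s=1.0, x_beta_nm=0.3, temp_K=298.0):
    """Bell-model estimate of a bond's average lifetime under a constant
    pulling force: tau(F) = tau0 * exp(-F * x_beta / kT).
    tau0_s (unloaded lifetime) and x_beta_nm (distance to the transition
    state) are illustrative placeholders, not fitted parameters."""
    kT_pN_nm = 0.0138 * temp_K  # Boltzmann constant in pN*nm per kelvin
    return tau0_s * math.exp(-force_pN * x_beta_nm / kT_pN_nm)

# The harder the AFM tip pulls, the shorter the bond survives:
print(bond_lifetime(0.0))   # unloaded lifetime (equals tau0_s)
print(bond_lifetime(50.0))  # much shorter lifetime under a 50 pN load
```

Fitting this exponential to many rupture events at different loading rates is what lets a single-molecule experiment report an average lifetime at zero force.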

  The interaction of MMP-14 and TIMP-2 is of special interest because scientists previously discovered that TIMP-2 not only inhibits MMP-14, but also primes the enzyme to activate another, free-floating MMP, called MMP-2. In other words, the so-called inhibitor of MMP-14 inhibits the collagen-destroying activity of MMP-14, but at the same time activates another collagen-destroying machine, MMP-2. The inhibitor is not really an inhibitor, but rather switches from one method of destroying collagen to another. Why? This is a question of ongoing research.

  As mentioned in Chapter 7, MMPs play an important role in the motion of cells. In cancer cells, MMPs are often produced in high numbers and allow cancer cells to spread throughout the body. Because of this implication for cancer, MMPs have been a major target for drug development. Like monastrol, drugs that target MMPs are artificial inhibitors that stop MMPs from doing their work. When the first artificial inhibitors were developed, they worked very well—in a test tube. However, after lengthy approval processes, these drugs were tried on terminally ill cancer patients, and they did not work very well. Moreover, they caused serious side effects, especially excruciating joint pain. What happened? MMPs are used not only by cancer cells, but also by regular cells, including those that maintain the cartilage in our joints. Indiscriminately shutting down the activity of an important molecular machine is not the best way to battle cancer.

  Incidentally, the kinesin-5-targeting drug, monastrol, also ran into trouble. Besides causing a number of side effects (noncancer cells like to divide, too), it was ineffective in many types of cancer. It is not always clear why certain drugs do not work as hoped from laboratory experiments. Shutting down a single type of enzyme or molecular motor may not always be the key to finding a cure for cancer or other diseases. The molecules in our cells interact with each other in complicated ways that we are just beginning to understand. Our cells are well-regulated machines. Deciphering their regulation has proved difficult, as their complexity is staggering. This lack of understanding of the full complexity of our cells is the main reason why medical drugs often fall short of producing the desired results.

  Systems Biology and Regulatory Networks

  Regulation in biological systems proceeds on many levels. DNA contains information to make proteins. The types of proteins and the timing of their production are regulated by special DNA-binding proteins. Transcription and translation are regulated by control molecules. All of these processes involve positive and negative feedback loops. Enzymes and molecular machines, as we have seen, are regulated in a variety of ways, from the autoregulation seen in the “parking brake” of kinesin-1 to the complex, sometimes contradictory regulation of molecular machines involving multiple, co-interacting control molecules. On top of this, the cell surface contains numerous specialized receptors, which are controlled by external chemicals. Once a receptor binds to a chemical target, it undergoes a conformational change, which releases or binds a control molecule, setting off a cascade of feedback loops inside the cell, leading to a “macroscopic” response of the entire cell. These signaling pathways are a large part of what biochemists and cell biologists study today.

  The first regulatory pathway to be deciphered was the so-called lac operon in E. coli bacteria, for which Jacques Monod and François Jacob received the 1965 Nobel Prize in Physiology or Medicine. E. coli can live off a variety of “foods,” one of which is lactose (milk sugar). To break down lactose, three enzymes are needed. One of these enzymes is a molecular machine that pumps lactose into the cell. Since it takes resources to make these enzymes, it would not make much sense to produce them if there were no lactose present. In addition, the pump consumes precious energy. But how does DNA know if lactose is present? If lactose is present in the “broth” surrounding the bacteria, a lactose receptor on the cell’s surface becomes activated (it binds to lactose and sets off a chemical signal inside the cell through allostery). This activates the few lactose pumps present at the cell’s surface, and they begin to pump lactose into the cell. However, these few lactose pumps are not enough to take in all the lactose that is floating by. What to do? Make more pumps!

  The genes that encode the enzymes needed for lactose digestion form a unit called the lac operon, and they are preceded by a DNA sequence called the operator. The operator is a DNA patch to which a control protein (a repressor protein) can bind. When the repressor binds to the operator, the RNA polymerase, which transcribes DNA into RNA, is blocked, and transcription cannot proceed. No lactose-digesting enzymes are manufactured.

  Lactose, however, can bind to the repressor protein, causing it to change shape (via an allosteric interaction) so that the protein can no longer bind to the operator. Now the RNA polymerase is free to transcribe the lactose genes, and lactose-digesting enzymes and lactose pumps are produced in large numbers. Thus, the interaction between lactose, repressor, and operator makes sure that lactose-digesting enzymes are produced only when lactose is present. This is how the “computer logic” of our cells works. The manufacture of just about every protein is regulated by similar feedback loops.
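This “computer logic” can be caricatured in a few lines of code. The sketch below is a deliberately crude restatement of the logic just described, not a model of the real binding kinetics:

```python
def transcription_allowed(lactose_present: bool) -> bool:
    """Caricature of lac regulation: lactose binds and deforms the
    repressor, the deformed repressor releases the operator, and RNA
    polymerase is then free to transcribe the lactose genes."""
    repressor_on_operator = not lactose_present  # no lactose: repressor binds
    return not repressor_on_operator             # free operator: genes are read

# No lactose: the repressor blocks transcription.
print(transcription_allowed(False))  # False
# Lactose present: the lactose-digesting enzymes get made.
print(transcription_allowed(True))   # True
```

Stripped to this skeleton, the circuit is simply a NOT gate wired in series with another NOT gate, which is why the cell only pays for the enzymes when the food is actually there.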

  The study of how these feedback loops work is called systems biology. Where molecular biology takes chemical bonding for granted, systems biology takes molecular biology for granted and treats protein and DNA sequences as interacting mathematical entities—players in the computer program of our cells. In this way, scientists work their way up from atoms to molecules to proteins to networks to systems, and finally to an entire cell. Around the world, there are a number of groups trying to develop virtual cells—complete simulations of the regulatory networks of simple cells—to understand in detail how they operate.

  An important finding from such studies is that as the complexity of simulated networks increases, surprising new properties emerge. In a 1999 paper in the journal Science, Upinder S. Bhalla and Ravi Iyengar, two researchers from the Mount Sinai School of Medicine in New York City, simulated interacting signaling networks from their experimental studies of a variety of such networks operating in cells. Bhalla and Iyengar found that by linking different networks together (for example, one network produces a control molecule, which controls an enzyme in another network), new properties emerge that were not part of the individual networks. One such property is persistent activation. This property is the persistent production of a protein or control molecule, even after the initial stimulus that caused the production in the first place is long gone. Why would persistent activation be useful? Some processes in our cells take a long time—but at the same time, they may be triggered by a short-lived stimulus. Examples include development, where stem cells need to transform into blood, kidney, or brain cells. Another process where persistent activation is important is the formation of memory. To remember something, our brains must make physical changes to the structure and interaction between brain cells. These changes are triggered by sensory impressions, which become translated into chemical signals. Sensory impressions do not last forever, and neither do the chemical signals derived from them. Yet, we need to remember. Persistent activation makes this possible.

  Persistent activation requires a switch, which, once flipped, stays on for a long time. At the same time, the switch should not react too easily. After all, creating a permanent memory or turning a stem cell into a brain cell has serious consequences. One would not want these things to happen unless they were absolutely necessary. Therefore, the switch needs to flip on only if the signal is strong enough and persistent enough. Could simple molecular switches do the job? Molecular switches do not typically send sustained signals. Once they are switched on, they send their signal (releasing a molecule, for example), and that’s it.

  A better strategy is to have a self-activating feedback loop. This is what Bhalla and Iyengar found. When signaling networks interact in certain ways, they can create a situation where a chemical signal can become locked into a high-activity state—or in other words, where a signaling molecule is continuously produced in high numbers for a long time. This will only happen in response to strong-enough chemical signals. Once the network is activated, the activation is persistent, that is, it will stay activated until another signal switches it off. This bistability (in which the network has two stable states: low and high), the duration of the on state, and the threshold concentration needed to activate the network are emergent properties of the interacting networks. Like linking electronic transistors into a larger network in a computer, linking molecular switches together in an interacting network can create more complex functionality.

  The Physicist and the Biologist

  We have broken down life to its smallest parts: DNA, proteins, enzymes, molecular machines. The idea that we can understand how something works by reducing it to its parts is natural for a physicist. Some consider physics to ultimately be the quest to break things down to smaller and smaller constituents, until we find the one constituent or equation that explains everything. Biologists know that this approach does not work when studying life. There is no “life atom” or one formula to explain life.

  As we have seen, with the help of statistical mechanics and nanoscience, we can decipher how the directed activity of molecular machines emerges from the underlying atomic chaos. But understanding these machines is still a long way from providing a full understanding of how a living cell works. The next step is to understand how these machines interact in complex signaling and regulation networks. Moving to this level of understanding brings with it its own emergent properties, moving us closer to understanding how life works.

  Understanding the parts is crucial, but parts by themselves are not always sufficient to explain the whole. Complex interactions between parts create new processes, structures and principles that, while based materially on the underlying parts, are conceptually independent of them. This insight is what we call holism.

  For reasons mysterious to me, there exists a great debate among scientists of various stripes about what the correct approach to science should be—reductionism or holism? As I hope this book has shown, reductionism is essential if we want to understand life. Without it, scientists would have long ago stopped looking at smaller and smaller scales and would have missed the marvels of molecular machinery. At the same time, molecular machines don’t explain everything. Scientists must still answer the questions of how these machines interact and what roles they play in the complexity of the cell. The ultimate goal is always to explain the totality of life’s processes, from molecules to cells to organisms. Having taken the toy apart, we want to put it back together again. This is the way we learn how things work. Thus reductionism and holism are two sides of the same coin—they are both parts of what good science ought to be.

  The battle between holism and reductionism is, in some sense, the modern extension of the ancient battle of the atomists and vitalists. Postulating the existence of perpetually moving atoms, the ancient atomists explained the activity and change they saw in the world as the ultimately impenetrable interactions between atoms. The vitalists adopted a top-down view and declared that it is impossible to reduce life to physical forces, because seen from the outside, life appeared so different and so mysterious compared with the inanimate world. The fault lines of reductionism and holism run through all of science, but especially biology. Ecologists, for example, must think holistically about the interactions of many organisms in a complex environment, while at the other end of the spectrum, molecular biologists and biophysicists look at the smallest possible units of life.

  Ernst Mayr was a staunch defender of biological holism and one of the great evolutionary biologists of the twentieth century. Among his many achievements, he provided the best modern definition of species, the idea that species are separated by the inability to interbreed. Mayr wrote a number of wonderful books on the history of biology and evolution and was one of the most spirited defenders of Darwin’s theory. But he had (in my opinion) one curious flaw: He hated physicists.

  More generally, Mayr deeply disliked any reductionist approaches to biology. In a 2004 paper, he went as far as to make this claim: “To the best of my knowledge, none of the great discoveries made by physics in the twentieth century has contributed anything to an understanding of the living world.” Considering the advances we have discussed in this book, which involve twentieth-century physics such as fluorescence spectroscopy, nanotechnology, and X-ray diffraction, this is a curiously uninformed statement by such a great scientist. It seems that he was genuinely concerned that his beloved biology might be taken over by physicists. This fear was and remains unfounded.

  In all fairness, Mayr was able to explain more clearly than anyone else why there are differences between biology and physics. For him these differences were in the degree of complexity, the role of chance, the importance of evolutionary history, and the treatment of species as populations. As we have seen, chance and complexity are basic attributes of life. He was quite correct, for example, that much of biology is a question of contingency, of frozen accidents. Biophysics, for example, may explain how a ribosome translates the genetic code into a protein product, but the actual genetic code seems to be a pure accident. There seems to be no reducible physical reason why the genetic letters UUG (corresponding to the RNA bases uracil-uracil-guanine) should translate into a leucine protein subunit, while UGU should translate into cysteine. In most of physics, we don’t have such frozen historical accidents. Let copper crystallize from a melt, and it will always crystallize into a face-centered cubic crystal structure. The energy levels in every hydrogen atom are identical. The superconducting transition temperature of mercury is always the same. These things can be predicted by doing quantum mechanics. They happen in accord with fixed laws.

  But then again, there are many nonbiological frozen accidents in our universe. Our sun, the earth, the moon, and every mountain on our planet are the result of the vagaries of history. But none of them lies outside a physical explanation. We know the mechanisms that form stars, planets, and mountains. We just cannot predict that a particular mountain will be in a particular place a billion years from now.
