But Snow persisted. He showed that the workhouse near Broad Street, where almost no residents got sick, had its own independent well. He pointed out that the workers at the brewery near Broad Street, where nobody got sick, were allowed to drink all the beer they wanted, and he suspected something about the beverage prevented the disease (during brewing, the beer wort is boiled for an hour, killing most bacteria). Perhaps most tellingly, he observed that all the residents near the Carnaby Street pump who got sick were the very ones who traveled to fetch the supposedly clean water at the Broad Street pump.
Eventually, the council relented and granted him permission to shut down the well. Snow immediately removed the handle from the Broad Street pump, making it impossible to get water … and that was the end of the cholera epidemic in Soho.
We now know that the Broad Street well was contaminated with the pathogen Vibrio cholerae, a bacterium that infected residents with every gulp. Yet, even without this knowledge, Snow’s original method of investigation—focusing on both geography and population—provided an effective means to control the disease. This was the first scientific example of epidemiology, the study of patterns of disease in populations. Today, John Snow is regarded as the father of epidemiology.
In some sense, Snow was quite lucky. Unlike an experiment (which can demonstrate cause and effect), an epidemiological study cannot prove causality. It can only demonstrate a relationship—in Snow’s case, a relationship between the location of victims and the location of water pumps. It could have been the case that something other than the water or the water pump was causing the disease; there was no way to know for sure solely using Snow’s map. Though Snow’s conclusion—that there was a contaminant in the Broad Street well—turned out to be correct, epidemiological studies are more susceptible than experiments to misleading findings.
As one example, epidemiological studies in the 1930s found an extremely high correlation between the consumption of refined sugar and the incidence of polio. Does eating sugar cause polio? Not at all. Polio is caused by a virus that is transmitted through drinking water, similar to cholera. So what is the connection to refined sugar?
Babies are born immune to polio because they inherit protective antibodies from their mothers. However, these antibodies wear off after a few months. If you are exposed to polio while you still possess your mother’s antibodies, you will not get sick. Instead—and rather remarkably—the infection induces your immune system to produce its own antibodies, which will then protect you against polio for the rest of your life.
If, on the other hand, you are exposed to polio after your mother’s antibodies wear off, you will develop the full-blown disease. These individuals still produce their own antibodies after the infection strikes, but in many cases this happens too late to protect them from the worst ravages of the disease—lifelong paralysis. Thus, if you contract polio as an infant, you will experience a barely noticeable infection. If you contract polio as a young child or adult, you will suffer devastating effects. In poor countries with inferior sanitation, almost everyone is exposed to polio during their infancy. No problem; they still have mom’s antibodies. But before the development of a polio vaccine, in developed countries with excellent sanitation, people usually did not encounter the polio virus until they had reached late childhood or adulthood. Catastrophe.
So what is the sugar connection? In the 1930s, when the epidemiological study was conducted, citizens of wealthy countries (with superior sanitation) enjoyed the luxury of eating refined sugar. In contrast, people in poor countries (with dismal sanitation) could not afford refined sugar. Correlation, not cause and effect.
On the other hand, epidemiology can also generate powerful new insights that overturn conventional medical wisdom—and can lead to unexpected new opportunities for drug hunting. A good example of this is a doctrine-shattering revelation about hypertension divulged by one of medicine’s most famous epidemiological studies. You probably know that hypertension—high blood pressure—is unhealthy and merits treatment. But until the 1960s, many physicians actually held the opposite view, a conviction reflected in the antiquated medical term “essential hypertension”: for decades, the medical establishment thought that hypertension was essential for maintaining good health. John Hay, professor of medicine at Liverpool University, expressed the prevailing sentiment when he wrote in 1931, “there is some truth in the saying that the greatest danger to a man with a high blood pressure lies in its discovery, because then some fool is certain to try and reduce it.”
Doctors believed that hypertension was a kind of natural compensatory mechanism that kept the heart pumping properly. President Franklin Delano Roosevelt was a lifelong hypertensive patient, but his doctors feared that it might be dangerous to lower his blood pressure, so they left it alone. FDR died from a stroke during his fourth term—almost certainly the consequence of untreated hypertension. But the fallacious notion of “essential hypertension” was finally disproved by the longest running epidemiological study of all time, the Framingham Heart Study.
Launched in 1948, the Framingham Heart Study initially tracked 5,209 residents of the town of Framingham, Massachusetts, a small working-class municipality about twenty miles west of Boston. Its mission was (and still is) the identification of risk factors associated with the development of cardiovascular disease, one of the leading killers in the 1940s. It was the Framingham Heart Study that first demonstrated the effects of diet and exercise on the prevention of heart disease.
Like John Snow, the original Framingham investigators were skeptical about the prevailing medical theories of their time. Most doctors believed that heart disease was a natural consequence of aging and, as such, finding a medicine to prevent heart disease would be as likely as finding the Fountain of Youth. In contrast, the Framingham scientists speculated that cardiovascular health was influenced by lifestyle and environment. They hoped that a large-scale epidemiological study might identify these lifestyle and environmental factors and lead to new methods of intervention to reduce the risk of cardiovascular disease and strokes.
The investigators knew that the study would need many years before it would be possible to draw firm conclusions, and consequently the first reliable findings were not reported until the early 1960s, more than a decade after the start of the Framingham Heart Study. Among other findings, they showed that stroke was correlated with three separate physical conditions: clogged arteries (atherosclerosis), elevated serum cholesterol (hypercholesterolemia) … and hypertension.
Since the Framingham study—like all epidemiological research—was correlational rather than causal, it was not clear if hypertension actually caused strokes, or if there was some other shared cause that produced both hypertension and strokes, in the same way that eating refined sugar and contracting polio both resulted from first-world lifestyles in the 1930s. Some doctors critical of the Framingham Heart Study, for example, suggested that both hypertension and strokes were inevitable side effects of aging. But there was an unexpected source of support for the startling idea that “essential hypertension” might not be so essential after all—a drug known as Diuril.
In the 1950s, Merck boasted a program to hunt for compounds that inhibited an enzyme called carbonic anhydrase. These inhibitors reduced excess blood acidity, a common medical condition that often resulted from kidney or lung problems. For good health, the acidity of our blood must stay within a very narrow range; otherwise we experience headaches, dizziness, or exhaustion, or—if our blood acidity rises particularly high—we might slip into a coma. Carbonic anhydrase inhibitors helped restore blood acidity to a healthy level, but these drugs also provoked an unanticipated side effect. They made patients pee. Physicians call such pee-inducing drugs diuretics.
Increased urination can lower blood volume, and therefore can lower blood pressure. (When there is less fluid circulating in your blood, your heart does not need to work as hard to pump blood through your body, which reduces your blood pressure.) Thus, Merck’s carbonic anhydrase inhibitor drugs not only reduced blood acidity (Merck’s original objective)—they unintentionally reduced hypertension, too.
Of course, at the time there was no perceived medical need to reduce hypertension. But since Merck now had in their possession a set of diuretic drugs, they looked for other reasons that a patient might want to increase her rate of urination. They soon identified another use for their carbonic anhydrase inhibitor drugs: helping patients who suffered from edema. Edema refers to a swelling of the body produced by the abnormal accumulation of fluid beneath the skin and in the cavities of the body. For example, pulmonary edema is a swelling of the lungs that often occurs when the heart becomes too weak to effectively pump blood out of the lungs, which causes fluid to accumulate in the air spaces in the lungs. Merck realized that carbonic anhydrase inhibitors would be a useful treatment for pulmonary edema, since lowering a patient’s blood volume through increased urination would (1) reduce the amount of fluid available to pool around the lungs, and (2) reduce total blood volume, making it easier for the heart to pump blood out of the lungs.
It was at this moment that serendipity struck. While Merck was working to find the most potent and efficacious carbonic anhydrase inhibitor, they stumbled upon a drug that did not inhibit carbonic anhydrase yet was an even more powerful diuretic than their existing drugs. They eventually named it Diuril. They had no idea how it worked, but when Merck tested Diuril on patients suffering from pulmonary edema, they found it to be safe and extremely effective. Thus, the hunt for a drug to treat blood acidity led to an entirely new kind of drug that treated pulmonary edema, a completely different condition. But that was not the end of the story. A Merck scientist named Karl Beyer thought that Diuril could be used for yet another purpose—to “treat” hypertension.
Of course, at the time, the idea of treating hypertension was akin to how we might consider “treating” yawning—why would you want to mess with something so natural and normal? Even so, there was a minority of physicians who suspected that high blood pressure was dangerous rather than a marker of good health. Beyer quietly passed Diuril to his colleague Bill Wilkerson, a physician, and asked Wilkerson to slip the drug to a few hypertensive patients to see what happened. As expected, their blood pressure went down. Beyer knew then that Diuril could be the first clinically effective anti-hypertensive drug—but there was not yet a market for such a medication. When Diuril went on sale to the public in 1958, its primary use was as a treatment for edema.
Nevertheless, other drug companies noticed that Merck had created an effective anti-hypertensive drug and—fearing that they might lose out on some unknown future market opportunity—tried to develop their own. This led to an entire class of Diuril copycat drugs known as the thiazides that served as anti-hypertensives; within a few years after Diuril came out, more than six thiazides had been approved by the FDA.
At first, these diuretic anti-hypertensive drugs were not used very often. But then the first round of findings from the Framingham Heart Study came out, showing a link between hypertension and stroke. Even though many physicians reacted to this finding with skepticism, other doctors knew that safe and effective drugs—the thiazides—were available to lower blood pressure, and they decided that prescribing these drugs to their patients with high blood pressure presented a favorable risk-to-reward ratio. If the Framingham link between hypertension and stroke was actually due to cause and effect, well, the thiazides would probably reduce their hypertensive patients’ chances of a stroke. If the link was not causal, on the other hand, the physicians calculated that they would be doing little harm by prescribing the thiazides. The FDA also supported the prescription of the various anti-hypertensive drugs to patients with high blood pressure, since it realized that the only way scientists could establish a causal link between hypertension and stroke (rather than the correlational link in the Framingham Study) was by actually reducing hypertensive individuals’ blood pressure and observing what kind of effect it had—by conducting an ad hoc experiment, in other words.
The Centers for Disease Control and Prevention, which monitored the incidence of stroke in the national population, soon noticed a clear reduction in the number of people suffering strokes—and determined that this reduction was due to the increase in patients taking anti-hypertensive drugs. The medical establishment quickly changed course and began recommending that high blood pressure be treated. “Essential hypertension” became “unhealthy hypertension.” This was one of the first cases in which epidemiology and Big Pharma worked hand in hand to overturn conventional wisdom and produce a dramatic shift in attitude toward a major medical condition, saving countless lives. The incidence of stroke in the US declined by almost 40 percent between 1955 and 1980.
Now that it was clear that anti-hypertensive drugs were beneficial, drug hunters embarked on the search for the perfect anti-hypertensive drug. The thiazides were only modestly effective in lowering blood pressure and had one manifestly undesirable side effect—frequent urination. If someone could come up with a way to reduce blood pressure in a more effective manner—without unpleasant side effects—there would be tremendous profit potential, since patients who needed anti-hypertensives would have to take them every day of their lives. And there was such a someone—a man by the name of James Black.
James Black was an unlikely drug hunter. Born in 1924 in the small Scottish town of Uddingston, Black was an excellent student and studied medicine at the University of St. Andrews. Unfortunately for Black, by the time he graduated he had accumulated large debts. He had little choice but to take the best-paying job available, a teaching post at the University of Malaya in Singapore. He was finally able to return to Scotland by joining the faculty of a veterinary school. He attempted to make the best of his unfavorable professional situation and began to study adrenaline’s effect on the human heart, particularly in people suffering from angina.
You are likely familiar with adrenaline’s role in the fight-or-flight response—if you encounter something dangerous, like a stranger with a gun, you experience a surge of adrenaline that makes you hyper-alert and ready for action. But adrenaline also serves another physiological role, as a hormone that regulates our blood pressure. Thus, Black concluded that any drug that blocked adrenaline should also lower blood pressure. Armed with this promising idea, Black approached the British company ICI Pharmaceuticals in 1958 and applied for a job as a drug hunting scientist. Despite the fact that Black was a veterinary professor without any pharmaceutical experience, he had an excellent reputation as a researcher and was hired by ICI, where he quickly set to work looking for compounds that might block the effects of adrenaline.
It was known that there were two different types of adrenaline receptors, known as alpha receptors and beta receptors. Studies revealed that the beta receptors were the ones involved in the regulation of blood pressure. Black surmised that if he could block a person’s beta receptors, he would be able to reduce her blood pressure. The challenge, though, was to figure out how to block the beta receptors without blocking the alpha receptors, which were molecularly quite similar and which controlled other physiological functions unrelated to blood pressure. Black set to work on finding a compound that would differentiate between the two types of adrenaline receptors, and in 1964 he discovered propranolol, a drug that selectively blocked beta adrenaline receptors. This was the world’s first anti-hypertensive “beta blocker.”
Propranolol reduced blood pressure without the diuretic effects of the thiazides. It rapidly became one of the best-selling drugs in the late 1960s and 1970s and was prescribed throughout the world. For his groundbreaking work, Black received the Nobel Prize in Medicine in 1988.
Even though the beta blockers were a clear improvement over the thiazides, they still harbored two major flaws. Beta adrenaline receptors are also found in the lungs, where they regulate the size of the airways. Blocking the beta receptors in the lungs causes the airways to constrict. (Indeed, many modern inhalers used to treat asthma attacks contain drugs that stimulate the beta receptors in the lungs.) Thus, propranolol and other early beta blockers featured a very untoward side effect—they made it more difficult to breathe. Treating an asthma patient with beta blockers can be downright dangerous. In addition, beta blockers elicit another side effect in men that may be far less physically risky than constricted airways but can be just as psychologically harrowing: impotence.
Thus, every available anti-hypertensive exhibited some meaningful shortcoming. The Holy Grail of anti-hypertensive drugs remained elusive. But I worked at the place where it was finally found. In the early 1980s, during my first pharmaceutical industry job with Squibb, I became acquainted with two snake charmers, Dave Cushman and Miguel Ondetti. They were drug hunters at Squibb who happened to be very interested in the venom produced by pit vipers. The venom of these snakes knocks out prey by drastically reducing the victim’s blood pressure, rendering it unconscious. Cushman and Ondetti reasoned that it should be possible to isolate the compound in pit viper venom that reduced blood pressure and convert it into an anti-hypertensive drug.
They started by studying one of the venom’s most active components, a substance known as teprotide. They discovered that teprotide inhibited an enzyme in the body known as ACE (angiotensin-converting enzyme). Even though adrenaline contributes to the regulation of blood pressure, we now know that ACE serves as the true “master controller” of our blood pressure. In effect, by inhibiting ACE, snake venom shuts down the body’s ability to control blood pressure, and without this control, blood pressure plummets.