You Are the Placebo


by Dr. Joe Dispenza


  Beecher’s point was well taken. Initially, researchers expected that a study’s control group (the group taking the placebo) would remain neutral so that comparisons between the control group and the group taking the active treatment would show how well the active treatment worked. But in so many studies, the control group was indeed getting better—not just on their own but because of their expectation and belief that they might be taking a drug or receiving a treatment that would help them. The placebo itself might have been inert, but its effect was certainly not, and these beliefs and expectations were proving to be extremely powerful! So somehow, that effect had to be teased out from the data if that data was to have any real meaning.

  To that end, and heeding Beecher’s petition, researchers began making the randomized, double-blind trial the norm, randomly assigning subjects to either the active or the placebo group and making sure none of the subjects or any of the researchers themselves knew who was taking the real drug and who was taking the placebo. This way, the placebo effect would be equally active in each group, and any possibility that the researchers might treat subjects differently according to what group they were in would be eliminated. (These days, studies are sometimes even triple blind, meaning that not only are the participants and the researchers who are conducting the trial in the dark about who’s taking what until the end of the study, but the statisticians analyzing the data also don’t know until their job is done.)

  Exploring the Nocebo Effect

  Of course, there’s always a flip side. While suggestibility was garnering more attention because of its ability to heal, it also became apparent that the same phenomenon could be used to harm. Such practices as hexes and voodoo curses illustrated the negative side of suggestibility.

  In the 1940s, Harvard physiologist Walter Bradford Cannon (who had in 1932 coined the term fight or flight) studied the ultimate nocebo response—a phenomenon that he called “voodoo death.”2 Cannon examined a number of anecdotal reports of people with strong cultural beliefs in the power of witch doctors or voodoo priests suddenly falling ill and dying—despite no apparent injury or evidence of poison or infection—after ending up on the receiving end of a hex or curse. His research laid the groundwork for much of what we know today about how physiological response systems enable emotions (fear in particular) to create illness. The victim’s belief in the power of the curse itself to kill him was only part of the psychological soup that brought about his ultimate demise, Cannon said. Another factor was the effect of being socially ostracized and rejected, even by the victim’s own family. Such people quickly became the walking dead.

  Harmful effects from harmless sources aren’t restricted to voodoo, of course. Scientists in the 1960s coined the term nocebo (Latin for “I shall harm,” as opposed to “I shall please,” the Latin translation of placebo), referring to an inert substance that causes a harmful effect—simply because someone believes or expects it will harm her.3 The nocebo effect commonly pops up in drug studies when subjects who are taking placebos either just expect that there will be side effects to the drug being tested, or when the subjects are specifically warned of potential side effects—and then they experience those same side effects by associating the thought of the drug with all of the potential causations, even though they’ve not taken the drug.

  For obvious ethical reasons, few studies are designed specifically to look at this phenomenon, although some do exist. A famous example is a 1962 study done in Japan with a group of children who were all extremely allergic to poison ivy.4 Researchers rubbed one forearm of each child with a poison-ivy leaf but told them the leaf was harmless. As a control, they rubbed the child’s other forearm with a harmless leaf that they claimed was poison ivy. All the children developed a rash on the arm rubbed with the harmless leaf that was thought to be poison ivy. And 11 of the 13 children developed no rash at all where the poison had actually touched them.

  This was an astounding finding; how could children who were highly allergic to poison ivy not get a rash when exposed to it? And how could they develop a rash from a totally benign leaf? The new thought that the leaf wouldn’t hurt them overrode their memory and belief that they were allergic to it, rendering real poison ivy harmless. And the reverse was true in the second part of the experiment: A harmless leaf was made toxic by thought alone. In both cases, it seemed as if the children’s bodies instantaneously responded to a new mind.

  In this instance, we could say that the children were somehow freed from the future expectation of a physical reaction to the toxic leaf, based on their past experiences of being allergic. In effect, they somehow transcended a predictable line of time. This also suggests that by some means, they became greater than the conditions in their environment (the poison-ivy leaf). Finally, the children were able to alter and control their physiology by simply changing a thought. This astonishing evidence that thought (in the form of expectation) could have a greater effect on the body than the “real” physical environment helped to usher in a new era of scientific study called psychoneuroimmunology—the effect of thoughts and emotions on the immune system—an important segment of the mind-body connection.

  Another notable nocebo study from the ’60s looked at people with asthma.5 Researchers gave 40 asthma patients inhalers containing nothing but water vapor, although they told the subjects that the inhalers contained an allergen or irritant; 19 of them (48 percent) experienced asthmatic symptoms, such as restriction of their airways, with 12 (30 percent) of the group suffering full-blown asthmatic attacks. Researchers then gave the subjects inhalers said to contain medicine that would relieve their symptoms, and in each case, their airways did indeed open back up—although again, the inhalers contained only water vapor.

  In both situations—bringing on the asthma symptoms and then dramatically reversing them—the patients were responding to suggestion alone, the thought planted in their minds by the researchers, which played out exactly as they expected. They were harmed when they thought they’d inhaled something harmful, and they got better when they thought they were receiving medicine—and these thoughts were greater than their environment, greater than reality. We could say that their thoughts created a brand-new reality.

  What does this say about the beliefs we hold and the thoughts we think every day? Are we more susceptible to catching the flu because all winter long, everywhere we look, we see articles about flu season and signs about flu-shot availability—all of which reminds us that if we don’t get a flu shot, we’ll get sick? Could it be that when we simply see someone with flu-like symptoms, we become ill from thinking in the same ways as the children in the poison-ivy study who got a rash from the inert leaf or from thinking like the asthmatics who experienced a significant bronchial reaction after inhaling simple water vapor?

  Are we more likely to suffer from arthritis, stiff joints, poor memory, flagging energy, and decreased sex drive as we age, simply because that’s the version of the truth that ads, commercials, television shows, and media reports bombard us with? What other self-fulfilling prophecies are we creating in our minds without being aware of what we’re doing? And what “inevitable truths” can we successfully reverse simply through thinking new thoughts and choosing new beliefs?

  The First Big Breakthroughs

  A groundbreaking study in the late ’70s showed for the first time that a placebo could trigger the release of endorphins (the body’s natural painkillers), just as certain active drugs do. In the study, Jon Levine, M.D., Ph.D., of the University of California, San Francisco, gave placebos, instead of pain medication, to 40 dental patients who had just had their wisdom teeth removed.6 Not surprisingly, because the patients thought they were getting medicine that would indeed relieve their pain, most reported relief. But then the researchers gave the patients an antidote to morphine called naloxone, which chemically blocks the receptor sites for both morphine and endorphins (endogenous morphine) in the brain. When the researchers administered it, the patients’ pain returned! This proved that by taking the placebos, the patients had been creating their own endorphins—their own natural pain relievers. It was a milestone in placebo research, because it meant that the relief the study subjects experienced wasn’t all in their minds; it was in their minds and their bodies—in their state of being.

  If the human body can act like its own pharmacy, producing its own pain drugs, then might it not also be true that it’s fully capable of dispensing other natural drugs when they’re needed from the infinite blend of chemicals and healing compounds it houses—drugs that act just like the ones doctors prescribe or maybe even better than the drugs doctors prescribe?

  Another study in the ’70s, this one by psychologist Robert Ader, Ph.D., at the University of Rochester, added a fascinating new dimension to the placebo discussion: the element of conditioning. Conditioning, an idea made famous by Russian physiologist Ivan Pavlov, depends on associating one thing with another—like Pavlov’s dogs associating the sound of the bell with food after Pavlov started ringing it every day before he fed them. In time, the dogs were conditioned to automatically salivate in anticipation of a meal whenever they heard a bell. As a result of this type of conditioning, their bodies became trained to physiologically respond to a new stimulus in the environment (in this case, the bell), even without the original stimulus that elicited the response (the food) being present.

  Therefore, in a conditioned response, we could say that a subconscious program, which is housed in the body (I’ll talk more about this in the coming chapters), seemingly overrides the conscious mind and takes charge. In this way, the body is actually conditioned to become the mind because conscious thought is no longer totally in control.

  In the case of Pavlov, the dogs were repeatedly exposed to the smell, sight, and taste of the food, and then Pavlov rang a bell. Over time, just the sound of the bell caused the dogs to automatically change their physiological and chemical state without thinking about it consciously. Their autonomic nervous system—the body’s subconscious system that operates below conscious awareness—took over. So conditioning creates subconscious internal changes in the body by associating past memories with the expectation of internal effects (what we call associative memory) until those expected or anticipated end results automatically occur. The stronger the conditioning, the less conscious control we have over these processes and the more automatic the subconscious programming becomes.

  Ader started out attempting to study how long such conditioned responses could be expected to last. He fed lab rats saccharine-sweetened water that he’d spiked with a drug called cyclophosphamide, which causes stomach pain. After conditioning the rats to associate the sweet taste of the water with the ache in their gut, he expected they’d soon refuse to drink the spiked water. His intention was to see how long they’d continue to refuse the water so that he could measure the amount of time their conditioned response to the sweet water would last.

  But what Ader didn’t know initially was that the cyclophosphamide also suppresses the immune system, so he was surprised when his rats started unexpectedly dying from bacterial and viral infections. Changing gears in his research, he continued to give the rats saccharine water (force-feeding them with an eyedropper) but without the cyclophosphamide. Although they were no longer receiving the immune-suppressing drug, the rats continued to die of infections (while the control group that had received only the sweetened water all along continued to be fine). Teaming up with University of Rochester immunologist Nicholas Cohen, Ph.D., Ader further discovered that when the rats had been conditioned to associate the taste of the sweetened water with the effect of the immune-suppressing drug, the association was so strong that just drinking the sweetened water alone produced the same physiological effect as the drug—signaling the nervous system to suppress the immune system.7

  Like Sam Londe, whose story was in Chapter 1, Ader’s rats died by thought alone. Researchers were beginning to see that the mind was clearly able to subconsciously activate the body in several powerful ways they’d never imagined.

  West Meets East

  By this time, the Eastern practice of Transcendental Meditation (TM), taught by Indian guru Maharishi Mahesh Yogi, had caught on in the United States, fueled by the enthusiastic participation of several celebrities (starting with the Beatles in the 1960s). The goal of this technique, which involves quieting the mind and repeating a mantra during a 20-minute meditation session performed twice a day, is spiritual enlightenment. But the practice caught the attention of Harvard cardiologist Herbert Benson, who became interested in how it might help reduce stress and lessen the risk factors for heart disease. Demystifying the process, Benson developed a similar technique, which he called the “relaxation response,” described in his 1975 book by the same title.8 Benson found that just by changing their thought patterns, people could switch off the stress response, thereby lowering blood pressure, normalizing heart rate, and attaining deep states of relaxation.

  While meditation involves maintaining a neutral attitude, attention was also being paid to the beneficial effects of cultivating a more positive attitude and pumping up positive emotions. The way had been paved in 1952, when former minister Norman Vincent Peale published the book The Power of Positive Thinking, which popularized the idea that our thoughts can have a real effect, both positive and negative, on our lives.9 That idea grabbed the attention of the medical community in 1976, when political analyst and magazine editor Norman Cousins published an account in the New England Journal of Medicine of how he had used laughter to reverse a potentially fatal disease.10 Cousins also told his story in his best-selling book Anatomy of an Illness, published a few years later.11

  Cousins’s doctor had diagnosed him with a degenerative disorder called ankylosing spondylitis—a form of arthritis that causes the breakdown of collagen, the fibrous proteins that hold our bodies’ cells together—and had given him only a 1-in-500 chance of recovery. Cousins suffered from tremendous pain and had such difficulty moving his limbs that he could barely turn over in bed. Grainy nodules appeared under his skin, and at his lowest point, his jaw nearly locked shut.

  Convinced that a persistent negative emotional state had contributed to his illness, he decided it was equally possible that a more positive emotional state could reverse the damage. While continuing to consult with his doctor, Cousins started a regimen of massive doses of vitamin C and Marx Brothers movies (as well as other humorous films and comedy shows). He found that ten minutes of hearty laughter gave him two hours of pain-free sleep. Eventually, he made a complete recovery. Cousins, quite simply, laughed himself to health.

  How? Although scientists at the time didn’t have a way to understand or explain such a miraculous recovery, research now tells us it’s likely that epigenetic processes were at work. Cousins’s shift of attitude changed his body chemistry, which altered his internal state, enabling him to program new genes in new ways; he simply downregulated (or turned off) the genes that were causing his illness and upregulated (or turned on) the genes responsible for his recovery. (I’ll go into more detail about turning genes on and off in the coming chapters.)

  Many years later, research by Keiko Hayashi, Ph.D., of the University of Tsukuba in Japan showed the same thing.12 In Hayashi’s study, diabetic patients watching an hour-long comedy program upregulated a total of 39 genes, 14 of which were related to natural killer cell activity. While none of these genes were directly involved in blood-glucose regulation, the patients’ blood-glucose levels were better controlled than after they listened to a diabetes health lecture on a different day. Researchers surmised that laughter influences many genes involved with immune response, which in turn contributed to the improved glucose control. The elevated emotion, triggered by the patients’ brains, turned on the genetic variations, which activated the natural killer cells and also somehow improved their glucose response—probably in addition to many other beneficial effects.

  As Cousins said of placebos back in 1979, “The process works not because of any magic in the tablet, but because the human body is its own best apothecary and because the most successful prescriptions are filled by the body itself.”13

  Inspired by Cousins’s experience, and with alternative and mind-body medicine now in full swing, Yale University surgeon Bernie Siegel started to look at why some of his cancer patients with poor odds survived while others with better odds died. Siegel’s work defined cancer survivors largely as those who had a feisty, fighting spirit, and he concluded that there were no incurable diseases, only incurable patients. Siegel also began writing about hope as a powerful force for healing and about unconditional love, with the natural pharmacy of elixirs it provides, as the most powerful stimulant of the immune system.14

  Placebos Outperform Antidepressants

  The profusion of new antidepressants that appeared around the late 1980s and into the ’90s would next ignite a controversy that would ultimately (although not immediately) increase respect for the power of placebos. In researching a 1998 meta-analysis of published studies on antidepressant drugs, psychologist Irving Kirsch, Ph.D., then at the University of Connecticut, was shocked to find that in 19 randomized, double-blind clinical trials involving more than 2,300 patients, most of the improvement was due not to the antidepressant medications, but to the placebo.15

  Kirsch then used the Freedom of Information Act to gain access to the data from the drug manufacturers’ unpublished clinical trials, which by law had to be reported to the Food and Drug Administration. Kirsch and his colleagues did a second meta-analysis, this time on the 35 clinical trials conducted for four of the six most widely prescribed antidepressants approved between 1987 and 1999.16 Now looking at data from more than 5,000 patients, the researchers found again that placebos worked just as well as the popular antidepressant drugs Prozac, Effexor, Serzone, and Paxil a whopping 81 percent of the time. In most of the remaining cases where the drug did perform better, the benefit was so small that it wasn’t statistically significant. Only with severely depressed patients were the prescription drugs clearly better than placebo.
