Viruses, Pandemics, and Immunity
The spike protein of the hepatitis B virus could be mass-produced using recombinant DNA technology. But the protein was not the only component of the vaccine. Remember the immunologists’ dirty little secret: injecting a foreign protein by itself elicits no adaptive immune response. Other components must be added to activate the innate immune system, and these chemical components added to vaccines are called adjuvants.
Until we learned about the receptors of the innate immune system (see chapter 4), adjuvants were mostly formulated by trial and error. Some of the early adjuvants were killed bacteria; immunologists did not know it at the time, but killed bacteria stimulate certain innate immune receptors. Another adjuvant commonly used in many vaccines is aluminum salt. Exactly how it stimulates the immune system is not known, but it may function as an irritant that causes tissue damage and inflammation, which activates innate immunity. In the modern era, with our knowledge of innate immune receptors, drugs that target specific innate receptors can be used as adjuvants. But while this new knowledge serves as a guide, adjuvant formulation remains empirical.
Another example of a successful vaccine made using recombinant DNA technology is the one that protects against human papillomavirus (HPV) infection. HPV is an important cause of cervical cancer in women, and protecting against infection by this virus prevents the cancer. The spike protein of HPV produced using recombinant DNA technology aggregates into a structure that resembles the array of spikes on the virus, and this results in the generation of a strong protective antibody response.
Subunit vaccines can also be designed using another approach. A virus that can infect human cells is chosen; for example, one could choose a member of the adenovirus family of viruses, which cause mild colds and diarrhea. The genes of this virus are modified so that it cannot replicate after it infects a human cell and thus cannot cause illness. Using recombinant DNA technology, the genes encoding key proteins of the virus for which a vaccine is needed are then inserted into the DNA of the modified adenovirus. The modified adenovirus is called the vector, and the inserted genes are called the insert. When this vaccine is injected into the body, the vector enters human cells, and the inserted genes direct production of the corresponding proteins. Immune responses are thus elicited against these proteins. Vaccines designed this way do not need an adjuvant, because the vector is a live virus that induces innate immune responses on its own. This approach is being used for some of the COVID-19 vaccine candidates.
DNA and RNA Vaccines
Developing and deploying a manufacturing process for the subunit vaccines described above takes a long time, often years. DNA is transcribed into RNA, which is then translated into the corresponding protein. Technology for synthesizing DNA and RNA has advanced to the point where these molecules can be manufactured in a matter of days to weeks. So, it may be easier and faster to simply inject DNA or RNA into people. If the DNA or RNA can enter human cells, the cells would then make the corresponding proteins, which would elicit an immune response. This new vaccination strategy is currently being tested; the first COVID-19 vaccine to enter clinical trials used such an approach.
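For readers who like to see the idea concretely, the flow of genetic information that DNA and RNA vaccines exploit can be sketched in a few lines of code. This is only a toy illustration (the sequence and the deliberately tiny codon table are invented for this sketch, not taken from any real vaccine):

```python
# Toy sketch of the "central dogma": DNA is transcribed into messenger
# RNA, which is read three bases (one codon) at a time and translated
# into a chain of amino acids -- a protein.  The codon table below is
# a tiny fragment of the real one, covering only this example.

CODON_TABLE = {  # partial table, for illustration only
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP",
}

def transcribe(dna: str) -> str:
    """DNA coding strand -> mRNA: thymine (T) is replaced by uracil (U)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list:
    """Read the mRNA codon by codon until a STOP codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("ATGTTTGGCAAATAA")   # invented toy gene
print(mrna)                            # AUGUUUGGCAAAUAA
print(translate(mrna))                 # ['Met', 'Phe', 'Gly', 'Lys']
```

A DNA vaccine hands the cell the first string; an RNA vaccine skips the transcription step and delivers the mRNA directly.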
The technology for delivering RNA and DNA vaccines in a way that enables them to enter cells efficiently is still evolving. In current methods for RNA vaccines, the RNA encoding the protein of interest is encapsulated in a tiny particle. These particles are called nanoparticles because their size is measured in nanometers; at roughly a hundred nanometers in diameter, they are about a thousand times smaller than the width of a human hair. The nanoparticles are made up of lipids and other substances, the same kinds of molecules that make up the membranes of our own cells and of many viruses. They are designed to be preferentially eaten up by Metchnikoff’s phagocytic cells, but they can also enter other cells. Once inside a cell, the RNA is translated into the corresponding proteins, which can then potentially elicit immune responses.
RNA vaccines have the advantage that an adjuvant is not needed. RNA is usually present only inside cells, so when innate immune receptors detect RNA outside cells, it is read as the signature of a foreign invader and an innate immune response is activated. The “secret sauce” that purportedly makes some RNA vaccines work well is how the nanoparticle is formulated and how the RNA is chemically modified so as not to induce too strong an innate immune response.
DNA is much more stable than RNA, and so does not need to be encapsulated in a nanoparticle before it is administered. Optimizing approaches to deliver DNA efficiently into cells, and whether an adjuvant is necessary, are issues that are currently being explored.
It should be noted that, as of mid-2020, no DNA or RNA vaccine has been approved for human use.
Clinical Trials of Vaccine Candidates
The development of a new vaccine begins with what is called the discovery phase. During this time, a particular vaccine concept is chosen (e.g., inactivated virus, subunit vaccine, or RNA) and then tested in small animals, like mice. If the vaccine concept seems promising, it is tested in larger animals. It is important that the chosen animal model recapitulate the symptoms and stages of the disease observed in humans. Monkeys, being primates like us, are similar to us in several ways, and so are often chosen to test vaccine concepts. Ferrets infected with the influenza virus exhibit symptoms very similar to those in humans, so they are often used to test concepts for influenza vaccines and therapies. During this stage of vaccine development, the data generated in animals are also used to estimate what may be a safe and effective dose for the vaccine. If all these preclinical studies go well, the concept is ready for human clinical trials.
Clinical trials are carried out in three phases. In phase I, the safety of the vaccine is tested in a relatively small number of healthy humans. Based on animal studies, a range of doses is tested to determine the doses at which the vaccine can be administered without side effects. If the vaccine is shown to be safe in phase I, then in phase II a specific dose is chosen and the ability of the vaccine to elicit the desired immune responses is tested. For example, vaccinated people could be tested to determine whether the vaccine elicited antibodies that can block the virus from infecting human cells; this is done by testing, in the laboratory, antibodies generated upon vaccination. It is also important to determine whether antibodies were generated at levels sufficient to be protective. In the case of an ongoing pandemic, the level of antibodies required for protection from infection may not be known when clinical trials start; this was the case when clinical trials for COVID-19 vaccines began. Sometimes phases I and II of a clinical trial can be combined to accelerate vaccine development: as different vaccine doses are tested, the antibody levels and their neutralizing ability can be assessed at the same time.
After successful completion of phases I and II of the trial, the critically important phase III starts. This is when the efficacy of the vaccine in preventing infections in humans is tested. The gold standard of phase III clinical trials is what is called a double-blind trial. The people who enroll in the clinical trial are divided into two groups. One group is given the vaccine and the other is given a harmless substance (called a placebo). Neither the enrollees in the trial nor the physicians involved know who has received the vaccine and who is in the placebo group. Double-blind trials aim to minimize bias. Everyone enrolled in a phase III clinical trial is monitored for infection. The difference in infection rates and severity of illness between the group that received the vaccine and the placebo group determines the efficacy of the vaccine.
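The efficacy calculation itself is simple arithmetic: the relative reduction in the infection (“attack”) rate between the vaccinated and placebo groups. A minimal sketch, with invented numbers that do not come from any real trial:

```python
# Illustrative sketch: vaccine efficacy is conventionally defined as
#   efficacy = 1 - (attack rate in vaccine arm / attack rate in placebo arm).
# All figures below are hypothetical.

def vaccine_efficacy(infected_vax, n_vax, infected_placebo, n_placebo):
    """Relative reduction in infection rate due to the vaccine."""
    rate_vax = infected_vax / n_vax
    rate_placebo = infected_placebo / n_placebo
    return 1 - rate_vax / rate_placebo

# Hypothetical trial: 10,000 people per arm; 10 infections among the
# vaccinated versus 100 among the placebo group.
print(f"{vaccine_efficacy(10, 10_000, 100, 10_000):.0%}")  # 90%
```

Severity of illness in each arm is tracked in the same spirit, comparing outcomes between the two groups.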
Phase III vaccine trials are enormously complex and expensive. When the efficacy of a new therapeutic for a disease like cancer is tested, everyone participating in the clinical trial has the disease. If there are 200 people enrolled in the trial, we assess the efficacy of the drug in a population of 200 people. When a vaccine is tested, at the beginning of the clinical trial, everyone enrolled is healthy. During the trial period, only a small fraction of these people will be exposed to the virus. Let’s say that the natural prevalence of the disease that a virus causes is 2 percent. If 200 people are enrolled in the trial, only four individuals are likely to be infected during the course of the trial. So, the efficacy of the vaccine is really being tested in only four people. The statistical accuracy of the vaccine trial is therefore going to be much lower than that for the analogous cancer drug trial, even though both trials enrolled the same number of people. This is why phase III vaccine trials have to enroll very large numbers of people in both the group that gets the vaccine and the placebo group. By conducting clinical trials in areas with a high prevalence of the disease, the number of people enrolled in the trial can be reduced because more people are likely to be exposed to infection.
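The arithmetic behind this can be made explicit with a short sketch. The 150-infection target below is a hypothetical round number chosen for illustration, not a regulatory requirement:

```python
# Back-of-the-envelope sketch of trial-size arithmetic: with a disease
# prevalence of 2 percent during the trial window, a 200-person trial
# expects only a handful of infections, which is why real phase III
# vaccine trials enroll tens of thousands of people.

def expected_infections(enrolled: int, prevalence: float) -> float:
    """Expected number of infections among enrollees absent any vaccine effect."""
    return enrolled * prevalence

def enrollment_needed(target_infections: int, prevalence: float) -> int:
    """Enrollment required to expect a given number of infections."""
    return round(target_infections / prevalence)

print(expected_infections(200, 0.02))  # 4.0 -- the four people in the text
print(enrollment_needed(150, 0.02))    # 7500 enrollees per arm
print(enrollment_needed(150, 0.10))    # 1500: higher prevalence shrinks the trial
```

The last line shows, in numbers, why running trials in high-prevalence areas reduces the required enrollment.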
Many vaccine candidates that aim to prevent SARS-CoV-2 infection will be tested simultaneously in a short time, and enrolling sufficiently large numbers of people for each vaccine trial is a challenge. For this reason, some have wondered whether, given the urgent need for a vaccine that can prevent COVID-19, a different kind of trial called a “challenge trial” should be carried out. In such a trial, healthy volunteers are vaccinated and then deliberately infected with the virus. Now, analogous to drug efficacy trials, everyone enrolled in the trial is infected, so the efficacy of the vaccine can be determined with high statistical accuracy from small numbers of enrollees. Since effective treatments for COVID-19 are not available, however, challenge trials present obvious ethical concerns.
Another critically important part of clinical trials and the deployment of a successful vaccine is manufacturing the product that goes into humans. Strict regulations, called “good manufacturing practices” (GMP), govern how such products are manufactured. In the United States, the Food and Drug Administration (FDA) enforces GMP standards in order to ensure the safety of pharmaceuticals. It usually takes a long time to develop the manufacturing process for a new drug or vaccine. This is especially true if there is no precedent for manufacturing a particular type of vaccine at large scale (as is the case with DNA or RNA vaccines). Typically, a manufacturing process involves many steps; each step has to be optimized, and methods to test the quality of the product have to be developed. The entire manufacturing process has to be developed before filing an application for approval by the FDA. Furthermore, as the clinical trial moves through different phases, the manufacturing process has to be scaled up. Changes to the core manufacturing process at this stage require a new application for approval, which can result in considerable delays.
With this general description of how vaccines work and how they are developed, let us first look to history and learn about how polio vaccines were developed. Then, we will describe some of the efforts underway to develop vaccines that may protect us from SARS-CoV-2 and HIV infections.
Examples of Vaccine Development
Salk, Sabin, and Polio Vaccines
The poliovirus has been circulating in humans for centuries, with evidence of polio found in Egyptian mummies. Poliovirus is highly infectious and spreads through contaminated food and water. Most infections result in mild flu-like symptoms or no symptoms at all, but in a small proportion of cases (about one in 200), infection causes muscle weakness or paralysis. In the twentieth century, outbreaks of polio began to occur with regularity. In many cases, children would lose the ability to walk or lacked the muscle strength to breathe. Interestingly, these outbreaks occurred more frequently in developed nations, like the United States and the Scandinavian countries.
It is thought that before good sanitation and clean water supplies were available, most infants were likely to be infected with poliovirus from contaminated water. Infants were able to control the virus because of protective antibodies acquired through their mother’s milk, which mitigated the severity of disease. The infants’ own immune systems also responded to the virus, and so they acquired lifelong immunity. As hygiene and sanitation improved in some nations, infants were less likely to be infected by poliovirus. This was probably especially true for those born into wealthy families. After breastfeeding stopped, the mother’s antibodies were no longer available for protection, so people exposed to the virus at an older age had no immunity to it. It is thought that this increased the risk of severe disease and paralysis at older ages.
In 1916, a major epidemic occurred in the United States, with the epicenter being in New York City. Approximately 27,000 people fell ill, 6,000 died, and many children were paralyzed. As not much was known about the virus, and most infected people were asymptomatic, it seemed that randomly selected children suddenly became paralyzed. This situation was frightening. In 1921, Franklin Delano Roosevelt was infected with polio at age 39. Roosevelt had just lost the election for vice president of the United States. That a wealthy and powerful American politician could be afflicted by the disease and become paralyzed exacerbated the fear. Cases of paralysis from polio grew in number each summer, causing parents to dread summer vacation for their children. Many parents forbade their children from going to swimming pools, the beach, movie theaters, and bowling alleys. To address the national health crisis, Roosevelt started a philanthropic organization whose major goal was to develop a polio vaccine. This organization came to be called the March of Dimes.
Two New Yorkers and graduates of New York University medical school, Jonas Salk and Albert Sabin, would take two different approaches toward developing a vaccine for polio and in the process would become bitter rivals. The rivalry would pit those who believed in inactivated vaccines against those who believed in live attenuated vaccines. Salk and Sabin were not the first to advocate for these two different approaches for developing a polio vaccine. But when these strategies were first tested in the 1930s, clinical trials using both types of vaccines were believed to have caused polio. These events dampened enthusiasm for any further trials for the next 20 years.
As we described in the previous chapter, in 1949, Enders, Robbins, and Weller learned how to grow poliovirus in the laboratory. Salk immediately took advantage of this breakthrough. He scaled up production of the virus, and then determined just the right amount of formaldehyde required to inactivate it while keeping it intact. The March of Dimes decided to use all its resources to back the development of a polio vaccine based on Salk’s inactivated virus. Given the national anxiety about polio, the US media focused on Salk’s work. In a clinical trial, Salk was able to establish that his vaccine was safe, and also to determine the vaccine dose required to elicit an antibody response. In three short years, phases I and II of the trial were completed. In 1953, Salk announced that he was ready to test the efficacy of his vaccine.
The decision to start a large clinical trial was controversial. Enders and Sabin both questioned the safety of an inactivated vaccine, as well as whether an antibody response was a meaningful surrogate for protection from infection. Many clinicians also felt that a double-blind clinical trial was unethical as individuals in the placebo group would not benefit. Others were concerned that mostly children would be enrolled in the trial. Some worried that the wealthy would be more likely to volunteer their children since the disease afflicted them more, and this would bias the study. In the end, the phase III clinical trial used multiple approaches.
The clinical trial was an organizational tour de force. About two million children, almost all between 6 and 8 years of age, were enrolled. Since polio infections occurred mainly in the summer, all vaccinations needed to be completed before the end of the 1954 school year. The trial was conducted in counties with high rates of infection. One trial was a double-blind trial, with neither the physicians nor the children knowing which was the placebo group. In the other trial, the enrollees were first-, second-, and third-grade elementary school children, and only the second graders were vaccinated. Since no single company was able to manufacture the number of vaccine doses needed for the trial, many manufacturers were used. Vials of vaccines from different manufacturers were labeled similarly, but because there could be differences in product quality, the origin of each dose needed to be monitored.
At the end of the summer, the trial ended. Because of the massive amounts of data obtained, the computer company IBM was invited to help analyze the data. Finally, in the spring of 1955, on the tenth anniversary of Roosevelt’s death, the March of Dimes announced the exciting results of the trial, which showed that the Salk vaccine worked.
Almost ten years older than Jonas Salk, Albert Sabin greeted these results with mixed emotions. Sabin had been working on polio for almost his entire career. He was the one who recognized that poliovirus first infects the intestines, entering the body through fecally contaminated food or water. After multiplying in the intestine, the virus spreads to the blood before being cleared by the immune response. In some cases, the virus is able to enter the nervous system from the blood, resulting in paralysis. Based on this work, Sabin believed that a good vaccine needed to provide protective immunity in the intestinal tract.
Sabin spent years weakening or attenuating the poliovirus by growing it repeatedly in different animals and in cells in the laboratory. Eventually, he was able to isolate a weakened form of the virus that he felt was safe to use in humans. The United States and the March of Dimes felt that Salk’s vaccine had solved the polio problem and there was no need for another vaccine. So, Sabin turned to other countries for support. In the Soviet Union, millions of people participated in a clinical trial. With its success established, the Soviet Union began manufacturing Sabin’s vaccine. It is remarkable that at the height of the Cold War an American polio vaccine got its first foothold in the communist world. Eventually, Sabin’s vaccine would be approved for use in the United States in 1961 and, in a victory for Sabin, replaced Salk’s vaccine in 1962.