The Patient Equation

by Glen de Vries


  But that doesn't mean it's the only possible approach. The technologies and connectivity that are enabling so many of the therapeutic breakthroughs and new measurements in today's world might also be able to change the way we structure trials. Perhaps we can flip the current approach on its head and make studies truly patient‐centric. Efforts like these are happening as we speak.

  Anthony Costello is senior vice president, Mobile Health at Medidata and leads our involvement in the ADAPTABLE trial (Aspirin Dosing: A Patient‐Centric Trial Assessing Benefits and Long‐Term Effectiveness), a real‐world trial that brings the study to the patient instead of the other way around, sponsored by the Patient‐Centered Outcomes Research Institute (PCORI). In an interview, Costello talked about the study's transformative approach to recruitment—patients who receive their care at a PCORnet site (The National Patient‐Centered Clinical Research Network, covering more than 68 million patients nationwide8) are identified through their electronic medical records, and sent an invitation with a code (a “golden ticket”) that allows them to log onto the study website and sign up to participate.9

  This is the beginning of how the study turns on its head the entire idea of how we recruit for trials. Instead of finding investigators who then try to find participants, here the participants are effectively the ones who enroll themselves—and in this case, thanks to the power of a huge network of physicians, they are prescreened in advance to make sure they meet the study criteria, and provided with everything needed for them to enroll and participate. A virtual site is created around them.

  ADAPTABLE and other virtual trials like it prove that you don't need a site to be the center of recruiting and treatment. These kinds of studies will continue to proliferate and become more of a normal practice in the future—and make everyone in the system better off, thanks to technology.

  The patient's burden is lowered, certainly—they don't have to travel to clinics as often, and the study is easier to fit into their daily lives. Anecdotally, ADAPTABLE has seen surprisingly high engagement, with patients staying compliant with the study's requirements and fewer of them dropping out (dropout is a huge problem in many studies). Costello thinks this is at least partly because of how easy it is for patients to participate under this virtual model.

  The companies conducting the research (or, in the case of PCORI and ADAPTABLE, the nonprofit organization conducting it) also benefit. Costs for recruiting go down, and time to recruit goes down as well. ADAPTABLE has 15,000 patients participating, a number that would be orders of magnitude harder to achieve in a traditional trial.

  The outputs of the study are better as well. It's easier to get the kind of continuous monitoring previously discussed.

  But this isn't an all‐or‐nothing option for conducting studies. Instead of patients having to travel to a physician's office for every data point to be collected in a particular study, they can be given the option to go to a pharmacy with a mini‐clinic, or to a local laboratory for a blood draw. Drug supplies can be shipped to their homes. Perhaps they will go to that physician's office for an initial screening, for key check‐in points, and to close out their treatment. But any chance to move even part of a study to the virtual environment—and relieve participant burden—should be taken advantage of.

  Within the next five years, I expect we'll see almost every clinical trial taking advantage of virtual trial designs. Some trials will be entirely virtual—like ADAPTABLE—but more likely we'll see a bimodal distribution (two prominent peaks, if we were to plot it on a graph) where most trials are either 20% virtual or 80% virtual. The former category will largely be in cases where patients are critically ill, or the therapies are complex to administer. We'll need them to be in the clinic more, and they may even want to be there, but there will be quality‐of‐life adjustments by using the virtual space to make certain aspects easier.

  The 80% peak will be for chronic conditions or for easily administered medications and evaluations of progression. These studies will contain elements that can be performed at home, at pharmacies, or by nurse practitioners making home visits, in addition to—as mentioned—perhaps a visit or two to a clinic to screen patients, enroll and train them on whatever they need to know for the study, and then close out their participation at the end of their course of therapy.

  These kinds of virtual trials are coming at a time when the life sciences industry needs to evolve more than ever. With increasingly precise medicines, the math tells us that the number of patients who will benefit from each medicine goes down. Finding the right patient for a breakthrough precision medicine is harder than finding the right patient for a medication designed to be more broadly administered. Thus, finding the right candidates (able, willing, and appropriate) for research projects becomes an even harder problem to solve.

  Accepting New Kinds of Data

  The second big piece of the trial discussion is that life sciences companies need to continue moving in the direction of broader data capture, from wearables and mobile devices to genetic sequencing and the retaining of biospecimens. Richer, broader data in trials means better analysis. The more variables there are, the more likely we can find the meaningful ones.

  Tarek Sherif, my co‐founder and co‐CEO at Medidata, talked to Pharma Times about this very issue back in 2016. “Historically in clinical trials,” he said, “we have collected more or less subjective data through diaries, paper…or by getting patients to come in to the clinic and do a test. These supposedly measure the efficacy of a treatment, but you are taking a snapshot in time.”10

  Indeed, those snapshots in time don't come close to the kind of objective data we can now get with better and more advanced patient instrumentation. We can now come closer than ever before to analyzing real‐world experience rather than merely a test result in a clinic. We can see what a patient's mobility is like, changes in step count during a trial, sleep data, and more. And while back in 2016 companies were starting to run electronic clinical trials, few were really committing to the idea of wearable trials.

  Things have improved since 2016, but not by enough. And even as we see more and more wearables being incorporated—from Fitbits to Apple Watches—we still don't see sensors in trials across the board. Genetic panels are regularly being utilized in oncology and other therapeutic areas, but we don't see full gene sequencing across all studies, or the kinds of proteomics that proved so powerful for David Fajgenbaum and the Castleman Disease Collaborative Network.

  Kara Dennis, Medidata's former managing director of mobile health—and one of the smartest thinkers I know about new technology in clinical trials—spoke to me about her take on these developments early on in the process of conceiving this book. “It will take some time for pharma to move away from the well‐validated, well‐proven measures that they've used for lots of patients over many years, but we are absolutely seeing the early steps, the process of validating the quality and usefulness of wearable data in studies,” Dennis told me.11

  The biggest challenges with digital data, she explains, are the quality of the sensors themselves, and whether subjects can use them properly. “Even with something as simple as a thermometer, the subjects may not be good enough at using it themselves, and there may be a difference between a clinician taking these measurements and subjects doing it on their own.” The other problem is compliance. “What kind of infrastructure do we need?” Dennis asks. “Will patients remember to use the device? Will they leave it on when they're supposed to, charge it, wear it at night if they're supposed to, or in the shower?”

  As these issues recede into the background—as wearables are more and more accurate, and function with less and less potential for user error (implantables, etc.)—the hope is that pharma will become more comfortable using them. An industry analyst at Gartner has said, “Seismic shifts in this market will not happen until the pharmaceutical lobby has confidence in the underlying systems supporting wearables, and that means that clinical validation expertise for wearables must improve.”12

  But the digital clinical trial is, fortunately, becoming more of a reality as time passes and knowledge and comfort grow, making trials more accurate, more efficient, and more patient‐friendly than ever before. We can use technology to remove physical barriers, geographic barriers, and temporal barriers that all make launching and completing a study more challenging and more expensive. Between video calling to connect patients, doctors, and researchers and the landscape of wearables, patients can be full trial participants without leaving their homes, and researchers can still get complete and accurate information, images, and data.

  It's one thing—albeit an important thing, without a doubt—to move trials into the twenty‐first century by accepting new technologies and data collection tools. It's an even bigger step to open up trial design itself, take the shackles off traditional mathematical design, and move into new statistical techniques, new ways to compare the safety, efficacy, and value of a therapeutic, and new paradigms through which we can speed up how quickly something can move from the laboratory setting into the market, helping patients far more quickly than ever before.

  Unshackling the Clinical Trial

  In the life sciences industry, since the days of Lind and his experiments with sailors and scurvy, we have been used to having a two‐to‐one ratio of patients to evidence. We need one patient treated with one medication and one patient treated with another—two people—in order to make a comparison. One patient gets a traditional course of chemotherapy for their cancer, while the other gets an immunotherapy. Or, one sailor gets lime juice to drink, and the other one seawater.

  This need is changing. Here in our state of data‐driven disease models, as we look for the equations that define the lines between what to treat and what not to treat (or between who to treat with an existing on‐market medication and who will be the best candidate for an experimental therapy), we can start to break that two‐to‐one paradigm and create a more steam table‐like view.

  With better instrumentation and richer patient data, we can begin to look at measures of safety, efficacy, and value in new ways. We will have to, in order to achieve a future state of precision medicine. If you think of the number of patients in a study—the total number—as the denominator in a fraction (where the numerator is how many of those patients benefit from the treatment), then as the treatments get more and more targeted, we will have a harder and harder time finding enough of them to reach statistically reliable conclusions. We need to get more units of evidence from each patient whose data gets incorporated into research in order to make the research possible in the precise world of tomorrow.
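
  One rough way to put numbers behind that fraction (the shorthand is deliberately simple, and the symbols are just labels rather than standard notation): call the total evidence a study needs to reach a reliable conclusion E, the number of patients enrolled n, and the units of evidence captured per patient e. Then, approximately,

  E ≈ n × e

  As therapies get more targeted, n inevitably shrinks, so the only way to keep E where it needs to be is to grow e: more measurements, more continuous data, more evidence from every participant who does enroll.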

  The phrase “digital transformation” is often thrown around by pharmaceutical executives, as they—correctly and with good intent—realize that the infrastructure and processes they use for research and development are begging for modernization. But rethinking trial design (and breaking the two‐to‐one patient‐to‐evidence ratio) goes a step further. It's a critical step when we think about the ultimate goal of building disease graphs that can truly empower better prediction and decision‐making. To get our treat/don't‐treat lines to be as crisp and precise as possible, we need lots more evidence than we are currently generating, lots more data from our trial patients.

  It is so easy now to dive in deeper than we used to be able to—to get higher‐resolution measurements from sensors, to parse through patient histories, or to use artificial intelligence to find connections that we couldn't identify on our own. We don't have to miss episodes in episodic disease, because we can now gather data in real time, 24/7. We don't just need to draw the binary conclusion of whether, say, lime juice is the right treatment for scurvy. We can go further and try to figure out how much lime juice is the right amount, and whether that changes if you're a man or a woman, a child or an adult, or if you have any number of comorbid conditions. We need this increased data to be able to say with confidence whether to treat a high PSA result or not, whether Keytruda will be better for you than conventional chemotherapy, and whether you are going to have clinical signs of Alzheimer's disease while it still matters, or not until you're projected to be 180 years old. The digital infrastructure makes this possible like never before.

  Enter Thomas Bayes

  Thomas Bayes was a statistician in the 1700s whose work ultimately led to a split in the world between two schools of statistical methodology: the frequentist and the Bayesian. Put simply, a frequentist approach to determining the chance that a coin toss will result in either heads or tails requires us to decide first on a number of times that we will toss the coin, measure the outcome, and then, finally, calculate our conclusions. A Bayesian approach, alternatively, allows for adjustment on the fly. Our predictions don't need to wait for the full set of data. We can modify our expectations and our hypotheses as we see more and more evidence.
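
  To make that distinction concrete, here is a minimal sketch in Python; the uniform prior and the handful of toss results are made-up assumptions, chosen only to show the mechanics. The point is that the estimate of the coin's bias can be revised after every single toss, instead of waiting for a predetermined number of tosses to finish.

# Minimal sketch of Bayesian updating for a coin toss.
# The Beta(1, 1) prior (no initial opinion) and the toss data are illustrative.

def update_beta(alpha, beta, heads, tails):
    # Conjugate Beta-Binomial update: observed heads and tails are added
    # directly to the prior's parameters.
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    # Current best estimate of P(heads), given everything seen so far.
    return alpha / (alpha + beta)

alpha, beta = 1.0, 1.0            # uniform prior over the coin's bias
tosses = [1, 0, 1, 1, 0, 1]       # 1 = heads, 0 = tails (made-up data)

for i, outcome in enumerate(tosses, start=1):
    alpha, beta = update_beta(alpha, beta, outcome, 1 - outcome)
    print(f"After toss {i}: estimated P(heads) = {posterior_mean(alpha, beta):.3f}")

  A frequentist analysis of the same coin, by contrast, would fix the number of tosses in advance and draw its conclusions only after all of them had been observed.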

  With coin tosses, each toss is a trivial amount of effort—assuming we already have the coin—so deciding to toss a coin 100 times in order to figure out how many heads to expect in the future is a reasonably trivial proposition. But when it comes to patients—real people who are looking to extend their lives or increase the quality of them—it's not trivial at all. One hundred trial subjects—just to form an initial understanding of whether and for whom a treatment works—is a lot of people exposed to something that may not help them.

  Using Thomas Bayes' statistical techniques, we can do better. We can expose as few patients as possible to a treatment that won't work and instead give it to the maximum number of people for whom it will. We can get our therapies through the research process more quickly, to make them more generally available. We can learn something about the nature of a coin toss every time we perform one—which means fewer coin tosses are needed to draw a conclusion. In other words, we can break the two‐to‐one patient‐to‐evidence ratio requirement.

  Don Berry, a professor at the University of Texas M.D. Anderson Cancer Center and the founding chair of its department of biostatistics, is the designer of I‐SPY 2, a breast cancer study that marks the largest and arguably most successful use of Bayesian statistics in clinical trials to date. Berry's work on bringing Bayesian statistics into medical science has been pioneering, and it links directly to the ideas we just talked about in the previous section. When you talk to Berry, you realize how applicable Bayesian thinking is to bringing precision medicine to research.13

  Instead of taking the frequentist approach—where we need all of a study's data in order to even make an initial estimate of therapeutic value—the Bayesian approach lets us start with a probability distribution for that value, based on past knowledge, and then use new data to update that distribution as the study goes along. Simply put, the probability distribution acts as a function—an equation—that describes the expected outcome of the experiment, and how likely a treatment is to be effective for a patient.

  Thus, rather than starting with an assumption, with no idea if that starting assumption is correct, or how to adjust it along the way if it's not, we can keep learning as a study proceeds. We can't predict perfectly, but we can create better and better estimates based on what we already know about the world, about patients, and about how they respond. We can keep updating predictions, using today's data to figure out with greater likelihood where we will be tomorrow. And, ultimately, we can move patients around during a trial in order to maximize their outcomes, and maximize what we can learn from the trial, without sacrificing the objectivity and statistical value.
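
  As a sketch of what that updating can look like in practice, consider a single‐arm study monitored patient by patient. The uniform prior, the 30 percent reference response rate, the decision threshold, and the simulated outcomes below are all illustrative assumptions rather than the design of any particular trial; the mechanics are what matter: each new result sharpens the posterior, and the study can act as soon as the evidence is strong enough.

# Sketch of sequential Bayesian monitoring of a response rate.
# Prior, reference rate, threshold, and outcomes are illustrative assumptions.
import random

random.seed(0)

REFERENCE_RATE = 0.30    # historical response rate the new treatment must beat
THRESHOLD = 0.99         # stop when P(rate > reference) exceeds this
alpha, beta = 1.0, 1.0   # uniform Beta prior on the treatment's response rate

# Simulated patient outcomes: 1 = responded to treatment, 0 = did not.
outcomes = [1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1]

for n, responded in enumerate(outcomes, start=1):
    alpha += responded
    beta += 1 - responded
    # Monte Carlo estimate of P(response rate > REFERENCE_RATE) under the posterior.
    draws = [random.betavariate(alpha, beta) for _ in range(20000)]
    prob_better = sum(d > REFERENCE_RATE for d in draws) / len(draws)
    print(f"Patient {n:2d}: P(rate > {REFERENCE_RATE:.2f}) = {prob_better:.3f}")
    if prob_better > THRESHOLD:
        print(f"Strong evidence of benefit after {n} patients; the design could stop here.")
        break

  In a real study, the prior and the decision rule would be prespecified and justified up front, but the underlying loop is this simple: observe, update, decide.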

  Put simply, we learn as we go, explains Berry. And if data from other trials helps us make better inferences about our current one, then we can and should use it to the extent that it is statistically valuable to do so. The I‐SPY 2 trial is aimed at finding the best treatments for early breast cancer in high‐risk patients whose cancer has not yet become metastatic. What are the best therapies for treating this disease effectively? Figure 11.1 is a graphical representation of the kind of trial design pioneered by studies like I‐SPY 2.

  If a therapy demonstrates poor results for a particular subtype of patients in the trial, patients with that subtype get a lower and lower probability of being assigned to that therapy, all the way down to zero if the treatment most likely has no value for such patients. That is something you can't do in a standard two‐arm trial: if a therapy isn't working, the trial is over, and you have failed. But in a multi‐arm adaptive trial like I‐SPY 2, there are multiple experimental therapy arms (as well as a standard‐of‐care control arm) and a set of genetic tests used to establish which therapies show the best outcomes for patients with particular genetic profiles.

  Figure 11.1 Collaborative Bayesian adaptive trials

  Trials with multiple drugs in different arms, taking advantage of Bayesian adaptive assignment of patients to the drugs most likely to help them, all share similar designs. Patients enter the study, and data, including a biomarker profile, is collected before they are assigned to a treatment. That profile determines which previously enrolled patients (as well as the patients who will come after them) are “like” them. Patients are randomly assigned to a therapy, but with a bias toward drugs that have helped patients like them in the past. The outcomes are measured, the mathematical models relating combinations of biomarkers to likely successful and unsuccessful treatments are updated, and this data is used when the next patient enters the study. Note that this is a continuously running cycle, with patients constantly enrolling and models being updated while patients are being treated. Finally, when enough evidence is amassed showing that a particular drug works well for a particular group of patients as defined by their initially measured biomarkers, it can be “graduated” from the study and moved on for regulatory approval. Similarly, drugs that simply don't work for enough people of any profile are dropped, and room is made for potentially more drugs to become part of the treatment options in the study.
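
  A stripped‐down sketch of that cycle might look like the following; it is written in the spirit of the design in Figure 11.1, not as a description of the actual I‐SPY 2 implementation. The arm names, biomarker subtypes, “true” response rates, and patient counts are all invented for illustration. What matters is the loop: measure the biomarker profile at entry, assign with a bias toward arms that have helped similar patients, record the outcome, and update the model before the next patient arrives.

# Stripped-down sketch of Bayesian adaptive randomization (Thompson sampling).
# Arms, biomarker subtypes, "true" response rates, and counts are illustrative.
import random

random.seed(1)

SUBTYPES = ["subtype_A", "subtype_B"]
ARMS = ["control", "drug_1", "drug_2"]

# Hypothetical true response rates, used only to simulate patient outcomes.
TRUE_RATES = {
    ("subtype_A", "control"): 0.25, ("subtype_A", "drug_1"): 0.55, ("subtype_A", "drug_2"): 0.30,
    ("subtype_B", "control"): 0.25, ("subtype_B", "drug_1"): 0.25, ("subtype_B", "drug_2"): 0.60,
}

# One Beta(1, 1) posterior per (subtype, arm); these are the models that get
# updated every time an outcome is observed.
posteriors = {key: [1.0, 1.0] for key in TRUE_RATES}

def assign_arm(subtype):
    # Thompson sampling: draw a plausible response rate from each arm's
    # posterior for this subtype, then assign the patient to the best draw.
    # Arms that have done well for patients "like them" win more often.
    draws = {arm: random.betavariate(*posteriors[(subtype, arm)]) for arm in ARMS}
    return max(draws, key=draws.get)

for _ in range(300):                               # patients enrolling over time
    subtype = random.choice(SUBTYPES)              # biomarker profile at entry
    arm = assign_arm(subtype)                      # biased, not fixed, randomization
    responded = random.random() < TRUE_RATES[(subtype, arm)]
    a, b = posteriors[(subtype, arm)]              # update the model immediately
    posteriors[(subtype, arm)] = [a + responded, b + (not responded)]

for (subtype, arm), (a, b) in sorted(posteriors.items()):
    n = int(a + b - 2)                             # patients assigned to this cell
    print(f"{subtype}  {arm:7s}: n = {n:3d}, estimated response rate = {a / (a + b):.2f}")

  The real trial's graduation and futility rules are far more sophisticated than this, but the flavor is the same: assignment probabilities drift toward the arms that are working for each biomarker profile, and away from the ones that are not.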

 
