One Drop at a Time
Jeffrey Dachis is the founder, chairman, and CEO of OneDrop, a diabetes management company that isn't quite tackling the artificial pancreas problem, but is trying to use patient data (and its own hardware and software platforms) to get as close as it can without actually dispensing insulin automatically. On its face, OneDrop is simply a better version of what already exists across the industry: a fancier glucose testing device with a well‐designed app, a direct‐to‐consumer subscription plan for testing strips, and a community element that lets patients connect with each other online.
But there's more: in 2018, OneDrop unveiled its Automated Decision Support system, which leans on data from its users to predict future blood glucose values, helping users decide how much insulin they need.13 The algorithm working behind the scenes uses personal data along with information about “similar” users, with 91% of predictions falling within ±50 mg/dL of the actual glucose value and 75% within ±27 mg/dL. It's not quite an artificial pancreas, but it does add value beyond traditional blood glucose meters, which merely report the numbers.
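To make those accuracy figures concrete, here is a minimal sketch of how a "percent of forecasts within a tolerance band" metric can be computed. The tolerances mirror the ±50 and ±27 mg/dL figures cited above; the forecast and meter values themselves are invented for illustration, and this is not OneDrop's actual evaluation code.

```python
def pct_within(forecasts, actuals, tol_mg_dl):
    """Share of forecasts landing within ±tol_mg_dl of the measured value."""
    hits = sum(1 for f, a in zip(forecasts, actuals) if abs(f - a) <= tol_mg_dl)
    return 100.0 * hits / len(forecasts)

forecasts = [110, 145, 98, 200, 160]   # predicted glucose, mg/dL (made up)
actuals   = [120, 140, 130, 170, 158]  # later meter readings (made up)

print(pct_within(forecasts, actuals, 50))  # 100.0
print(pct_within(forecasts, actuals, 27))  # 60.0
```

The same function, run over thousands of real forecast/reading pairs, would yield the kind of population-level accuracy statistic the company reports.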
The company was started after Dachis, a co‐founder of the media agency Razorfish, was diagnosed with diabetes at age 47. According to a profile in the online site New Atlas, Dachis had suddenly lost 20 pounds in less than eight weeks and was constantly thirsty.14 He ended up diagnosed with latent autoimmune diabetes of adulthood (LADA), a rare form of type 1 diabetes often misdiagnosed because it appears after age 30.15
Dachis was surprised to be given almost no guidance upon diagnosis—“six minutes with a nurse practitioner, who gave him an insulin pen and a prescription”16—and so he decided to take matters into his own hands, interviewing hundreds of diabetics to figure out how he could best address their needs with technology. “This is really a user experience problem, not a medical problem,” he told New Atlas in 2017. “There's all this complex, psychosocial stuff going on with this diabetes condition that has nothing to do with the medical industry.”
The OneDrop system collects glucose readings, physical activity measures (synced through trackers like the Fitbit or Apple Watch), food intake, and medication tracking and presents the data through easy‐to‐read visualizations in its app. Plus, there is chat support with certified diabetes educators and community features so patients can learn from experts and from each other. Dachis told the online site Healthline, “When I was diagnosed I thought, there has to be somebody who's cracked this code already—there has to be the cool gear, the stuff that's going to combine Internet of Things, Quantified Self‐ers, Mobile Computing, and Big Data.”17 He couldn't find it. And so Dachis created OneDrop.
Clinical studies have borne out the usefulness of the OneDrop system. The journal Diabetes reports on OneDrop's Automated Decision Support feature, finding that in one sample of 28,838 forecasts sent to 5,506 users, 92.4% were rated “useful” by those users.18
It's all about connecting the disparate buckets of data, Dachis told the industry site Diabetes in Control.19 “[D]iabetes is such a data‐driven disease and yet all the data that you need to manage the disease, sits in all these different places,” he said. Between carbohydrate data, medication data, insulin data, physical activity, and blood glucose levels, everything was in disparate places without any coordination. “We automatically track your blood glucose if you're using one of the connected meters,” Dachis said, as well as everything else, coordinating it to provide recommendations and advice to patients, and let them make more informed choices.
Unlike the Medtronic artificial pancreas and the others in development, OneDrop isn't just for type 1 diabetics, who make up just 5% of the diabetic population. The decision support specifically is meant for type 2 diabetics, the more than 400 million people worldwide who have to think about this problem every day.20 The app pops up notifications with actionable tips—take a 15‐minute walk if your blood glucose is going up, for instance.21
“You can only learn so much by looking back at what already happened,” said Dr. Dan Goldner, vice president of Data Science Operations at OneDrop, in a press release announcing the Automatic Decision Support feature. “We want to empower you to look ahead—to see what's coming and know what you can do about it. Like the collision‐avoidance system in your car, glucose forecasts give you information in time for you to take action and shape the course of your diabetes.”22
Goldner provides a great analogy—the collision‐avoidance system in your car—that encapsulates what we can hope to be able to accomplish for chronic diseases beyond just diabetes. These systems can be more than just trackers, but real tools to get patients to behave in optimal ways and avoid problems that would otherwise arise. For too long, we have been forced by the available technology to be satisfied with “good enough”—as long as you weren't in crisis, you were doing okay. But now, we have the ability to be genuinely proactive in our approaches, to not just settle for good enough but to aim for optimal.
Of course, chronic diseases give us time to learn from an individual's data and patterns, and the ability to refine our predictions day after day, week after week, month after month. There is another set of problems out there that bring a different challenge—acute problems like the flu and sepsis—which are the next ones we can take a look at.
Notes
1. Veena Misra, interview for The Patient Equation, interview by Glen de Vries and Jeremy Blachman, August 11, 2016.
2. Engineering Communications, “NSF Engineering Research Centers: ASSIST and FREEDM,” College of Engineering News, @NCStateEngr, October 9, 2017, https://www.engr.ncsu.edu/news/2017/10/09/nsf-engineering-research-centers-assist-and-freedm/.
3. Veena Misra, “Smart Health at the Cyber‐Physical‐Human Interface,” NAE Regional Meeting at the University of Virginia, May 1, 2019, https://engineering.virginia.edu/sites/default/files/common/offices/marketing-and-communications/Veena%20Misra%20NAE%20Talk%20Final.pdf.
4. Engineering Communications, “NSF Engineering Research Centers: ASSIST and FREEDM.”
5. Veena Misra, “Wearable Devices: Powering Your Own Wellness | Veena Misra | TEDxRaleigh,” YouTube Video, June 14, 2016, https://www.youtube.com/watch?v=noiKR_yWniU.
6. Kady Helme, “Why I Miss My Artificial Pancreas,” Forbes, December 22, 2014, https://www.forbes.com/video/3930264846001/#7eff48c61e78.
7. Amy Tenderich, “Artificial Pancreas: What You Should Know,” Healthline Media, April 2019, https://www.healthline.com/diabetesmine/artificial-pancreas-what-you-should-know#1.
8. Craig Idlebrook, “38 Percent of Medtronic 670G Users Discontinued Use, Small Observational Study Finds,” Glu, March 25, 2019, https://myglu.org/articles/38-percent-of-medtronic-670g-users-discontinued-use-small-observational-study-finds.
9. Clara Rodríguez Fernández, “The Three Steps Needed to Fully Automate the Artificial Pancreas,” Labiotech UG, March 11, 2019, https://labiotech.eu/features/artificial-pancreas-diabetes/.
10. Charlotte K. Boughton and Roman Hovorka, “Advances in Artificial Pancreas Systems,” Science Translational Medicine 11, no. 484 (March 20, 2019): eaaw4949, https://doi.org/10.1126/scitranslmed.aaw4949.
11. “What Is #OpenAPS?,” Openaps.org, 2018, https://openaps.org/what-is-openaps/.
12. Craig Idlebrook, “FDA Warns Against Use of DIY Artificial Pancreas Systems,” Glu, May 17, 2019, https://myglu.org/articles/fda-warns-against-use-of-diy-artificial-pancreas-systems.
13. One Drop, “Predictive Insights | Automated Decision Support,” One Drop, 2019, https://onedrop.today/blogs/support/predictive-insights.
14. Michael Irving, “One Drop: The Data‐Driven Approach to Managing Diabetes,” New Atlas, August 14, 2017, https://newatlas.com/one-drop-diabetes-interview/50885/.
15. “What Is LADA?,” Beyond Type 1, 2015, https://beyondtype1.org/what-is-lada-diabetes/.
16. Michael Irving, “One Drop: The Data‐Driven Approach to Managing Diabetes.”
17. Amy Tenderich, “OneDrop: A Newly Diagnosed Digital Guru's Big Diabetes Vision,” Healthline, March 19, 2015, https://www.healthline.com/diabetesmine/onedrop-newly-diagnosed-digital-guru-s-big-diabetes-vision.
18. Daniel R. Goldner et al., “49‐LB: Reported Utility of Automated Blood Glucose Forecasts,” Diabetes 68, Supplement 1 (June 2019): 49‐LB, https://doi.org/10.2337/db19-49-lb.
19. Steve Freed, “Transcript: Jeffrey Dachis, Founder and CEO of One Drop,” Diabetes In Control, November 19, 2016, http://www.diabetesincontrol.com/transcript-jeffrey-dachis-founder-and-ceo-of-one-drop-diabetes-app/.
20. Adrienne Santos‐Longhurst, “Type 2 Diabetes Statistics and Facts,” Healthline, 2014, https://www.healthline.com/health/type-2-diabetes/statistics.
21. One Drop, “Predictive Insights | Automated Decision Support.”
22. One Drop, “One Drop Launches 8‐Hour Blood Glucose Forecasts for People with Type 2 Diabetes on Insulin,” PR Newswire, June 8, 2019, https://www.prnewswire.com/news-releases/one-drop-launches-8-hour-blood-glucose-forecasts-for-people-with-type-2-diabetes-on-insulin-300864192.html.
6
Flumoji and Sepsis Watch—Two Approaches to Predicting and Preventing Acute, Life‐threatening Conditions Through Smarter Data
It's almost a governing principle in health care: the earlier you detect a problem, the less painful (and the more cost‐effective) the treatment will be. If you can identify problems early, you're stacking the deck in your favor, on all sorts of metrics—predictability of the course of treatment, odds of success, and reduced risk for an additional cascade of potentially costly and harmful problems down the line, as just a few examples. With the flu, catch it early and you can minimize the severity (and lower the overall productivity loss, if you're looking at things from a societal level) by giving patients an antiviral like Tamiflu. With sepsis, catch it early and you're absolutely saving lives. Or, to flip the statement around, catch it late and people will die.
Looking at the big picture, early (and accurate) detection—of every condition, every response, every reaction—is the ultimate reason to put energy into finding and deploying sophisticated patient equations. Whether we're looking for cancer, Alzheimer's disease, or, as in the previous two chapters, diabetes, asthma exacerbations, or impending ovulation—or whether we're looking for how a patient is responding to a treatment, or whether they're responding at all—the earlier we know, the more options we have, and the more room to maneuver and find the optimal solution going forward. It's important whether it's going to be a long‐fought war (like cancer) or a quick battle. Nowhere does this play out more critically than with fast‐acting issues like the flu or sepsis, where mere days or even hours can absolutely mean the difference between life and death.
Catching Sepsis Earlier
Sepsis—a patient's inflammatory response to an infection, leading to rapid organ failure and 50% mortality if it progresses to septic shock1—causes 6% of all deaths in the United States and $23 billion in annual medical costs,2 with over 1.5 million cases every year and more than 250,000 deaths.3 And yet early identification, as the researchers and clinicians behind Duke University Hospital's Sepsis Watch system put it, “remains elusive even for experienced clinicians.”4
The national average as far as catching sepsis “is about 50 percent,” Duke's Dr. Mark Sendak told the Inside Signal Processing newsletter.5 “A lot of places struggle with this problem.” At Duke, an average of seven to nine patients develop sepsis every day, with a nearly 10% mortality rate.6 The problem is that there is no one test, no one symptom, no one sure sign of sepsis. So finding it—and fighting it—has for too long been an idiosyncratic process, with doctors and nurses trying to get lucky and notice it before it's too late. Data, researchers at Duke realized, can help us.
In November of 2018, Sepsis Watch was launched at Duke after months of testing. It is a data‐powered artificial intelligence system designed to identify sepsis cases earlier than ever before, and to stop them before it's too late. The system incorporates dozens of variables—86 in all—including patient demographics, comorbidities, lab values, vital signs, medications, and more—pulling information from medical records every five minutes to try to identify patients with signs of sepsis before a doctor could possibly notice, and then alerting the hospital's rapid response team, prompting them to evaluate and potentially intervene.7 Once patients at risk are identified, their progress is tracked through four stages—triage, screened, monitoring, and treatment—ensuring that they are not ignored once they've been flagged by the system.
Before launch, Duke tested the model on retrospective patient data, finding that it could identify sepsis as much as five hours earlier than was typically happening—a huge advantage when it comes to treatment.8 In an interview with American Healthcare Leader, Eric Poon, Duke's chief health information officer, said, “All of us as clinicians have had experiences where we know the patient isn't quite looking right, but with so many things happening, it's hard to pick out those faint signals from the noise.”9 Now, they can react more quickly and potentially make a real difference in patient outcomes.
Partnering with Doctors, Not Replacing Them
The Sepsis Watch system is not an artificial pancreas for sepsis. It does not deliver medication, or even recommend treatment. It puts up a warning sign—and makes sure that doctors and nurses step in to evaluate. It's an example of data working to empower doctors and hospitals to deliver better care, not replace them. The AI can't do it all, Dr. Sendak told IEEE Spectrum.10 It's the doctors who are ultimately making the decisions.
But even this level of intervention can be difficult to introduce into hospitals and become a trusted part of the workflow, even when the data shows that it should. As the Duke team's submitted manuscript to the Journal of Medical Internet Research begins, “Successful integrations of machine learning into routine clinical care are exceedingly rare.”11
Madeleine Clare Elish of the Data & Society Research Institute writes about the difficulties of establishing trust in an artificial intelligence‐based system—and about Duke's Sepsis Watch system specifically—in a paper titled “The Stakes of Uncertainty: Developing and Integrating Machine Learning in Clinical Care.”12 She discusses how as something typically diagnosed by gut instinct, sepsis is a perfect candidate for machine learning, but that implementation of Sepsis Watch still had to be approached carefully to ensure that doctors wouldn't resent the perceived interference with their clinical judgment. Elish writes, “Healthcare, and in particular hospitals, have historically been slow to adopt new technologies…[even when they can] improve patient outcomes…. ‘It's a low hanging fruit, but the fruit has a thick stem. You can't really hit it.’”13
With lessons that can apply to anyone trying to bring change to a health care organization, Elish goes on to explain that it was important to incorporate end users—doctors and nurses—from the very beginning, ensuring that stakeholders felt engaged in the process and invested in the project's success. She also describes how the Sepsis Watch system was deliberately limited in capability—designed just to predict the first appearance of sepsis in a patient and flag it for the team, and not to dictate anything beyond that, so as to ensure that the system was perceived as supplementing the doctor's role, not replacing it.
She talks in addition about “alarm fatigue,” and the risk that alerts would be experienced as “more annoying than helpful.” It was important in developing the system—as it is in all similar types of systems—that the right balance is struck between alarming too often (meaning doctors will end up ignoring the warnings) and not alarming often enough (meaning septic patients will be missed, and no one will end up trusting the system to catch the patients it purports to). Physicians needed proof that the system worked better than their judgment alone (which is why Duke ran its retrospective analysis to show that the system identified septic patients an average of five hours faster than baseline). At the same time, doctors also wanted to be sure that their autonomy was not threatened.
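That balancing act between too many alarms and too many missed patients is a threshold choice, and it can be made concrete with a small sketch. The risk scores, sepsis labels, and thresholds below are all invented for illustration; they are not Sepsis Watch data.

```python
def alarm_stats(scores, is_septic, threshold):
    """Count alarms fired and septic patients missed at a given threshold."""
    alarms = sum(1 for s in scores if s >= threshold)
    missed = sum(1 for s, sep in zip(scores, is_septic) if sep and s < threshold)
    return alarms, missed

scores    = [0.9, 0.7, 0.4, 0.3, 0.1]     # model risk scores (made up)
is_septic = [True, True, True, False, False]

print(alarm_stats(scores, is_septic, 0.8))  # (1, 2): few alarms, two patients missed
print(alarm_stats(scores, is_septic, 0.3))  # (4, 0): no one missed, more alarms
```

Sliding the threshold down catches every septic patient at the cost of more alarms; sliding it up quiets the system but lets cases slip through. Picking the operating point is exactly the trade-off Elish describes.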
These social factors are critical—not just here with Sepsis Watch but with any new technological implementation. Another point Elish makes that is worth noting: she found in talking to stakeholders at Duke that people preferred to call Sepsis Watch a “tool” rather than anything else, and that the term “predictive analytics” was preferred to “machine learning” or “artificial intelligence,” which both had more intrusive and threatening implications. A tool is simply there to help, not replace.
Eric Poon and his team are currently evaluating the initial results of Sepsis Watch at Duke.14 “We're not afraid to put something in,” Poon told American Healthcare Leader, “but we want to evaluate rigorously whether it makes an impact in patient care…. We want to innovate but make sure we are doing it smartly.”15 Indeed, smart hospitals need to be developing and deploying exactly these kinds of valuable tools in order to compete in the new data‐driven world. Catching sepsis sooner than before can make a huge difference to a hospital's overall patient outcome statistics, giving it a competitive edge and helping the organization in a whole host of ways.
Looking Beyond Sepsis
Duke's Sepsis Watch is not the only hospital‐based system looking to incorporate data intelligence into practice. El Camino Hospital in California is using machine learning around a set of risk factors in order to predict the likelihood of patients falling.16 In the first six months of their program, they saw a 39% reduction in falls.17
Tested at Kaiser Permanente of the Northwest in Oregon and Washington State, ColonFlag is a machine learning algorithm that produces risk scores from patient data to determine who ought to be referred for colon cancer screening. The system was shown in one study to be 34% better than looking at low hemoglobin alone.18