Novel Digital Endpoints: Adoption Holdups Include Device Variability, Managing Change

By Deborah Borfitz 

March 11, 2020 | Novel digital endpoints, including motion-detecting sensors and electronic patient-reported outcomes, hold tremendous promise for advancing precision medicine, expanding the reach of clinical research and reducing the burden of study participation. But adoption is being slowed by the abundance of tools and measures, sponsor inexperience with the technologies, and unfamiliar risks that software introduces into trials. 

Those were the overarching themes of a panel discussion held at the 11th Annual Summit for Clinical Ops Executives (SCOPE) in Orlando. “We have to remind ourselves that existing measures are not perfect,” says panelist Jennifer Goldsack, executive director of the Digital Medicine Society. Devices that capture snapshot moments can also sometimes be “good enough.” 

Of interest to study participants is the “ability to collect data more passively” so they have time for more enjoyable activities, says Mike Nolte, CEO of Signant Health. “If they have a life-threatening disease, time is even more compressed and limited.” 

Collecting patient outcomes digitally, rather than in person and on paper, also improves data accuracy and removes the geographic barriers to making site visits, Nolte adds. 

Central nervous system disorders lend themselves well to novel digital endpoints that measure movement and can serve as a model for other therapeutic areas, according to Sina Djali, head of clinical and operations analytics at Janssen (owned by Johnson & Johnson). Digital endpoints are apt to be used next in the cardiovascular arena, including sensors to detect atrial fibrillation.

The trend has been toward functional testing at home, but “great opportunities” exist for tools that measure complex behavioral responses, says Jeremy Wyatt, president of ActiGraph. As evidenced by accelerating consumer adoption and approvals by the Food and Drug Administration, the devices themselves are not the problem. 

But study sponsors tend to “throw technology in” to rescue studies rather than plan for its introduction so the sensors are validated for the specific context of use, Wyatt says. The clock systems of sensors also tend to vary from one device to the next.

The Digital Medicine Society recently released a crowdsourced library of 41 unique digital endpoints used in industry-sponsored studies, Goldsack says. A 2018 literature review of mobile devices used to measure outcomes in clinical research identified over 150 different measures of activity. 

Goldsack’s recommendations include “communication, thoughtful collaboration, a systematic framework to measure [a tool’s performance] and a more modular approach” to the development and use of digital biomarker products. A standard documentation approach would also help sponsors evaluating their options identify which version of a technology was used in a study.
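No such documentation standard exists yet, but a minimal sketch suggests what one record might capture. The fields below (device model, firmware and algorithm versions, context of use, sampling rate) are illustrative assumptions, not a published schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EndpointProvenance:
    """Hypothetical minimal record documenting a digital endpoint;
    the fields are illustrative, not a published standard."""
    device_model: str
    firmware_version: str
    algorithm_version: str
    context_of_use: str
    sampling_rate_hz: int

record = EndpointProvenance(
    device_model="wrist-worn actigraph",
    firmware_version="2.4.1",
    algorithm_version="step-count-v3",
    context_of_use="ambulatory, at-home monitoring",
    sampling_rate_hz=30,
)

# Stored alongside the endpoint data so reviewers can later tell
# exactly which tool version produced which measurements.
print(json.dumps(asdict(record), indent=2))
```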

Skill-Building 

The “headwind around adoption” of digital solutions is partly related to challenges such as software bugs and version control, says Nolte, as well as newfound visibility into missing data. Technical solutions also require a different skillset than paper-based clinical outcome assessments do.

One of the bigger issues for pharma companies is “trying to figure out what to measure,” says Djali. Data management is also tricky when a secondary digital endpoint is added to a larger trial. 

“A company the size of Janssen will not turn on a dime,” notes Djali. “We’re building our [data management] capabilities, but it’s a slow process. How do we bring data scientists on board? There aren’t enough of them, so we’re trying to organically grow that core of [experts].”  

Regulators have never been “anything but open-minded” about the adoption of digital endpoints, says Goldsack. “But they ask questions. Why are you using this versus a traditional measure? To improve the patient experience, or to capture data more accurately or completely… or [because it’s] a new phenomenon?” An evidence-based rationale for going digital effectively counters any potential pushback.

“[Adoption] is mostly not a technical problem,” Nolte reiterates. “It’s a function of reliability and how data get used and what data get produced and if… one device versus another captures the same data the same way.” Even consumer devices like Garmin and Fitbit count steps differently, he notes. Without standards for how data get used, the same measurements can mean different things and be interpreted in different ways.
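Neither Garmin nor Fitbit publishes its step-counting algorithm, but a minimal Python sketch illustrates the point: two plausible threshold-based peak detectors, run over the same synthetic wrist-accelerometer signal, can report different step totals. All parameter values here are hypothetical.

```python
import numpy as np

def count_steps(accel_mag, threshold, min_gap):
    """Count steps as upward crossings of `threshold`, ignoring
    crossings closer than `min_gap` samples to the previous step."""
    steps, last = 0, -min_gap
    for i in range(1, len(accel_mag)):
        crossed = accel_mag[i - 1] < threshold <= accel_mag[i]
        if crossed and i - last >= min_gap:
            steps += 1
            last = i
    return steps

# Synthetic 30 Hz wrist-accelerometer magnitude: a ~1.8 steps/second
# gait plus random arm-swing noise, over 60 seconds.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / 30)
signal = 1.0 + 0.4 * np.sin(2 * np.pi * 1.8 * t)
signal += 0.15 * rng.standard_normal(t.size)

# Two hypothetical vendor configurations disagree on the same raw data.
print(count_steps(signal, threshold=1.10, min_gap=8))   # lenient detector
print(count_steps(signal, threshold=1.30, min_gap=12))  # conservative detector
```

Even modest differences in the threshold or the refractory period shift the total, which is one reason counts from two wrist devices rarely match.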

“The problem with measuring steps is they’re being measured on the wrist,” says Wyatt, explaining why counts may be off. Part of the problem is that the measurement system was set up to take paper inputs, not telemetry data. Digital endpoints also require a lot of data transfer agreements and an understanding of the time gaps created by daylight saving time.
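As a concrete illustration of the daylight saving problem, here is a minimal Python sketch (the device readings and site time zone are hypothetical) that normalizes device-local wall-clock times to UTC across the 2020 US spring-forward transition. Two readings that look two hours apart on the device’s clock turn out to be only one hour apart in real time.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical device log: wall-clock local time, no zone information.
raw_readings = [
    ("2020-03-08 01:30:00", 412),  # before the US spring-forward transition
    ("2020-03-08 03:30:00", 398),  # 02:00-03:00 local time never occurred
]

site_zone = ZoneInfo("America/New_York")

for local_str, steps in raw_readings:
    naive = datetime.strptime(local_str, "%Y-%m-%d %H:%M:%S")
    aware = naive.replace(tzinfo=site_zone)   # attach the site's time zone
    utc = aware.astimezone(timezone.utc)      # normalize for analysis
    print(f"{local_str} local -> {utc.isoformat()} UTC, steps={steps}")
```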

“All of us are gearing up to share 50,000 participants and the accuracy question is key,” says Goldsack. “How representative is the data?” Vulnerable populations need to be included in the training data on which algorithms are built, and robust governance is needed to ensure people properly consent to how their data are used and accessed, since anonymization is “almost impossible.”