Overcoming Barriers To Using Artificial Intelligence In Clinical Research

By Deborah Borfitz 

July 20, 2021 | Current and potential uses of artificial intelligence (AI) and automation in clinical research, and ways to overcome common barriers, were discussed by a panel of industry experts at the recent DIA 2021 Global Annual Meeting. Two multi-stakeholder community of practice groups now meet regularly to explore ways to leverage AI and machine learning (ML), one focused on improving trial quality and cycle times and the other on producing high-quality protocols and reducing risks before the start of a study, according to Linda Sullivan, co-founder and executive director of WCG’s Metrics Champion Consortium (MCC).

The MCC launched the AI/ML Community of Practice Group in 2019 and the QbD (quality by design)–RBQM (risk-based quality management)–Quality Improvement Community of Practice Group in 2020. The latter group grew out of a presentation on the use of AI in risk-based quality management where clinical operations professionals voiced concern about computers making decisions on their behalf, she says. They were clearly neither knowledgeable about nor comfortable with AI and ML, although such tools could potentially make their jobs easier.

Over 40 processes have been computerized by the digital and decentralized solutions group at PPD, where Brandon Graham serves as director. The tasks include data entry, reconciliation, generation, and review, as well as workflow automation, says Graham, a leader in the company’s robotic process automation (RPA) capabilities center. 

RPA has been deployed across many parts of the organization, including the back office (human resources, finance, and IT) and the clinical operations group for study startup tasks and pharmacovigilance, Graham says. The focus is on replacing manual reporting tasks to “free up staff time for high-value activities.”
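
To give a flavor of the repetitive reporting work such automation targets, here is a minimal sketch of a scripted reconciliation between two data extracts, the sort of comparison otherwise done by hand. The file names and columns are hypothetical and do not describe PPD’s actual tooling.

import pandas as pd

# Minimal sketch of a reconciliation task of the kind RPA can take over:
# comparing a clinical database extract against a vendor listing and flagging
# discrepancies for human review. All names are illustrative.

def reconcile(edc_path: str, vendor_path: str) -> pd.DataFrame:
    edc = pd.read_csv(edc_path)        # e.g., subject_id, visit, lab_value
    vendor = pd.read_csv(vendor_path)  # the same keys from the central lab

    merged = edc.merge(
        vendor,
        on=["subject_id", "visit"],
        how="outer",
        suffixes=("_edc", "_vendor"),
        indicator=True,
    )
    # Keep rows present in only one source or with mismatched values.
    mismatched = (merged["_merge"] != "both") | (
        merged["lab_value_edc"] != merged["lab_value_vendor"]
    )
    return merged[mismatched]

if __name__ == "__main__":
    issues = reconcile("edc_extract.csv", "vendor_listing.csv")
    issues.to_csv("reconciliation_queries.csv", index=False)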

For clinical operations staff, MCC’s Community of Practice groups are especially useful in providing a “safe place” to learn more about AI and ML, dispel fears (e.g., of job loss), and see how the concepts are being successfully applied in the real world, says Sullivan. “That’s the first step.”

Upskilling subject matter experts is also important so they can be a “partner at the table,” she adds. “It’s all about change management and managing expectations.” Among the tactics MCC has used is a demonstration by a technology vendor and a pharma company of how ML has been used to improve the data query process, “to lift what happens in that black box.”

From a quality perspective, process automation at PPD is taking over unpopular, repetitive tasks, saving teams “thousands of hours” monthly, says Graham. “People are positive and excited” and also have more time to devote to quality reviews and project oversight. 

Resource Constraints

The key adoption barrier for smaller companies is that their revenue stream is based on no more than a few products, says independent consultant Saeeduddin Ahmed, M.D. Those resource constraints mean the people and infrastructure needed for newer technologies are unavailable. Smaller companies may not even have the expertise to evaluate vendors, he adds. Overall, they tend to be risk-averse and stick to “tried-and-true” processes.

Part of the learning curve for smaller companies is that past practices may be riskier than the way things can be done today, says Sullivan. Being heavily reliant on outsourcing, they may view technology as their CRO’s “problem” to solve. Small companies also do not have access to enough data to do risk-based quality management using AI and ML solutions.

Where to find staff with critical thinking skills who know what to do with the generated information is a more universal conundrum, she continues, as is “not trusting data that the number-crunching happens on”—particularly when the outputs seem nonsensical. “[Currently], we do not always pay attention to data sitting in our CTMS [clinical trial management system].”

Identifying key risk indicators is often a highly iterative process that gets better at measuring the right things over time, says Sullivan. Some measures (e.g., protocol deviations) will also naturally increase over the course of a study, and unneeded alerts will only annoy study teams and breed distrust in their utility. “Think differently about how you measure things to minimize false positive signals, so [study staff] start to trust what comes out.”
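
As a hypothetical illustration of that point, not drawn from the panel, the short sketch below scores sites on protocol deviations per 100 subject-visits rather than on a raw count, which inevitably grows as a study accrues visits; the threshold and numbers are invented.

from dataclasses import dataclass

# Hypothetical illustration: a deviation rate normalized by exposure only
# alerts when a site is genuinely out of line, whereas a raw deviation count
# rises with enrollment and eventually trips any fixed threshold.

@dataclass
class SiteMetrics:
    site_id: str
    deviations: int
    subject_visits: int

def flag_sites(sites: list[SiteMetrics], rate_threshold: float = 5.0) -> list[str]:
    """Return IDs of sites whose deviations per 100 subject-visits exceed the threshold."""
    flagged = []
    for s in sites:
        if s.subject_visits == 0:
            continue  # no exposure yet, nothing to signal
        rate = 100.0 * s.deviations / s.subject_visits
        if rate > rate_threshold:
            flagged.append(s.site_id)
    return flagged

sites = [
    SiteMetrics("101", deviations=12, subject_visits=480),  # 2.5 per 100: no alert
    SiteMetrics("102", deviations=9, subject_visits=95),    # ~9.5 per 100: alert
]
print(flag_sites(sites))  # ['102']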

Executive-level support and engagement can help scale up automation, says Graham. “Without that, it is difficult for any team to get the traction it needs.” 

At PPD, automation efforts are overseen by an executive steering committee with representatives from IT, finance, and various business units, with membership weighted toward those with the highest potential to use the technologies, Graham continues. The marketing and communications department was also leveraged to create an introductory video for the organization on what RPA is versus AI/ML, so staff knew which group to talk to about different opportunities.

Business function owners at PPD have partnered with Graham’s RPA team to filter ideas coming from within the company and identify the highest value opportunities, he says. 

In the data management arena, a presentation was made to MCC practitioner groups on the use of ML to categorize quality issues into a handful of buckets based on “patterns humans may or may not see,” Sullivan shares. Natural language processing allows unstructured data in monitoring reports to appear in corrective and preventive action (CAPA) plans. The system can catch issues that people working in clinical operations might miss, she notes, and examples of this are used to educate them on the power of the technology.
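
The article does not detail how that system works, but a minimal sketch of the general approach, grouping invented free-text findings into a few buckets with TF-IDF features and k-means clustering, gives a sense of what surfacing “patterns humans may or may not see” can look like.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Minimal sketch, not the system presented to MCC: cluster free-text quality
# findings into a handful of buckets. The findings below are invented.
findings = [
    "Informed consent form signed after first study procedure",
    "Consent version not current at time of signature",
    "Blood sample stored above allowed temperature range",
    "Lab kit expired before sample collection",
    "Visit 3 performed outside protocol-defined window",
    "Subject visit out of window due to scheduling error",
]

features = TfidfVectorizer(stop_words="english").fit_transform(findings)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for bucket, text in zip(labels, findings):
    print(bucket, text)  # similar findings tend to land in the same bucket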

Tips For Beginners 

Start the RPA/ML journey with a “simple process done repeatedly,” advises Graham. “We started with a couple more complex projects that were ultimately successful but took a lot longer than if we had more experience at the time.”    

He additionally suggests starting with RPA since it is a mature, well-defined process. AI and ML both require a large amount of data of sufficiently high quality and, if those are lacking, “don’t go down that road.” 

Ahmed agrees, highlighting the need for education and for beginning with small, low-risk projects, for example testing RPA/ML in phase 4 clinical trials before using it for signal detection in phase 2 studies. One possibility is 24/7 movement tracking with digital biomarkers to detect sleep and activity patterns, even functioning, he adds.
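
As a toy illustration of the digital-biomarker idea, and not Ahmed’s actual proposal, the sketch below scores invented hourly wearable activity counts as sleep or wake with a simple threshold; real actigraphy algorithms are considerably more sophisticated.

# Toy example: classify each hour of wearable activity counts as sleep or wake
# using a simple threshold, then summarize the day. All numbers are invented.
HOURLY_ACTIVITY_COUNTS = [
    5, 2, 0, 1, 3, 40, 220, 310,              # midnight to 8 a.m.
    290, 350, 410, 380, 300, 270, 320, 400,   # daytime
    360, 310, 250, 180, 90, 30, 10, 4,        # evening wind-down
]

SLEEP_THRESHOLD = 20  # counts per hour below which the hour is scored as sleep

def score_day(counts: list[int]) -> dict:
    sleep_hours = sum(1 for c in counts if c < SLEEP_THRESHOLD)
    return {"sleep_hours": sleep_hours, "active_hours": len(counts) - sleep_hours}

print(score_day(HOURLY_ACTIVITY_COUNTS))  # {'sleep_hours': 7, 'active_hours': 17}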

If companies are risk-averse now, “that will change by itself in a few years,” he points out. Given that up to 90% of data is unstructured, it will be too risky not to use these tools.