AI in Clinical Trials: Industry Leaders Discuss Implementation Challenges and Future Directions
By Allison Proffitt
November 4, 2025 | BARCELONA—At SCOPE Europe, I hosted a panel of pharmaceutical industry leaders to discuss the real-world implementation of artificial intelligence in clinical trials, moving beyond theoretical benefits to address practical challenges and measurable outcomes.
We began with panelists sharing concrete examples of AI delivering value. Jonathan Crowther, Head, Operational Design Center, Merck KGaA, Darmstadt, Germany, emphasized that his team uses AI not simply to identify top-performing sites, but to holistically analyze available data and recommend opportunity sites that fit specific study criteria.
James Weatherall, PhD, Vice President & Chief Data Scientist, BioPharmaceuticals R&D, AstraZeneca, highlighted the widespread adoption of generic AI tools like ChatGPT and Claude across the organization. "I'm just seeing much more of an uptick in that," he noted, describing employees using AI assistants for everything from creating presentations to designing trials and hiring processes. "Enabling that overall use of productivity tools is going to help everybody in a large organization raise their game."
Janie Hansen, Global Development Information Management, Business Systems Transformation, Daiichi Sankyo, pointed to content automation across the development lifecycle as a key opportunity, while Francis Kendall, Head of Statistical Programming, Digital and Data Sciences, Biogen, discussed rolling out patient and site optimization tools, emphasizing the importance of collaboration between data scientists and clinical operations teams.
Process Redesign: The Real Secret to AI Success
A critical theme emerged early in the discussion: AI works best when processes are redesigned around it, rather than simply layering the technology onto existing workflows.
Hansen made this point explicitly using protocol optimization as an example. Traditional protocol drafting moves through siloed processes sequentially, often requiring multiple amendment cycles. With AI, teams can now receive insights on patient burden, cost, recruitment feasibility, and diversity considerations upfront—before finalizing the protocol.
"The technology is advancing at a rapid pace. However, the business processes were created before all of these technology advances," Hansen explained. "The technology strategy and the business strategy with change management need to be aligned."
Weatherall reinforced this point: "Rather than just assuming that an AI tool can be sort of dropped onto an existing process and that suddenly gets you these order of magnitude improvements... if we can sort of reinvent the process at the same time around the technology, rather than just expecting AI to be the fairy dust that improves it with the existing process, I think that's where the really big gains are."
He described how AstraZeneca is using novel AI endpoints, particularly in cardiovascular assessment. Rather than switching immediately from human adjudication to AI-based evaluation, AstraZeneca is running trials where both methods operate in parallel.
"We weren't going to go from one trial where it's entirely done with cardiologists to the next trial where it's entirely done with AI," Weatherall explained.
The parallel approach allows the company to validate AI performance while meeting regulatory expectations, and the results have been promising. Not only can AI match the adjudication threshold of expert cardiologists, but it can also accelerate timelines significantly.
"Often we found with AI, the way it can sort of automatically tap into some of the datasets, it can produce those adjudication events without having to wait for all the data to be kind of reconciled, and so it can actually save us months,” Weatherall reported.
This success, however, raises a philosophical question: "What's your gold standard if AI is now better than human performance? Because the gold standard used to be human performance."
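Weatherall did not detail the underlying statistics, but the parallel-run idea is easy to picture. Below is a minimal sketch, in Python, of how agreement between the two adjudication tracks might be summarized; the data, function names, and the binary event/no-event framing are hypothetical illustrations, not AstraZeneca's actual method.

```python
# Hypothetical parallel-run comparison: each suspected cardiovascular event
# gets a binary call (1 = confirmed event, 0 = not) from both the
# cardiologist committee and the AI model.

def percent_agreement(human, ai):
    """Fraction of cases where the AI call matches the human call."""
    matches = sum(h == a for h, a in zip(human, ai))
    return matches / len(human)

def cohens_kappa(human, ai):
    """Chance-corrected agreement between two binary raters."""
    n = len(human)
    observed = percent_agreement(human, ai)
    # Expected agreement if the two raters labeled independently
    # at their own base rates.
    p_h = sum(human) / n
    p_a = sum(ai) / n
    expected = p_h * p_a + (1 - p_h) * (1 - p_a)
    return (observed - expected) / (1 - expected)

# Invented example data for illustration only.
committee_calls = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
model_calls     = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

print(f"Agreement: {percent_agreement(committee_calls, model_calls):.0%}")
print(f"Cohen's kappa: {cohens_kappa(committee_calls, model_calls):.2f}")
```

A chance-corrected measure such as Cohen's kappa is the usual companion to raw agreement, since two raters can agree frequently by chance alone when events are rare—and, as Weatherall's question implies, both measures still assume the human panel is the reference standard.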
The Literacy Challenge: Beyond Automation
Francis Kendall issued a warning that resonated throughout the panel: the industry shouldn't stop at automation. "Why didn't we automate it in the first place?" he asked, challenging the room to think beyond current operations.
He urged companies to consider the external healthcare environment—EHR systems like Epic are exploring features that advise doctors on patient treatment—and how clinical trial data might feed such systems in the future. "It's not only going to the submission, but beyond the submission, what do we do to that external environment?"
The discussion then turned to a pressing regulatory requirement: the EU AI Act's mandate for AI literacy training across organizations. Weatherall offered practical advice: "What you want to be wary of is that, if an external auditor came through your doors tomorrow, you can demonstrate some kind of record that, yes, we rolled out some central training."
Hansen emphasized the importance of understanding responsible AI—including traceability and human-in-the-loop best practices. Kendall added three more essential elements everyone should understand: what AI is, how to use AI tools, and how to evaluate AI, knowing when to escalate to specialists.
But Crowther raised a provocative question: "Have we even mastered data literacy?" He noted that organizations are rushing to implement AI literacy programs without having adequately addressed foundational data literacy—and critically, without providing training on prompt engineering.
"Nobody is rolling out prompt trainers," he observed, describing how new employees receive hundreds of mandatory training modules but none on prompting or data fundamentals. He and other panelists recommended DataCamp as a resource for building training programs.
Making Time for Training in a Time-Starved Industry
But how do time-constrained professionals find hours for sandboxing and experimentation with AI tools?
Weatherall's response was blunt: "It's amazing how people find time when it's in your core mandatory company training." He compared it to code of conduct training—if you don't complete it, you can't work for the company. "This is getting to the stage where we're going to have to be compliant. I mean, it's not like this funny little technology that you play around with when you've got time."
Crowther offered a different perspective, suggesting organizations should identify and empower naturally inclined early adopters rather than forcing adoption universally. "You can't stimulate passion with this. It's either you have it or you don't, I think."
Hansen proposed embedding AI tools directly into business processes and workflows, rather than offering them as optional tools. "When we have it as like an ancillary, take-or-leave [tool] that you could use if you want to or not... there's less adoption and less diversity."
Accountability, Documentation, and the Attribution Problem
Audience questions turned to practical governance issues, and the panel tackled the thorny questions of accountability in agentic AI systems and documentation of AI workflows.
If an AI agent logs into a portal and performs actions, whose credentials should be used? The consensus was clear: it's always the person. "At the end of the day, the person has to sit in front of the auditor," Crowther stated. "The person who is accountable... their details need to be the one that are logged in."
Weatherall acknowledged this might not be the permanent answer: "I do think we're emerging into a future where it is going to become a bit fuzzy, right? That's the same question about risk, liability, intellectual property, who owns the outputs?"
On documentation requirements, the panel confirmed that AI usage should be documented, with prompts and code treated as part of work instructions and quality management systems for anything used in developing products. However, Crowther admitted candidly: "In terms of individual use cases, I think it's the Wild West... if they don't document it, we don't see it. So unfortunately, it's a bit of a high risk as well."
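The panel did not prescribe a specific mechanism, but one way to picture such a record is a simple append-only log that ties every AI action to the accountable person and preserves the prompt alongside a pointer to the reviewed output. The sketch below assumes a JSON Lines file; the field names and the log_ai_action helper are invented for illustration, not any vendor's API.

```python
# Minimal sketch of a traceable AI-usage record, attributed to the
# accountable human rather than the agent. All names are hypothetical.
import json
from datetime import datetime, timezone

def log_ai_action(log_path, accountable_user, tool, prompt, action, output_ref):
    """Append one AI-usage record to a JSON Lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "accountable_user": accountable_user,  # the person who sits in front of the auditor
        "tool": tool,
        "prompt": prompt,                      # prompts treated as work instructions
        "action": action,
        "output_ref": output_ref,              # pointer to the human-reviewed output
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_action(
    "ai_usage_log.jsonl",
    accountable_user="j.smith",
    tool="site-optimization-agent",
    prompt="Rank candidate sites for the study against the enrollment criteria.",
    action="generated site shortlist",
    output_ref="sites/shortlist-v3.csv",
)
```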
Preserving the Human Element
The panel concluded with perhaps the most fundamental question: How do companies know when to stop using AI and maintain human decision-making, especially for strategic choices?
Hansen again invoked the concept of responsible AI: "I don't think you could replace all of that experience with a tool at this point. What it can do for you is get you maybe 70% of the way. And then you still need the expert at the end making the decisions, oversight and all of that."
Weatherall offered an inspiring perspective on human uniqueness: "Keep doing the things that make it uniquely human—meet, brainstorm, chat by the water cooler, do all these things that kind of make human innovation work that is incredibly difficult to reproduce with machines."
He added: "We're all wonderfully irrational, aren't we? I think we can hand the rational reasoning to the machines. That's fine. But I think what we should rejoice in is the fact that we can make decisions with feeling, emotion, and irrationality."