AI in the Clinical Space: What 2030 Looks Like
By Clinical Research News Staff
April 9, 2026 | Clinical research is entering a phase where “using AI” is no longer the headline; redesigning clinical operations is the real work. Mike Sullivan, global head of IT for development operations at Bristol Myers Squibb, joins The Scope of Things to discuss a persistent industry problem: clinical insight latency, the long delay between data being generated at sites and meaningful decisions being made by sponsors. Compliance concerns, rigid standard operating procedures, and risk aversion can block the kind of clean-sheet thinking required for AI in clinical trials to deliver real value rather than incremental automation.
“The fear is preventing folks from really rethinking how processes should be completely torn down and rebuilt in the face of AI,” says Sullivan. At the recent SCOPE Summit, he gave a presentation that highlighted four pillars of what AI and clinical operations could look like in the next five years.
The first pillar is autonomous clinical workflow powered by AI agents. Instead of deploying isolated tools that generate a document or summarize a protocol, agentic workflows aim to plan and execute multi-step processes across systems, then escalate decisions to humans only at the moments that truly need judgment. This is the difference between “AI assistance” and orchestration. For clinical operations teams, the promise is fewer manual handoffs, less rework, and faster cycle times, while keeping humans in the loop to resolve ambiguity, approve tradeoffs, and manage exceptions that affect patients, quality, and regulatory readiness.
The second pillar is adaptive, machine-readable protocols. When protocol content becomes structured data rather than a static document, teams can simulate designs earlier, quantify patient and site burden, and test feasibility before locking in complexity that later harms enrollment and execution. Multimodal AI and neuro-symbolic AI help connect text and structured data with logic and evidence, making outputs more explainable. When the protocol becomes software, automation becomes scalable, measurable, and easier to govern.
The third pillar explores digital twins as a way to forecast real-world trial experience across sites and participants. Sponsors can model operational impact before launching, while sites can anticipate capacity constraints, staffing needs, and patient flow, especially for complex modalities. But this quickly raises trust and privacy issues, which is why federated, privacy-preserving learning becomes central to the future of AI in clinical research. Training models where the data lives can unlock broader insights without moving protected health information across borders, yet it demands strong governance, clear permissions, and a shared industry commitment to responsible data collaboration.
The fourth pillar, the “clinical ops nirvana,” is zero-latency data and continuous quality, where data issues self-identify and self-correct. The claim is not simply that data can move faster, but that AI can compress the messy middle: normalization, standardization, synthesis, analysis, and transformation into insight. This pairs with a human story about work itself. Some tasks will be displaced, but new roles will expand around ethical oversight, system governance, exception handling, strategic design, and relationship leadership. The durable skill is learning, unlearning, and relearning so teams can move from hands-on-keyboard work to higher-value reasoning supported by better, faster data.
To learn about how AI will affect the job market and get the latest news on a new playbook for pediatric clinical trial reporting, lithium treatment for verbal memory decline, and more, listen to The Scope of Things podcast.