AI’s Promise Hinges on Redesigning Workflows
By Deborah Borfitz
February 10, 2026 | The promise of artificial intelligence (AI) was on full display at last week’s SCOPE event in Orlando, peppering hundreds of presentations focused on the future of clinical research. Analysts have projected that AI in clinical trials will represent an $8 billion business segment by 2030, generating up to $110 billion per year in value to pharma, but “we’re not going to get there unless we redesign our workflows,” according to Mike Sullivan, who heads up IT globally for development operations at Bristol Myers Squibb (BMS).
Sullivan was speaking at the conference on what is achievable with AI between now and then, which could potentially help solve the “clinical insight latency” problem that has long plagued the industry. It’s the same issue faced by London physician John Snow in the 1850s when he discovered that a cluster of cholera deaths was tied to a well pump and not to the bad air the scientific community continued to insist on long afterward.
When it comes to clinical trials, the challenge in closing the gap between “when something happens and when we see it ... and have enough information to act on it” is not access to the technology, says Sullivan, pointing to AI’s ubiquity. At least 88% of organizations already have a business function using AI, although only about 23% are deploying agentic AI (as BMS has done)—meaning, “a lot of bots who are orchestrated together to do something really important for your organization.”
Given the exponential rate of change in modern AI, Sullivan says he is leaving forecasts beyond 2030 to the futurists. The key aspirations he has set for what clinical operations will look like within five years revolve around four pillars, the first of which is “autonomous clinical workflows.”
How work gets done will no longer be linear, manual, or reactive, says Sullivan, and will involve planning multi-step workflows, executing across systems, continuously monitoring outcomes, and escalating only when human judgment is required. The “secret sauce” will be the pushbutton nature of the clinical operations process to, for example, generate statistical analysis and data management plans and monitor how patient enrollment and site activation are going. “Humans just look at the results and the recommendations from AI” and make decisions about the next steps.
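As a rough illustration of that escalation pattern (and not a description of BMS’s actual systems), the Python sketch below runs planned steps, checks the results, and hands off to a person only when a rule flags the need for human judgment. Every name, metric, and threshold in it is hypothetical.

```python
# Hypothetical sketch of "execute, monitor, escalate only when human judgment
# is required." Names, metrics, and thresholds are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], dict]              # executes against some system, returns metrics
    needs_human: Callable[[dict], bool]  # escalation rule applied to those metrics

def run_workflow(steps: list[Step]) -> None:
    for step in steps:
        result = step.run()
        if step.needs_human(result):
            # Surface the result and stop; a person decides the next step.
            print(f"[ESCALATE] {step.name}: {result}")
            return
        print(f"[OK] {step.name}: {result}")

# Example: watch site activation and enrollment, escalating only when
# enrollment falls meaningfully behind target.
run_workflow([
    Step("site_activation", lambda: {"activated": 42, "planned": 40},
         lambda r: r["activated"] < r["planned"]),
    Step("enrollment_check", lambda: {"enrolled": 180, "target": 220},
         lambda r: r["enrolled"] < 0.9 * r["target"]),
])
```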
Among the key technical shifts making this feasible are autonomous agents, agentic architecture, and agentic AI, he adds. But only 5% of companies report that the AI being embedded into their systems has had any material value to them, “so the gap ... is our operating model.”
Redesigning current workflows is imperative if clinical operations teams are to have any chance of delivering on AI’s documented potential, results they will also be held accountable for. The old tenet to “develop a process first, then design the technology to support it” needs to be completely abandoned, stresses Sullivan.
“Now, you should be redesigning process ... with people who understand AI” to determine which specific steps get delegated to AI, breaking the paradigms of legacy structures, he says. “The value will be realized not by bolting on AI to existing longstanding processes,” which are functional but highly inefficient.
Adaptive and Predictive
Sullivan’s second pillar of envisioned clinical operations for 2030 is “the adaptive, machine-readable protocol” as a replacement for a static PDF. Digital protocols have been widely discussed over the past few years and actively adopted by companies that include BMS, says Sullivan, with some parallel evolution in processes.
BMS has also started thinking about how to tailor some of its AI-based solutions “to reach the audiences who will get the most value,” he continues. This includes clinical scientists and clinical trial physicians in terms of the way they enter data and get real-time insights as they’re making decisions, as well as the notion of continuously simulating a protocol to forecast the impact of considered changes.
Multimodal AI—the kind that can read documents, images, and video—will be the key enabler, freeing humans from the work of acquiring, digesting, and synthesizing all that information so people can put their minds to other things, says Sullivan. Neuro-symbolic AI will likewise be important, combining the ability to recognize something (e.g., an image of a cat) with rules to justify a recommendation or insight (e.g., it has whiskers and pointy ears and says meow), and could therefore support decisions on a regulatory pathway.
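To make the cat example concrete, here is a toy neuro-symbolic sketch: a stand-in for a perception model produces feature scores, and explicit rules turn them into a conclusion with human-readable reasons. The scores, thresholds, and function names are invented for illustration and are not drawn from any production system.

```python
# Toy neuro-symbolic pattern: neural-style perception scores plus symbolic
# rules that justify the conclusion. All values here are made up.
def perceive(image_id: str) -> dict:
    # Stand-in for a trained vision model returning confidence scores.
    return {"whiskers": 0.97, "pointy_ears": 0.93, "says_meow": 0.88}

RULES = [
    ("whiskers", 0.8, "has whiskers"),
    ("pointy_ears", 0.8, "has pointy ears"),
    ("says_meow", 0.8, "says meow"),
]

def classify(image_id: str) -> tuple[str, list[str]]:
    scores = perceive(image_id)
    reasons = [text for key, threshold, text in RULES if scores[key] >= threshold]
    label = "cat" if len(reasons) == len(RULES) else "uncertain"
    return label, reasons

label, reasons = classify("img_001")
print(f"{label}: " + ", ".join(reasons))
```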
Protocols could be thought of as “data flowing to the necessary places,” with dashboards no longer required and AI being used “to simulate and drive real change,” Sullivan says.
Pillar 3 imagines “predictive site and patient experience” by 2030, replacing the retrospective views that have long prevailed in the industry, he says. Many vendors are bringing promising early-insight technologies to market, and one near-future possibility is to create digital twins of study sites.
As they receive a potential protocol from a sponsor, sites could have AI run it through a simulation to determine if the enrollment target is too high or aspects of the protocol are too complex, continues Sullivan. They might as a result want to change their clinical trial agreement or charge the sponsor more. On the flipside, sponsors might have a digital twin for a site and “could figure those things out before it’s even a burden for patients ... [and] sites.”
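One simple way to picture a site-level simulation of this kind is a Monte Carlo check of whether an enrollment target is realistic given a site’s historical screening volume and screen-failure rate. The sketch below is illustrative only; the rates, target, and time window are invented numbers, not data from any real site.

```python
# Hypothetical digital-twin-style check: given assumed screening volume and
# screen-failure rate, how often does this site hit the enrollment target?
import random

def probability_of_hitting_target(months=12, screens_per_month=8,
                                  screen_fail_rate=0.35, target=60,
                                  runs=10_000) -> float:
    hits = 0
    for _ in range(runs):
        enrolled = sum(
            1
            for _ in range(months * screens_per_month)
            if random.random() > screen_fail_rate
        )
        hits += enrolled >= target
    return hits / runs

print(f"Chance of reaching target: {probability_of_hitting_target():.0%}")
```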
One of the more controversial aspects of achieving some of these aspirations revolves around federated, privacy-preserving learning. This is where a sponsor regularly runs an algorithm on site data to adjust study protocols, explains Sullivan. “The AI would draw inferences and come to new conclusions based on all the activity and goings-on at the site and then send that insight back home.” The process takes no intellectual property or HIPAA information and completely respects country boundaries, and could offer tremendous benefits to both parties, but it’s questionable how agreeable sites would be to that sort of arrangement.
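A minimal sketch of that federated pattern, under the assumption that sites share only aggregate summaries and never patient-level records, might look like the code below. The variable names and example values are hypothetical; real federated learning typically exchanges model updates rather than simple means, but the data-stays-home principle is the same.

```python
# Minimal federated-aggregation sketch: each site computes a summary locally
# and only that summary travels to the sponsor. Values are invented.
def local_update(site_records: list[float]) -> tuple[float, int]:
    # Runs inside the site's firewall; raw records never leave.
    return sum(site_records) / len(site_records), len(site_records)

def sponsor_aggregate(updates: list[tuple[float, int]]) -> float:
    # The sponsor pools weighted summaries, not the underlying data.
    total_n = sum(n for _, n in updates)
    return sum(mean * n for mean, n in updates) / total_n

site_a = [4.1, 3.8, 5.0]             # e.g., days from screening to randomization
site_b = [6.2, 5.9, 7.1, 6.5]
pooled = sponsor_aggregate([local_update(site_a), local_update(site_b)])
print(f"Cross-site estimate: {pooled:.2f} days")
```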
If it is possible to simulate the patient's journey, it becomes an “ethical responsibility” to do so, Sullivan notes. “There’s one thing that we all bring to patients that we never really talk about and that’s hope. The moment a patient receives a difficult diagnosis and then searches for that first clinical trial ... the candle of hope flickers on.”
That reality dictates that sponsors consider patients’ experience between the time they find a trial and randomize into the study, he says. “We need to eliminate the noise of administrative burden and overhead and the risk of dropout before that patient ever has that flicker of hope because it’s really disappointing” to learn a seemingly ideal trial is being conducted too far from their home or they have a comorbidity that is going to exclude them from participation.
Analysis-Ready Data
The fourth and final pillar of 2030-style clinical operations Sullivan terms “zero-latency data and continuous quality.” Data will no longer be captured and cleaned but be “analysis-ready at the moment of creation ... [because] data quality issues will self-identify and self-correct.”
This is made possible by the compression of raw data into knowledge artifacts, he says. When patients have diagnostics done at a site, the resulting data is automatically standardized, cleansed, and transferred back to the sponsor, where the information gets integrated and analyzed with insights generated instantaneously. “Potentially the site or the sponsor could get a very early signal of ... a risk two visits from now that we need to take proactive preventive measures to help this patient have a better experience.”
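As a rough picture of “analysis-ready at the moment of creation,” the sketch below validates and normalizes a record as it is captured, self-correcting where a safe deterministic rule exists and self-identifying anything it cannot safely fix. The field names, units, and plausibility bounds are hypothetical and not tied to any particular data standard.

```python
# Hypothetical capture-time standardization: correct what a rule can safely
# correct, flag what it cannot. Field names and bounds are invented.
def standardize(record: dict) -> tuple[dict, list[str]]:
    clean, issues = dict(record), []

    # Self-correct: normalize units (weight reported in pounds -> kilograms).
    if clean.get("weight_unit") == "lb":
        clean["weight"] = round(clean["weight"] * 0.4536, 1)
        clean["weight_unit"] = "kg"
        issues.append("weight converted lb -> kg")

    # Self-identify: flag implausible values for review instead of guessing.
    if not 30 <= clean.get("weight", 0) <= 300:
        issues.append("weight outside plausible range; flagged for review")

    return clean, issues

clean, issues = standardize({"subject_id": "001", "weight": 176, "weight_unit": "lb"})
print(clean, issues)
```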
Surprisingly, 72% of sites still manually transfer data between systems, he reports. “We have more to do as an industry ... to make this go away.
“There is no silver bullet process for us to go after,” Sullivan continues. “We must go after those smaller processes and either get rid of them or rearchitect them in some way for the future. That’s how we’ll slowly create the foundation for AI to drive the acceleration that we’re all expecting to see from this.”
Human Quotient
Much focus has been put on the potential job loss accompanying the rise of AI. “People don’t talk about the jobs to be created,” says Sullivan. The World Economic Forum has projected that AI may displace 85 million jobs by 2025 but also add 97 million new roles. “It’s just like when the internet became the internet,” he adds. “We just need to reskill ... not resist.”
Among the new roles for humans are ethical oversight, exception handling, strategic design, system governance, and relationship leadership. The latter refers to a human focused on managing trust, alignment, and judgment in the face of AI ambiguity or uncertainty, explains Sullivan. It’s another key skill that is going to elevate people by having them use more of their brain power and spend less time clicking on their keyboard.
“The front runners, the people who are going to realize all the value of everything we’ve talked about today, are those that are going to redesign their workflows,” he concludes. “You have to blow them up, start from scratch,” he adds, mimicking the unconstrained thinking of a startup building adaptive systems that learn and use AI to surface insights that were never before available, and never at such speeds.
“Let’s get data flowing across the ecosystem,” he entreats. “AI can transfer data so much faster than we’re used to ... we just have to agree to do it, and by the way we don’t need an industry standard.” Progress can happen through demonstrations done by one sponsor, site, and vendor at a time.