The Connectivity Factor: How Interoperability, via AI Agents, Speeds Precision Medicine

Contributed Commentary by Srinivasan Anandakumar, QuartzBio 

July 25, 2025 | What if trial timelines could be cut by years—just by improving how R&D systems and ecosystems talk to each other? In a recent study that used generative AI (GAI) to aid decision-making from an integrated data ecosystem, trial timelines were shortened by more than 12 months.

How is this type of acceleration possible? 

Our experience with biotechnology and pharma clients shows that transforming drug development requires intentionally building interoperability among systems and ecosystems to support the many stakeholders across the precision medicine value chain, and that GAI is what makes this interoperability practical.

Interoperability in Precision Medicine: What Does It Mean? 

In modern drug development, connectedness is crucial for success, especially in decentralized teams. Connectedness means linking people and information so that teams can extract shared insights from data and make faster decisions.

In practice, this means ensuring interoperability of data, technology, systems, and ecosystems through a scalable data fabric architecture. It means connecting data sources, including enterprise data lakes, electronic data capture (EDC), laboratory information management systems (LIMS), clinical trial management systems (CTMS), sponsor-side agentic frameworks, and more, to create a robust precision medicine data value chain.

Interoperability requires connecting data of different modalities: clinical, genomic, imaging, proteomic, and biometric data. It also requires connecting data from multiple vendors, each with different formats, delivery frequencies, and data transfer plans.

Finally, an interoperable architecture democratizes access to data and insights, with easy search and role-specific, secured access for different data consumers.  

Interoperability in Precision Medicine: Why Does It Matter? 

Interoperability leads to the realization of business value across the precision medicine value chain by reducing manual processes, enhancing data reproducibility, and facilitating human collaboration. 

The value chain is currently weakened by productivity challenges, largely because of gaps in trial strategy, execution, and monitoring. While innovation drivers remain strong (the number of drugs in development grew from 3,200 in 2012 to 6,100 in 2022), R&D productivity is not increasing. Only 20% of trials finish on time, and only 13% of assets entering phase I go on to launch.

When we ask, “Can we leverage data to boost R&D productivity?” it's clear that we can do better.

Over the past decade, sponsors have been collecting many more data points; phase III trial data alone has increased by 283.2%. But more data hasn't consistently led to better results—data needs to be qualified for relevance and usage. Currently, 85% of trials use inappropriate data sets, and only 5% of trials are designed with commercial relevance as a consideration.  

There is a “data-value gap”—only 32% of companies report being able to extract business value from their data. To close this gap, we must rethink how data is collected, connected, and consumed by employing FAIR (Findability, Accessibility, Interoperability, and Reusability) principles. 

Unfortunately, interoperability is not traditionally built into drug development processes. Each trial is treated like a “snowflake”—with unique objectives, target outcomes, trial design, metadata, and data formats. Individual snowflakes are growing in complexity, and the resulting siloed processes prolong timelines and increase effort.  

Unlocking Opportunities with an Interoperability-first Approach 

With a foundation of interoperable systems and ecosystems, sponsors can leverage aggregated, cross-portfolio insights to improve drug development. For example, researchers at UCSF reported a new approach to cancer immunotherapy that they uncovered from past clinical trial data.   

Interoperability unlocks opportunities to improve: 

  • Trial Strategy & Design: Better biomarkers to identify likely responders; historical control arms 
  • Trial Conduct: Meeting planned operational milestones    
  • Patient Centricity: Monitoring visit adherence, patient dropout, safety 
  • Regulatory Compliance: The recent implementation of ICH E6(R3) Good Clinical Practice guidelines, taking effect in July 2025, heralds increased scrutiny around data integrity, governance, and chain of custody for data assets including biospecimens. Interoperable systems empower sponsors to streamline adherence without having to monitor compliance of individual point solutions. 

A purpose-built data and analytics framework can thus enable sponsors to recognize immediate value. 

A key component of this framework is a platform for quality data, built on secure infrastructure. This platform should enable seamless data ingestion, storage, version control, and annotation at scale. Tagging files and datasets by gene, protein, gene variant, study, patient, etc., makes it easy to retrieve information. 
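
The tag-and-retrieve idea can be sketched in a few lines. The example below is a hypothetical, minimal illustration of tag-based dataset retrieval, not QuartzBio's implementation; all names, URIs, and tag keys are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """A registered dataset with free-form metadata tags."""
    uri: str
    tags: dict = field(default_factory=dict)  # e.g. {"gene": "KRAS", "study": "ST-001"}

class MetadataIndex:
    """Minimal tag index: register datasets, then retrieve them by tag filters."""
    def __init__(self):
        self.records = []

    def register(self, uri, **tags):
        self.records.append(DatasetRecord(uri, tags))

    def find(self, **filters):
        """Return URIs of datasets whose tags match every filter."""
        return [r.uri for r in self.records
                if all(r.tags.get(k) == v for k, v in filters.items())]

# Hypothetical datasets tagged by gene, protein, study, and modality
index = MetadataIndex()
index.register("s3://trials/st001/rnaseq.parquet",
               gene="KRAS", study="ST-001", modality="genomic")
index.register("s3://trials/st001/proteomics.csv",
               protein="EGFR", study="ST-001", modality="proteomic")

matches = index.find(study="ST-001", modality="genomic")
print(matches)  # -> ['s3://trials/st001/rnaseq.parquet']
```

In a production platform the same pattern would sit on top of versioned, access-controlled storage, but the retrieval logic is the same: intersect tag filters to narrow a large corpus to the relevant files.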

Another critical component is 360° business intelligence along the precision medicine value chain. Sponsors need analytics that illuminate the entire data ecosystem, from sample lifecycle management to exploratory biomarker data analytics and beyond. 

With this framework in place, GAI can transform the way precision medicine teams work, supporting both data management and business intelligence, as I describe next.  

Promoting Interoperability Through AI Agent Networks: How Do They Work? 

Advances in GAI, involving networks of task-specific, semi-autonomous AI agents, drive increased interoperability and bridge the data-value gap. 

To understand how precision medicine AI agents work, let's examine some examples: 

  • Data Management Agents: 
    • Global Library Agents: Interoperable architecture requires governance by global metadata libraries. Study-level library agents enable master library agents to auto-recommend data elements to be added at a platform level, promoting interoperability across the enterprise.
    • Study Onboarding Agent: Interoperability can break down during onboarding because of study-to-study metadata variation. Onboarding agents leverage global libraries and study metadata to infer details of data ingestion, keeping metadata consistent.
    • Data Quality and Data Mapper Agents are trained in precision medicine-specific rules, enabling autonomous data integration and quality control.
  • Intelligence Agents, such as conversational AI-driven sample and biomarker intelligence agents, turn natural language requests from stakeholders (such as translational researchers and biomarker operations teams) into data queries, shortening the time from question to insight. 
  • Auxiliary Agents interact with and are invoked by other agents to provide task- and domain-specific outputs, such as visualization agents that deliver volcano plots, heatmaps, and Kaplan-Meier curves. 

These are just a few examples of AI agents that connect data ecosystems and human stakeholders. 
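
The dispatch pattern behind such an agent network can be sketched schematically. The code below is a hypothetical, simplified illustration of how an intelligence agent might translate a request into a structured query and delegate to an auxiliary visualization agent; the agent names, the stub query format, and the handler logic are all invented for illustration.

```python
# Hypothetical sketch of a task-specific agent network: a registry
# dispatches requests to named agents, and agents may invoke one
# another (e.g. an intelligence agent calling a visualization agent).

class Agent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler

    def run(self, request, network):
        # Handlers receive the network so they can invoke other agents.
        return self.handler(request, network)

class AgentNetwork:
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def invoke(self, name, request):
        return self.agents[name].run(request, self)

def sample_intelligence(request, network):
    # Stub: translate a natural-language question into a structured
    # query (a real agent would use an LLM here), then delegate
    # plotting to the auxiliary visualization agent.
    query = {"entity": "samples", "filter": request}
    plot = network.invoke("visualization", query)
    return {"query": query, "plot": plot}

def visualization(query, network):
    return f"kaplan-meier plot for {query['entity']}"

net = AgentNetwork()
net.register(Agent("sample_intelligence", sample_intelligence))
net.register(Agent("visualization", visualization))

result = net.invoke("sample_intelligence", "samples with EGFR mutation")
print(result["plot"])  # -> kaplan-meier plot for samples
```

The point of the pattern is separation of concerns: each agent owns one narrow task, and interoperability comes from the shared registry and a common request format rather than from point-to-point integrations.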

The Future of Precision Medicine: Hybrid Teams of Human Experts and Domain-Specific Agents 

As Microsoft recently reported, the future of work is shaped by human-agent teams. Precision medicine teams are no exception. To increase productivity, they must prioritize both domain-specificity and natural language capabilities.  

Not only does domain-specific GAI speed time-to-insight two-fold and increase R&D output by over 25% (per QuartzBio research), but it also enhances interoperability across the precision medicine value chain by improving data integration, speeding up processing, and aligning ontologies.

Domain-specific GAI also enhances general GAI capabilities through agent-to-agent interoperability using recently published standards, such as Google’s Agent-to-Agent protocol (A2A) and Model Context Protocol (MCP), empowering agent networks to operate across domains and data sources and respond to diverse stimuli, improving real-time decision-making. Human experts are then free to focus on driving value through acting on prescriptive recommendations from AI agents.  
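
The underlying pattern of agent-to-agent interoperability can be illustrated schematically. The sketch below does not follow the actual A2A or MCP wire formats; it is a hypothetical, minimal example of the shared idea: each agent publishes a machine-readable card describing its capabilities, and peers discover and invoke it through structured messages. All names and message shapes are invented.

```python
import json

# Hypothetical capability-discovery sketch (NOT the real A2A or MCP
# wire format): agents publish a "card" advertising capabilities,
# and a registry routes structured requests to a matching agent.

class PeerAgent:
    def __init__(self, name, capabilities, handler):
        self.card = {"name": name, "capabilities": capabilities}
        self.handler = handler

    def handle(self, message):
        # Messages are plain JSON-serializable dicts.
        return self.handler(message)

class Registry:
    """Directory where agents publish cards and peers discover them."""
    def __init__(self):
        self.agents = []

    def publish(self, agent):
        self.agents.append(agent)

    def route(self, capability, message):
        for agent in self.agents:
            if capability in agent.card["capabilities"]:
                return agent.handle(message)
        raise LookupError(f"no agent offers {capability!r}")

registry = Registry()
registry.publish(PeerAgent(
    "biomarker-intelligence",
    ["query-biomarkers"],
    lambda msg: {"result": f"cohort matching {msg['criteria']}"},
))

reply = registry.route("query-biomarkers", {"criteria": "PD-L1 positive"})
print(json.dumps(reply))
```

Because routing is driven by advertised capabilities rather than hard-coded endpoints, new domain agents can join the network without changes to existing ones, which is the interoperability property the published protocols formalize.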

Sponsors building human-agent teams with an interoperability-first approach will lead this wave of innovation—and early movers will maintain their advantage for many R&D cycles to come. 

 

Srini Anandakumar is the Vice President of Product at QuartzBio, the Precision Medicine Intelligence Company. Srini has over 20 years of experience leveraging data and AI to revolutionize clinical development processes. At QuartzBio, Srini leads the development of innovative software-as-a-service (SaaS) solutions powered by a network of task- and domain-specific AI agents. QuartzBio's first-in-class Precision Medicine AI Agent Platform enables autonomous data ingestion and conversational insights across the precision medicine value chain. Thanks to Srini's leadership and innovation, client R&D teams are shortening the time from data to insights, analytics, and visualizations, accelerating study close and time-to-market. He can be reached at  srini.anandakumar@precisionformedicine.com.  
