Regulatory Proposals Good Match to Priorities of New FDA

By Deborah Borfitz 

August 13, 2025 | Regardless of how one views the priorities of the Food and Drug Administration (FDA) under the new administration, it’s an opportunity for badly needed change to “shake things up,” according to Doug Bain, founder and consulting partner of UK-based ClinFlo, a newly launched consulting and service company focused on the application of technology to improve clinical research. The clinical trial data innovation veteran penned a “regulatory blueprint for digital trials” that aims to deliver on many of the proposals recently announced by the agency (JAMA, DOI: 10.1001/jama.2025.10116). 

Bain’s recommendations, written as a blog post, primarily focus on 21 CFR Part 11, the oft-referenced but outdated regulation published in the 1990s during the early age of the internet. Among his six priority adjustments for modernizing the FDA is a new “HyperTrial” designation with criteria-based eligibility, such as continuous data flow and the use of digital systems, to run studies at the legendary speed of COVID trials. 

Further, Bain advocates for the certification of “Trusted Third Parties” (TTPs) to act as neutral custodians of clinical trial data. “These entities would maintain a system and single authoritative dataset accessible by sponsors, CROs [contract research organizations], investigators, and patients—minimizing duplication, reducing bias, and enabling real-time collaboration without any one stakeholder controlling the system,” he says. The term “source” could perhaps be retired in regulatory contexts, with the trusted, accurate data overseen by TTPs serving as everyone’s point of reference for a study. 

Simply adopting modern notions of what constitutes a “certified copy” is perhaps the lowest-hanging fruit, says Bain. “The idea that the investigator has got to do some form of [physical or manual] signature is just crazy” given that it “inhibits the use of secure, scalable, and cryptographically signed digital records that are more secure and traceable than paper.” Software could be validated as capable of copying information, a task at which it has proven quite adept. 
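
The certified-copy point can be made concrete with a small sketch. This is an illustrative example only, not a regulatory-grade implementation and not something Bain has specified: it fingerprints a record with a hash and attaches a signature so any copy can be verified without a manual signature. The key handling, field names, and use of a shared secret are assumptions made purely for illustration.

```python
# Hypothetical sketch: certifying a digital copy by hashing the record and signing
# the digest. Uses only Python's standard library; a real system would use
# asymmetric keys, audit trails, and validated software.
import hashlib
import hmac
import json

SIGNING_KEY = b"investigator-held-secret"  # placeholder key material

def certify_copy(record: dict) -> dict:
    """Return the record plus a tamper-evident certification envelope."""
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"record": record, "sha256": digest, "signature": signature}

def verify_copy(envelope: dict) -> bool:
    """Check that a copied record still matches its certified digest and signature."""
    payload = json.dumps(envelope["record"], sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == envelope["sha256"] and hmac.compare_digest(expected, envelope["signature"])
```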

The way FDA regulations are written makes the practical adoption of these seemingly common-sense approaches difficult if not impossible, Bain says. Even risk-based monitoring, which would be a necessary feature of COVID-speed clinical trials, remains a choice. Although technology enabling continuous data monitoring has been around for at least 15 years and is known to improve trial efficiency and data quality while maintaining participant safety, it has not been sufficiently supported or encouraged through regulatory guidance. 

On-the-Ground Realities

Bain is no stranger to the clinical trial ecosystem. Prior to founding ClinFlo, he was chief technology officer for KCR, a mid-sized CRO. For more than two decades before that, he worked for technology vendors as well as in consulting roles on behalf of companies and independent of them.   

His five years with KCR were particularly advantageous, Bain says, since they revealed striking differences between “rosy ideas” about how clinical trial technology is going to be applied and the on-the-ground realities of end users. His mission now is to foster “honest dialogue” about both the upsides and downsides of proposed solutions—including his own.  

It’s hard to argue with the priorities put forth by the new FDA Commissioner Marty Makary, M.D., and Director of the Center for Biologics Evaluation and Research Vinay Prasad, M.D.: accelerating cures, unleashing artificial intelligence (AI), healthier food for children, harnessing big data, and addressing the high price of drugs. But what that really means in places where the real, practical work gets done remains to be seen, says Bain. 

The FDA’s published viewpoint came out only months after Bain penned his article, which he wrote after pondering the regulatory changes he believed would accelerate clinical trials. Despite significant disruptions happening at the FDA as well as the National Institutes of Health, Bain is quick to point to the window of opportunity that has opened for regulators as well as industry. 

His end game is to spur real-life case studies to test some of his ideas for accelerating trials, which could feed into future regulations. If all goes as planned, studies currently taking five years to complete might instead be wrapped up in two or three years.   

Part of the problem is that 21 CFR Part 11 is “a bit stale and out of date,” says Bain. “One of my targets for regulation change is to update that to reflect the potential of new technology, AI, and cloud systems.” When the regulation was drafted, not only was the internet still in its infancy; software for electronic data capture (EDC) was equally new, and single- and multi-tenant cloud computing didn’t yet exist. 

HyperTrial Designation

One of the silver linings of the pandemic was that COVID vaccine trials were dramatically expedited, raising the question of why other critical trials can’t be accomplished at the same rapid pace, says Bain. With that in mind, the concept of the HyperTrial was born. 

The core idea here is to provide an accelerated regulatory and operational framework for studies that meet defined digital-readiness criteria (e.g., continuous data collection, risk-based monitoring, and real-time analytics), not merely those with large budgets, Bain says. 
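
As a rough illustration of criteria-based eligibility, the designation could in principle be reduced to a simple capability check. The criteria named below are taken from the examples in this article and are not a formal proposal.

```python
# Hypothetical eligibility check for the proposed HyperTrial designation.
# The required capabilities listed here are illustrative assumptions.
REQUIRED_CAPABILITIES = {
    "continuous_data_collection",
    "risk_based_monitoring",
    "real_time_analytics",
}

def hypertrial_eligible(declared_capabilities: set[str]) -> bool:
    """Eligibility hinges on digital readiness, not on the size of the budget."""
    return REQUIRED_CAPABILITIES.issubset(declared_capabilities)
```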

These HyperTrials could have a separate set of standards from the ones currently governing all trials, be they investigator-led studies running on budgets of less than $10,000 or Mayo-style trials costing potentially hundreds of millions of dollars, he explains. As with COVID studies, these HyperTrials might optionally run phase 1, 2, and 3 trials in parallel—an approach that would significantly speed things up but could well be too expensive and risky for all but a small subset of companies to undertake.   

All-digital HyperTrials could add speed “by smoothing out and potentially even improving the quality of studies using modern methods,” says Bain. This presumes that the qualifying criteria for the imagined HyperTrial designation include the adoption of risk-based monitoring. 

It is today possible to aggregate clinical trial data—from electronic systems for patient-reported outcomes, clinical outcome assessments, data capture, and labs—meaning the information can collectively be stored, cleaned, and statistically evaluated in near real time, he continues. Statistical analysis plans could be laid out “right at the beginning as part of your protocol and you could implement code to continue monitoring that data to look for the high-water mark for efficacy or the low-water mark for safety, and when that happens raise a flag.” 
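
What Bain describes can be pictured in a few lines of code. The sketch below is a minimal, hypothetical illustration of pre-specified monitoring boundaries checked against each new snapshot of pooled data; the boundary values, the two-proportion z-test, and the argument names are placeholder assumptions, not part of any actual statistical analysis plan.

```python
# Hypothetical continuous-monitoring check: raise a flag when a pre-specified
# efficacy "high-water mark" or safety "low-water mark" is crossed.
from math import sqrt

EFFICACY_Z = 2.8    # assumed efficacy boundary from the analysis plan
SAFETY_RATE = 0.15  # assumed ceiling on the serious adverse event rate

def monitor(responders_trt, n_trt, responders_ctl, n_ctl, sae_count, n_total):
    """Return any flags raised by the latest snapshot of pooled trial data."""
    flags = []
    p1, p2 = responders_trt / n_trt, responders_ctl / n_ctl
    pooled = (responders_trt + responders_ctl) / (n_trt + n_ctl)
    se = sqrt(pooled * (1 - pooled) * (1 / n_trt + 1 / n_ctl))
    z = (p1 - p2) / se if se else 0.0
    if z >= EFFICACY_Z:
        flags.append("efficacy boundary crossed")
    if sae_count / n_total >= SAFETY_RATE:
        flags.append("safety boundary crossed")
    return flags
```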

Technology exists to manifest this scenario and eliminate the lag time due to handoffs between different parties, he says. “But it is frowned on because of the norms and the way in which regulations are described.” 

HyperTrials would logically embrace centralized monitoring, a key component of the risk-based approach, to replace the need for regular on-site study visits that can delay study progress, says Bain. With data remotely monitored across multiple trial sites, it might also be cleaned and assessed all at once using AI. 

Algorithms exist to perform a statistical assessment of the cleanliness of data against historical benchmarks and then adjust the error margin to reflect the level of confidence in the results. Hitting the mark doesn’t stop the study or the randomization process, he notes, but it does allow certain adaptations according to the original plan. 
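
One hedged way to picture such an algorithm: score incoming data against historical cleanliness benchmarks and widen the error margin applied to interim results when confidence is lower. The benchmark values and scoring rule below are invented for illustration and do not reflect any specific vendor's method.

```python
# Hypothetical data-cleanliness scoring against historical benchmarks, with an
# error margin that inflates when the data looks dirtier than the benchmark.
HISTORICAL = {"query_rate": 0.02, "missing_rate": 0.01}  # assumed benchmarks

def cleanliness_score(query_rate: float, missing_rate: float) -> float:
    """1.0 means at or better than benchmark; lower values mean dirtier data."""
    penalties = [
        min(1.0, HISTORICAL["query_rate"] / max(query_rate, 1e-9)),
        min(1.0, HISTORICAL["missing_rate"] / max(missing_rate, 1e-9)),
    ]
    return sum(penalties) / len(penalties)

def adjusted_margin(base_margin: float, score: float) -> float:
    """Widen the interim error margin in proportion to how dirty the data looks."""
    return base_margin / max(score, 0.1)
```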

This in turn presupposes that fully aggregated data is continuously flowing into a common database, Bain adds, although a more “siloed approach” is still most common today. “It’s very much a stage-gate process of getting your data through different steps.” 

Trusted Third Parties 

The use of Trusted Third Parties is recognized in other industries, mostly outside the context of clinical research, says Bain. TTPs are critical to supporting access to pooled, aggregated data across different stakeholder groups in their respective roles in the clinical trial ecosystem. 

Well into the early 2000s, big pharma typically stored its R&D data on-premises in large server rooms. Since then, the data has more often been stored and managed via licensed use of one of the big three cloud-based solution providers—Medidata, Veeva, or Oracle—and CROs have been doing the same thing to run their studies, Bain says. 

No one thought of working as a team around one definition of a trial employing a common data repository from which everyone could pull what they needed, he continues. “[But] there is no reason why sites, CROs, and sponsors should have their own instance of a database and have to transfer the data between these systems.” With a TTP, “no one stakeholder actually controls the software; the sponsor, the CRO, the site, [and] the patient are basically all involved in this one big database.” 

TTPs are the ones who would make the software available and validate that the various parties can only fulfill their roles and functions with the data—be that providing metadata, reviewing and cleaning data, or, if the sponsor, using the information to monitor progress or ensure data quality, says Bain. Having one instance of the data avoids needlessly replicating it, and doing so with a cloud system means it can be done securely with fault tolerance.  
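
A toy sketch of that role-scoping idea follows. The role names and permitted actions are illustrative assumptions rather than any defined TTP standard, but they show how a single shared dataset could be exposed differently to each stakeholder.

```python
# Hypothetical role-based access over one shared trial dataset held by a TTP.
ROLE_PERMISSIONS = {
    "site":    {"enter_data", "correct_data"},
    "cro":     {"review_data", "raise_query"},
    "sponsor": {"view_progress", "view_quality_metrics"},
    "patient": {"view_own_records"},
}

def authorize(role: str, action: str) -> bool:
    """Allow an action on the shared dataset only if the role grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

# e.g., authorize("sponsor", "correct_data") returns False:
# the sponsor can monitor progress but never edits site data.
```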

‘Authoritative’ Data

The current reality is that eSource and EDC systems typically sit alongside one another in a server, with an “artificial barrier” separating the two. “There is no logical reason not to combine them,” says Bain, getting back to his proposal to eliminate the term “source” altogether. 

His reasoning is that the word implies eSource is something distinct from other digital records, reinforces ongoing use of manual source data verification, and upholds the false belief that data closer to its origin is inherently better, Bain says. Back at the turn of the century, he ran nearly 50 clinical trials where eSource was captured in an EDC system and “the rules and regulations were pretty much the same as they are today.” 

Besides the irrational differentiation between source and eSource data, he continues, electronic health records are treated as if they were the “gold standard of data cleanliness,” which they assuredly are not. “Electronic health records don’t have edit checks ... [they] are whatever was keyed in.”  

With EDC, on the other hand, many hundreds of edit checks are generally cross-checking data within the bounds of a protocol, says Bain. “It’s not so much that you’re checking that the data entered into an EDC system matches the source data; you’re checking that the source data is up to scratch [i.e., good enough] with the data inside the EDC system.” 
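
Edit checks of the kind Bain describes are easy to picture. The sketch below shows a few typical cross-checks against protocol bounds; the field names and ranges are entirely hypothetical and stand in for the hundreds of checks a real EDC build would carry.

```python
# Hypothetical EDC-style edit checks run against a single visit record.
def edit_checks(visit: dict) -> list[str]:
    """Return any findings for a visit record keyed by assumed field names."""
    findings = []
    if not 30 <= visit.get("weight_kg", 0) <= 250:
        findings.append("weight_kg outside plausible range")
    if visit.get("systolic_bp", 0) <= visit.get("diastolic_bp", 0):
        findings.append("systolic must exceed diastolic")
    if visit.get("visit_date") and visit.get("consent_date") and visit["visit_date"] < visit["consent_date"]:
        findings.append("visit precedes informed consent")
    return findings
```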

His preference for the term “authoritative” over “source” data is simply an attempt to truthfully define what is being referenced and focus on what is most important, he says. “If an investigator or a site is logging data and it has been through its checks, that to me is the authoritative data that should be the master data and point of reference for the research that has been carried out ... not necessarily the dirty data that has been gathered on the route to achieving clean data within [an EDC] platform.” 

‘Light at the End of the Tunnel’

Bain’s intersecting proposals for modernizing clinical trial regulations wouldn’t necessarily have to be implemented all at once. But that would be ideal, he says, given that 21 CFR Part 11 from the very beginning has sought to ensure that investigative sites have primary control over clinical trial data to prevent sponsors from biasing the data for their own aims. The data validation principles at work within the software system of a TTP would help ensure that trial data remains under the control of sites for as long as intended. 

Among his recommendations are that the FDA “introduce a regulatory framework for certification and periodic audit of third-party platforms acting as custodians of regulated trial data” as well as “enable shared use of platforms across sponsors with consistent validation and oversight.”  

Input on his multiple proposals was provided by regulatory experts—including Ron Fitzmartin, Ph.D., former senior informatics advisor at the FDA—as well as sponsors and technology vendors. There is considerable interest in formally taking some of the ideas forward sooner rather than later, says Bain, including a HyperTrial pilot, since the COVID trials have already demonstrated that the methodology is possible. 

He intends to prove each of the concepts individually but will drop any confirmed not to work. Bain says he hopes to partner with sponsor organizations interested in exploring some of the methods and, together, getting the FDA’s blessing to proceed. “We’ve been feeling bogged down not seeing improvements in throughput or patients enjoying a better experience being involved in a clinical trial ... [but] there is light at the end of the tunnel.” 
