It Took A Village To Speed Pfizer COVID-19 Vaccine Into Arms
By Deborah Borfitz
February 25, 2021 | Deployment of clinical trials for Pfizer’s COVID-19 vaccine candidate was the fastest in the company’s history, and possibly industry-wide. Six weeks after the decision was made to run the combined phase 1/2/3 study, the first patient was enrolled and, six months later, it had all the safety and efficacy data needed to request Emergency Use Authorization (EUA) from the U.S. Food and Drug Administration (FDA) to start getting the vaccine into people’s arms.
The feat was accomplished not by taking procedural shortcuts but by eliminating the “white space,” or downtime in making progress on operational components of the study “based on iterations of the draft protocol rather than waiting until it was finalized,” says Demetris Zambas, head of data monitoring and management for biometrics and data management in global product development at Pfizer.
Zambas will join five of his Pfizer colleagues at the 2021 Summit for Clinical Ops Executives (SCOPE), being held virtually March 2-4, to share how they achieved this breakthrough goal. The perspectives of study clinicians, study monitors, data management, statistical programmers and analysts, and the technology support team will all be represented during their plenary keynote.
“A lot of activities are triggered by the trial protocol,” Zambas says, citing the design of data collection methods as an example. “Obviously, you can’t conclude those designs until you have the final protocol, but from a process perspective that doesn’t mean you can’t start [working on them] at risk earlier.”
To execute a day of activity takes preparation, review, and possibly time in the queue if a new standard is required—and may well affect the cadence of other study imperatives ranging from source data verification to statistical reporting and analysis, says Zambas. So, company leaders made accelerating the pace a top priority across the organization and instructed core clinical operations teams to think strategically and collaboratively about how to shorten the usual timelines while ensuring the study would deliver quality data.
In some cases, it was simply a matter of standardizing or automating a process, says Zambas. But more broadly, a “rich library of lessons learned” was amassed from folks on the ground about how to speed study startup. “We’re now assessing the feasibility of implementing each one. There’s no going back.”
It probably goes without saying that not every clinical study can proceed with the level of human resources and capital investment that was almost reflexively directed at a vaccine targeting a global pandemic, says Zambas. “I had people calling me up, volunteering to take anything off the vaccine team’s plate that was not a critical deliverable. We definitely had ‘a village,’… but culturally, that is not sustainable, even if people are willing to.”
Process automations, however, are highly scalable, he continues. A good example is a machine learning tool called Smart Data Query, which was in pilot testing last year and is now fully implemented across most new clinical studies. For the COVID-19 vaccine trial, it helped shave a month off the time it normally takes to clean up patient data so the results could be analyzed.
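Smart Data Query is a proprietary Pfizer tool, so its internals are not public. Purely to illustrate the general idea of auto-flagging suspect records for cleanup, here is a toy rule-based sketch (all field names and plausibility ranges are hypothetical):

```python
# Illustrative sketch only: a toy checker that flags implausible values
# as candidates for a data-management query. The real Smart Data Query
# tool is machine-learning-based and proprietary.
from dataclasses import dataclass

@dataclass
class Record:
    subject_id: str
    field_name: str
    value: float

# Plausibility ranges per field (hypothetical values).
RANGES = {
    "temp_c": (34.0, 42.0),
    "age_years": (16.0, 90.0),
}

def flag_queries(records):
    """Return records whose values fall outside plausible ranges."""
    flagged = []
    for r in records:
        lo, hi = RANGES.get(r.field_name, (float("-inf"), float("inf")))
        if not (lo <= r.value <= hi):
            flagged.append(r)
    return flagged

records = [
    Record("S001", "temp_c", 36.8),
    Record("S002", "temp_c", 368.0),   # likely a decimal-point entry error
    Record("S003", "age_years", 29.0),
]
print([r.subject_id for r in flag_queries(records)])  # ['S002']
```

The point is only that machine checks can surface the obvious errors before a human reviewer ever sees the data, which is where the month of cleanup time was saved.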
Pfizer also accelerated data flow internally and from third-party partners by doing data refreshes multiple times per day. “It was such a big study that if you fell behind by even a couple of days, you could never catch up,” says Zambas.
Source data likewise got uploaded daily for the vaccine study, he adds. Where it makes sense (e.g., where the data is generated daily), that process automation could become a fixture of future trials.
Reporting dashboards with critical operational metrics also started getting monitored in a “more live fashion” during the vaccine study, Zambas says. He expects they will be a feature of the “new normal” for all studies moving forward.
The protocol itself defined the critical analysis and data needed to demonstrate the safety and efficacy of the vaccine, Zambas says. Industry-wide, clinical studies have become bogged down with a lot of extraneous information that can have a ripple effect across functional areas without adding any substantive value.
If a protocol calls for a lot of tertiary analyses—hence increasing output tables, lists, and figures—that affects what study investigators need to collect from subjects, as well as the volume of data in-house teams need to monitor and clean, factor into programming, and analyze. “It affects everyone,” says Zambas.
The vaccine trial saw notable improvement on the enrollment diversity front, adds Zambas, pointing to a company-wide focus on making trial populations more representative of patients who will use its products. “Industry as a whole has not done the best job at increasing diversity historically.”
Helpful tactics include outreach to community groups and identifying new clinical trial sites to include in the study. Some may need only to be trained to conduct research and others may be established sites that can be invited to participate based on their location and diverse clientele.
Library Of Lessons Learned
The library of lessons learned is “massive” as a function of how many roles, technologies, and processes are involved in executing a clinical trial, says Zambas. Some can be easily implemented while others may require process changes, technical updates, or both.
One suggestion for avoiding costly redundancies, which he cites as an example, addresses study setup for data collection technologies such as electronic data capture and electronic patient-reported outcomes. “If you wait too late in the game to do that, you are not learning as you go and end up, in a single attempt, interpreting the protocol into a collection method or design of a patient-reported outcome or sensor reading. You are never going to get it in one iteration; you are going to do more full-blown designs and builds and then [modify them].”
The recommendation was to instead build out the components as they are being defined in the protocol, through iterative prototypes, Zambas says. Contrary to popular belief, that should create fewer rather than more changes because the study team never heads down a wrong path. If team members are communicating correctly, “[they] will solidify components of that protocol before [we] solidify the whole thing.”
Protocol amendments were related to expansion of the study—for example, accommodating the inclusion of younger age groups, says Zambas. Cross-functional sharing of draft iterations of the protocol, and each of its components, “allowed us to stay in lockstep.”
The protocol was “posted online for the whole world to see and designed to [seamlessly] evolve from phase 1 to phase 2 to phase 3,” he says. “Different doses went into phase 1, the optimal ones went into phase 2, and the selected one with the best potential efficacy and safety profile went into phase 3, but all of that was [captured] in one protocol.”
Zambas has previous experience with this type of adaptively designed study. While working at Schering-Plough more than a decade ago, he was involved with the operation of the combined phase 1, 2 and 3 study for Keytruda (subsequently taken over by Merck).
The underlying rationale for adaptive trials is to speed study execution by predefining the conditions up front, including planned adaptations that are triggered by observed interim data, explains Zambas. “As you get to those points, depending on the study design, the next stage of the protocol is triggered.”
None of the usual steps are skipped with adaptive clinical trials, he notes, but “operational overhead” is saved by not writing three separate protocols and going through multiple review processes. “It takes the bureaucracy out of it but leaves the science completely intact.”
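The predefined-trigger idea can be sketched in a few lines. Everything below is hypothetical (doses, thresholds, interim data); a real protocol specifies these conditions statistically and in advance, which is exactly what makes the adaptation pre-planned rather than ad hoc:

```python
# Toy sketch of a pre-specified adaptation rule in a seamless
# phase 1/2/3 design. All doses, thresholds, and data are hypothetical.

def select_doses_for_next_phase(interim_results, safety_max, immuno_min):
    """Apply the pre-specified rule: carry forward only doses that meet
    the safety ceiling and the immunogenicity floor at the interim look."""
    return [
        dose for dose, (adverse_rate, titer) in interim_results.items()
        if adverse_rate <= safety_max and titer >= immuno_min
    ]

# Hypothetical phase 1 interim data: dose -> (adverse-event rate, antibody titer)
phase1 = {"10ug": (0.02, 80), "30ug": (0.03, 160), "100ug": (0.15, 170)}

phase2_doses = select_doses_for_next_phase(phase1, safety_max=0.10, immuno_min=100)
print(phase2_doses)  # ['30ug']
```

Because the rule exists before any data is seen, hitting the interim milestone simply triggers the next stage; no new protocol has to be written or re-reviewed.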
The industry will not necessarily be collapsing phase 1, 2 and 3 trials into a single protocol in every program in the future, Zambas says, because the design requires a lot of upfront work and the development path cannot always be predetermined. Executing such a protocol for a vaccine is “a little simpler” than an oncology trial dependent on multiple complex endpoints.
Every pharmaceutical company has a team that might meet once a month or every few weeks to discuss the conduct of a clinical study, but at Pfizer the study team for the COVID-19 vaccine program met daily at 8 a.m., says Zambas. The daily touchpoint was intended to solve for anything that was needed by a member of the team to get to study launch or address conduct needs sooner. The group “used the KISS [keep it simple] principle and focused on what matters,” he adds. Everyone would wear T-shirts to the meetings reading, “Science will win,” a catchphrase coined by CEO Albert Bourla that has since been trademarked.
The cross-functional vaccine study team referenced several preliminary versions of FDA guidance on EUA for vaccines to prevent COVID-19 before the final document was issued last October, notes Zambas. At that point, “we went back and compared and adjusted accordingly to make sure whatever we were going to submit would meet those asks.” The FDA had previously announced a 50% floor on vaccine efficacy, which Pfizer’s vaccine easily cleared at roughly 95%, as reported in the peer-reviewed New England Journal of Medicine (DOI: 10.1056/NEJMoa2034577). But the final guidance also laid out the agency’s safety requirements, including two months of data after the second dose for half of the study population, so the study team knew precisely how to calculate when the data cutoff needed to be.
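As a quick check on that efficacy figure: vaccine efficacy is conventionally one minus the ratio of attack rates in the two arms. Using the case split reported in the NEJM paper (8 cases in the vaccine group versus 162 on placebo, in roughly equally sized arms), a back-of-the-envelope calculation reproduces the ~95% figure; the published 95.0% additionally adjusts for surveillance time, which this rough version ignores:

```python
# Back-of-the-envelope vaccine efficacy from the published case split
# (8 vaccine-arm cases vs. 162 placebo-arm cases). The arm sizes here
# are approximate; with equal arms they cancel out of the ratio.

def vaccine_efficacy(cases_vaccine, cases_placebo, n_vaccine, n_placebo):
    """VE = 1 - (attack rate in vaccinated / attack rate in placebo)."""
    ar_v = cases_vaccine / n_vaccine
    ar_p = cases_placebo / n_placebo
    return 1 - ar_v / ar_p

ve = vaccine_efficacy(8, 162, 18_000, 18_000)
print(f"{ve:.1%}")  # 95.1%
```

The 50% floor the FDA set is far below this, which is why the efficacy result was never the gating factor for the EUA submission; the two-month safety follow-up was.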
Editor’s note: Demetris Zambas will be moderating the live plenary presentation, “Delivering When It Matters, COVID-19 Edition: Pfizer’s Vaccine Trial Execution Strategies and Technology-Enabled Operations,” on the opening day of the 2021 SCOPE virtual event on March 2. The unifying theme across presenters, he says, will be “one goal, one drive, and constant communication across functions.”