The Practicalities Of Real-World Pragmatic Research

By Deborah Borfitz

June 8, 2021 | The “explanatory-pragmatic continuum” was a major theme at this year’s Colorado Pragmatic Research in Health Conference (COPRH Con), the second of three global annual meetings focused on studies conducted in a real-world context. Speakers discussed the distinction between traditional clinical trials, which measure the efficacy of a drug in support of policy decisions, and pragmatic ones, which look at the effectiveness of interventions designed primarily to change the behavior of healthcare providers and patients. Few trials are wholly one or the other, as evidenced by the expanding number of use cases.

The opening keynote on the implementation and conduct of pragmatic research was given by Brian Mittman, Ph.D., senior scientist in the department of research and evaluation at Kaiser Permanente of Southern California, and it covered many of the basics. The goal is not to arrive at a binary yes-or-no decision under idealized conditions but to understand when and under what circumstances an intervention works and how to scale that knowledge, he says.

COPRH Con was co-sponsored by the Adult and Child Consortium for Health Outcomes Research and Delivery Science (ACCORDS) education program at the University of Colorado School of Medicine and the Colorado Clinical and Translational Sciences Institute. ACCORDS is the recipient of a three-year conference grant from the Agency for Healthcare Research and Quality.

The yardsticks of success in pragmatic research are whether an intervention improves outcomes and the study accounts for the heterogeneity of patients, “the other 85%” that strict eligibility criteria of a traditional clinical trial would exclude from participation, Mittman says. Implementation science principles and approaches are integral to pragmatic research, where the interventions can be complex with multiple components that get flexibly deployed.

The intervention needs to fit the context—or adapt to it—and complementary actions may be needed to address issues related to staff motivation and low-resourced and -functioning settings as well as to maximize reach, adoption, and sustainment, he says. Healthcare practitioners, health system leaders, and patients are all actively engaged so the intervention reflects their values and preferences.

Pragmatic clinical trials are well suited to organizations endeavoring to be learning health systems focused on continuous improvement and innovation where best practices get embedded in the care process. Their successful implementation requires multilevel implementation strategies, Mittman notes, since many if not most health issues and care delivery problems are caused by factors on multiple levels.

Importantly, internal and external stakeholder groups need to be identified as a starting point for identifying the implementation barriers, he adds. “Health-related policies, practices, and behaviors are highly stable, slow to change, heterogeneous and dynamic, and multifaceted,” and failing to recognize that can result in a marginal return on efforts. The barriers that “keep the boulder in place” are many, as are the requirements for practice change.

Pragmatic research is focused on the impact or outcome, which makes complex health interventions the answer “sometimes… [or] it depends,” Mittman continues. “Effects occur through multiple pathways that are highly mediated and moderated.” Only with a highly robust intervention that is a “clear yes,” such as adherence to a clinical practice guideline, would strong effects and moderate site-to-site variance be expected.

An intervention targeting low rates of human papillomavirus vaccination is another matter entirely, he says. No amount of effort on the part of the health system will lead to significant improvement if the root cause is parental resistance, or a belief that doctors are financially incentivized to prescribe the vaccine, and those community norms are not being addressed.

Complex health interventions will of necessity change across sites and time to maintain their fidelity, says Mittman. This is in part because organizations themselves change.

Sites may also become more supportive of a practice over time, or organizations may mandate it. In pragmatic research, “high-performing sites… will be the more representative sample” and the timeframe for expected effectiveness is three to five years down the road, he notes.

Among his recommendations for conducting pragmatic research are to begin by determining what influences the outcome (e.g., leadership support) and to embrace and guide adaptability rather than trying to suppress it. He advises rethinking the “core components” of an intervention, perhaps by making them optional for some sites and not others or by changing the means of delivery to meet local circumstances, and focusing on function (e.g., patient education) rather than form (e.g., printed material or peer education).

He points specifically to the methodology standards of the Patient-Centered Outcomes Research Institute (PCORI). Guidance is readily available on iterative, rapid-cycle evaluation and improvement, and use of real-time measurements, feedback, and refinements.

Among the outstanding challenges are embedding research in academic settings and broadening study aims beyond the does-it-work question.


Four Use Cases

During a subsequent panel discussion on pragmatic research in real-world clinical and community settings, a Substance Abuse Treatment to HIV Care (SAT2HIV) hybrid trial was briefly discussed by Bryan Garner, Ph.D., senior implementation research scientist with the nonprofit research institute RTI International. The study tested the utility of a motivational interviewing technique to help AIDS service organizations address substance abuse disorders and the potential of an organizational-level implementation intervention (involving training, workshops, and coaching) to enhance the effectiveness of the strategy.

The idea was to “adopt and test” rather than “adopt and assume” that the evidence-based practice would move the needle, based on outcome measures that included days of primary substance abuse over the past 28 days, Garner says. In the exploratory phase, 39 organizations across the U.S. were clustered into one group following the strategy used by Addiction Technology Transfer Centers (ATTC) or another that added on the Implementation & Sustainment Facilitation (ISF) intervention.

The pragmatic trial took nearly three years, with 85% of staff able to sustain use of the intervention, he says. “The ATTC strategy was sufficient to help staff demonstrate proficiency, but ISF significantly improved the extent of implementation effectiveness and sustainment.” A second trial is now planned that will introduce a pay-for-performance component to the combined ATTC-ISF strategy.

Erin Kenzie, Ph.D., a senior research associate at Oregon Health & Science University, reports on a pragmatic study called ANTECEDENT that uses causal loop diagramming to capture the relationships and dialogue in primary care that motivate patients to make positive changes affecting outcomes. The diagrams show which changes correlate with the quoted words spoken by clinical staff, and the hope is to identify change over time, she says. “This is uncharted territory... a new way to engage with that information.”

The approach takes time as well as expertise, she adds, since individual interviews are being conducted. The interview data are used to build simulation models that can serve as a framework for the clinician-patient relationship.

James Pittman, Ph.D., an assistant clinical professor at the University of California, San Diego (UCSD), discussed implementation of eScreening for suicide prevention among veterans who enroll in the Veterans Affairs (VA) medical system. The web-based triaging tool is integrated with the electronic health record and is in the process of being implemented, tapping a quality improvement methodology (Lean Six Sigma Rapid Process Improvement Workshop, or RPIW) already being utilized by the VA, he says.

A multidisciplinary, geographically diverse group developed a playbook covering all the key considerations—including training, communication, and measurement—for how to effectively introduce eScreening into the workflow of any clinical project, Pittman says. The playbook was piloted at two new eScreening sites; the one that went through the RPIW process had “less problems with workflow and staffing” than the comparator site. The methodology will now be explored as part of a multi-step hybrid trial, he adds.

Nicole Stadnick, Ph.D., assistant professor at UCSD, referenced two COVID-19 “meaningful communication engagement” methods—Share, Trust, Organize, Partner: The COVID-19 California Alliance (STOP COVID-19 CA) and Community-driven Optimization of COVID-19 testing to Reach and Engage underserved Areas for Testing Equity (CO-CREATE)—to advance testing and vaccine uptake in underserved communities. Both involve the Theory of Change, a methodology for explaining how and why a desired change is expected to happen.

The efforts engaged a 33-member community advisory board (CAB) and two district-level policy partners in a needs assessment of concerns in different communities. Importantly, interactions happened in both English and Spanish, and all materials were translated accordingly, she says. Fifteen meetings have been held virtually over the past six months, using means that included Zoom breakout rooms and Miro, an online whiteboard tool.

Meetings were held in the 4:30-to-6:30 timeframe and ended with “insightful and informative” reflections, Stadnick shares. Scribes and facilitators were used, and CAB members were compensated for their time.

The project was effectively a “logic model on steroids,” says Stadnick, literally transforming a generic theory of change into a color-coded one focused on hoped-for outcomes. During breakout sessions, groups sorted through different categories of ideas (e.g., access to truthful information sources) and ranked them in order of importance. The next step is to have the CAB evaluate implementation.


Patient-Reported Outcomes

Shelley Vanderhout, Ph.D., with child health evaluative sciences at the Hospital for Sick Children Research Institute in Toronto (Ontario, Canada), spoke briefly about the prevalence and factors associated with the use of patient-reported outcomes (PROs) in pragmatic research. PROs are subjective measures of health, well suited to pragmatic clinical trials, which are intended to guide clinical decision-making in usual care settings.

The use of PROs in pragmatic randomized clinical trials is not well understood, Vanderhout says. A MEDLINE search turned up 415 registered, health-focused pragmatic trials, 35% of which used PROs as a primary outcome measure and 22% as a secondary measure. Only in 9% of cases did trial descriptions indicate patients were involved in the research process, she notes; in all other cases it was either unclear or not reported.

The types of PROs used were for reporting symptoms, a health behavior, or quality of life, she continues. Factors associated with use included the study being conducted in Europe and for a behavioral intervention. Lower prevalence of use was linked to the study being done in a low-income country and being industry-funded. Patient and stakeholder engagement was the same for individual and cluster randomization studies.  

Reporting is a significant challenge due to resource constraints, Vanderhout says. When PROs are used as a secondary outcome, those results rarely get reported, although the information is still important to capture.


Stakeholder Engagement

Engaging diverse stakeholders in complex environments was the topic of a plenary address by Mónica Pérez Jolles, Ph.D., assistant professor at the University of Southern California’s Gehr Family Center for Health Systems Science. The rationale is to increase the quality of research and the implementation process, speed translation of evidence into practice, address social justice, and to give voice to under-represented and disenfranchised groups.

Key challenges are the scarcity of comparative effectiveness research on patient engagement, inconsistent findings, and a lack of reporting on the tactics deployed, Jolles says. She advocates for context-dependent engagement of stakeholders falling into the categories of policymakers, researchers or funders, product makers, payers, purchasers, providers, and patients and consumers. “One size does not fit all.”

Pragmatic researchers might borrow the idea of “concept mapping” from the world of implementation science, notably the ERIC (Expert Recommendations for Implementing Change) list for selecting strategies best suited to different settings (Waltz et al., 2015). More than 70 implementation strategies—e.g., build a coalition or capture and share local knowledge—have been organized into clusters based on their importance and feasibility.

Relationships with stakeholders need to be established well before research begins, says Jolles, who recommends hosting focus groups and inserting their quotes into grant proposals. Familiarity with the PCORI’s approach to research and funding is a good starting point, she adds. The Clinical and Translational Science Awards program of the National Center for Advancing Translational Sciences is another rich resource for pragmatic researchers. 

The best means for creating meaningful relationships comes down to “a good cup of coffee” and giving stakeholders greater control over the agenda, Jolles says. In one project with a Federally Qualified Health Center, engaged Latinos were tasked with, and credited for, writing the script for a video as part of the implementation strategy.

When surveyed on the question of building a trustful liaison with research partners, Jolles’ colleagues offer this: include them “every step of the way,” including when designing the protocol, collecting data, and disseminating information; seek the input of those with other skillsets, even on topics where they have no experience (e.g., why diabetes is so hard to control); and be sure the research will lead to meaningful conclusions that consider the limitations and generalizability of findings. 

Jolles concludes with her own set of recommendations, which include holding ongoing, 40-minute meetings with a diverse group of stakeholders that might include people from the IT department, medical records, healthcare providers, managers, schedulers, and receptionists. This is partly to learn the “culture and climate” of an organization, but it is also an answer to turnover that could disrupt progress.

It may also make sense to grow a group’s composition more organically, Jolles shares. In forming one parents group, for example, she began by meeting with one key leader recommended by the clinic and enlarged the group based on that leader’s suggestions.

Healthcare professionals and community groups should probably meet separately as a matter of practicality, she adds. Meetings with clinicians will likely be short and held first thing in the morning or during their lunch time, while with others it might be late afternoon with childcare provided.
