Step 5: Select a Study Design
Overview of Study Designs in Implementation Science
The field of implementation science seeks to improve the adoption, adaptation, delivery, and sustainment of evidence-based interventions in healthcare, but doing so requires rigorous study design.
The selection of study design in implementation science is crucial because it directly influences the validity, reliability, and applicability of the research findings. A well-chosen study design ensures that the research effectively addresses the complexities of implementing evidence-based practices in real-world settings. It allows researchers to systematically evaluate the processes, outcomes, and contextual factors that impact implementation.
By selecting an appropriate design, researchers can accurately measure the effectiveness of interventions, identify barriers and facilitators, and provide actionable insights for policymakers and practitioners. Moreover, the right study design helps in balancing rigor with feasibility, ensuring that the research can be conducted ethically and within available resources.
Frequently used study designs to evaluate implementation of evidence-based practices
Quantitative Approaches
Experimental Designs
Experimental designs are used to conduct experiments that aim to test hypotheses or answer research questions. They involve the researcher manipulating one or more independent variables (IVs) and measuring their effect on one or more dependent variables while controlling for other variables that could influence the outcome.
Examples include:
- Randomized controlled trials (RCTs), which are often used to test the effectiveness of interventions in controlled settings.
- Cluster randomized trials (cRCTs), which extend RCTs by randomizing groups rather than individuals, making them suitable for community or organizational interventions.
- Stepped-wedge designs, which introduce interventions to different groups at different times, allowing all participants to eventually receive the intervention while providing robust data on its impact over time.
- Pragmatic trials, which evaluate the effectiveness of interventions in real-world, routine practice settings.
Hybrid designs are particularly notable in implementation science. These designs simultaneously evaluate the effectiveness of an intervention and the implementation strategies used to deliver it. They are categorized into three types:
- Type 1 focuses primarily on effectiveness while gathering implementation data
- Type 2 gives equal emphasis to both
- Type 3 focuses on implementation strategies while collecting effectiveness data
Quasi-experimental designs
Quasi-experimental designs are used when randomization by the researcher is not feasible. These designs help to infer causality by comparing outcomes before and after the implementation of an intervention. Examples include interrupted time series designs, regression discontinuity designs, and difference-in-differences.
Observational designs
Observational designs provide insights into real-world implementation processes and outcomes without manipulating the intervention. Examples include ecological and time-motion studies.
Qualitative Approaches
Qualitative designs
Qualitative designs provide nuanced understanding of the context, barriers, and facilitators of implementation. Examples include interviews and focus groups.
Mixed-method designs
Mixed-method designs combine qualitative and quantitative approaches to provide a comprehensive understanding of implementation processes and outcomes. Examples include convergent design, explanatory sequential design, exploratory sequential design, coincidence analysis, and concept mapping.
How much does it really matter?
Selecting an appropriate study design is crucial because it ensures that the research question is addressed effectively and that the findings are valid and reliable. The choice of design impacts the researcher’s ability to control for confounding variables, the generalizability of the results, and the depth of understanding of the implementation process. A well-chosen design aligns with the research objectives, the nature of the intervention, and the context in which it is implemented, ultimately contributing to the successful translation of evidence into practice.
The IS Research Pathway
Find Examples
Browse our Library of UW community co-authored publications to see examples of how these study designs can be used in implementation science.
Open Access articles will be marked with ✪
Please note that some journals require a subscription to access a linked article.
✪ An Overview of Research and Evaluation Designs for Dissemination and Implementation (Annual Review of Public Health, 2017)
✪ Variation in Research Designs Used to Test the Effectiveness of Dissemination and Implementation Strategies: A Review (Frontiers in Public Health, 2018)
💻 NCI ISCC: Implementation Science Study Designs Overview
💻 What Is a Research Design | Types, Guide & Examples (Scribbr)
These study designs and many more help ensure that implementation science research is both rigorous and relevant, providing valuable insights into how best to integrate evidence-based practices into diverse, real-world settings. Explore the resources on this page to learn more about each study design and how it is used in implementation science.
Experimental Designs
Randomized Controlled Trials (RCTs)
A randomized controlled trial (RCT) is a type of scientific experiment that aims to reduce bias when testing the effectiveness of new treatments or interventions. Participants are randomly assigned to either the treatment group or the control group. This randomization helps ensure that any differences observed between the groups are due to the treatment itself and not other factors.
In implementation science, RCTs are used to evaluate the effectiveness of strategies designed to promote the adoption and integration of evidence-based practices into real-world settings. There are also several criticisms of using RCTs in implementation science, including contextual limitations, ethical concerns, bias and generalizability issues, resource demands, and limited use of theory.
- How effective is a particular strategy in promoting the adoption of an evidence-based practice?
- How does one implementation strategy compare to another?
- What are the mechanisms through which an implementation strategy works?
- How do different contextual factors (e.g., organizational culture, resource availability) affect the success of an implementation strategy?
- What is the cost-effectiveness of an implementation strategy?
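As a rough sketch of the mechanics, the snippet below implements permuted-block randomization, a common way to keep arm sizes balanced during enrollment; the arm names, block size, and seed are illustrative assumptions, not drawn from any specific trial.

```python
import random

def permuted_block_randomization(n_participants, block_size=4,
                                 arms=("implementation strategy", "usual care"),
                                 seed=2024):
    """Assign participants to arms in randomly permuted blocks.

    Permuted blocks keep group sizes balanced throughout enrollment,
    which simple coin-flip randomization does not guarantee.
    """
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)  # fixed seed so the allocation list is reproducible and auditable
    assignments = []
    while len(assignments) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

if __name__ == "__main__":
    for pid, arm in enumerate(permuted_block_randomization(10), start=1):
        print(f"participant {pid:02d} -> {arm}")
```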
Methodology
- What is the role of randomised trials in implementation science? (Trials, 2023)
- Experimental and quasi-experimental designs in implementation research (Psychiatry Research, 2020)
- ✪ Designing and undertaking randomised implementation trials: Guide for researchers (BMJ, 2021)
- 💻 NIH Research Methods Resources: Methods Applicable to Most Clinical Trials and Many Other Studies
- ✪ Supporting translation of research evidence into practice—the use of Normalisation Process Theory to assess and inform implementation within randomised controlled trials: a systematic review (Implementation Science, 2023)
Examples of Use
- ✪ Organize and mobilize for implementation effectiveness to improve overdose education and naloxone distribution from syringe services programs: a randomized controlled trial (Implementation Science, 2024)
- ✪ SciComm Optimizer for Policy Engagement: a randomized controlled trial of the SCOPE model on state legislators’ research use in public discourse (Implementation Science, 2023)
Cluster Randomized Controlled Trials (cRCTs)
Cluster randomized controlled trials (cRCTs) are a type of experimental study design in which groups (or clusters), rather than individuals, are randomized to different intervention arms. These clusters can be hospitals, schools, communities, or other groups where the intervention is delivered at the group level.
cRCTs are particularly useful in implementation science because they reflect real-world settings where interventions are often delivered at the group level, enhancing the external validity of the findings. cRCTs are frequently used to evaluate the effectiveness of different implementation strategies.
- How effective are different strategies (e.g., training programs, policy changes) in improving the adoption of evidence-based practices in various settings?
- How do different implementation strategies compare in terms of their impact on the uptake and sustainability of an intervention?
- Is the implementation strategy cost-effective compared to other strategies?
- How effective are different strategies for integrating new health technologies into routine practice?
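Because outcomes within a cluster tend to be correlated, cRCTs need larger samples than individually randomized trials. Below is a minimal sketch of the standard design-effect calculation, DE = 1 + (m − 1) × ICC; the clinic size and ICC values are hypothetical.

```python
import math

def design_effect(cluster_size: float, icc: float) -> float:
    """Design effect for a cluster randomized trial with equal cluster sizes:
    DE = 1 + (m - 1) * ICC, where m is the average cluster size."""
    return 1.0 + (cluster_size - 1.0) * icc

def clusters_needed(n_individual: int, cluster_size: int, icc: float) -> int:
    """Clusters per arm needed to match the power of an individually
    randomized trial that required n_individual participants per arm."""
    inflated_n = n_individual * design_effect(cluster_size, icc)
    return math.ceil(inflated_n / cluster_size)

if __name__ == "__main__":
    # e.g., 200 per arm if individually randomized; clinics of ~25 patients; ICC = 0.05
    print(design_effect(25, 0.05))          # 2.2
    print(clusters_needed(200, 25, 0.05))   # 18 clinics per arm
```

Even a modest intraclass correlation of 0.05 more than doubles the required sample here, which is why ICC assumptions deserve care at the design stage.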
Methodology
- ✪ How to design efficient cluster randomised trials (BMJ, 2017)
- ✪ Trials and tribulations: cross-learning from the practices of epidemiologists and economists in the evaluation of public health interventions (Health Policy and Planning, 2018)
- 💻 NIH Research Methods Resources: Parallel Group- or Cluster-Randomized Trials
- 💻 NCI: Cluster Randomized Designs in Cancer Care Delivery Research
Examples of Use
- ✪ Impact of a tailored program on the implementation of evidence-based recommendations for multimorbid patients with polypharmacy in primary care practices—results of a cluster-randomized controlled trial (Implementation Science, 2017)
- ✪ Cluster randomised controlled trial of a theory-based multiple behaviour change intervention aimed at healthcare professionals to improve their management of type 2 diabetes in primary care (Implementation Science, 2018)
- ✪ Assessing the sustainability of the Systems Analysis and Improvement Approach to increase HIV testing in family planning clinics in Mombasa, Kenya: results of a cluster randomized trial (Implementation Science, 2022)
- ✪ Implementation fidelity, student outcomes, and cost-effectiveness of train-the-trainer strategies for Masters-level therapists in urban schools: results from a cluster randomized trial (Implementation Science, 2024)
Stepped Wedge Design
A stepped wedge design is a type of cluster randomized trial used to evaluate implementation strategies, where all clusters (e.g., hospitals, schools, or communities) eventually receive the implementation strategy, but the timing of when each cluster starts is randomized and staggered over different time periods.
The sequential rollout is one of the design's key features, as it allows comparison between clusters that have and have not yet received the implementation strategy. Additional key features include randomization (the order in which clusters receive the intervention is randomized to reduce bias) and data collection at multiple time points before and after the strategy is introduced to each cluster.
Stepped wedge designs are particularly useful in implementation science for several reasons: they ensure all participants eventually receive the potentially beneficial strategy, they allow for the evaluation of interventions in real-world settings over time, and they help account for secular trends that might affect the outcomes independently of the implementation strategy.
- How sustainable is an evidence-based practice when implemented across a national health system?
- Does implementation fidelity vary between clinics receiving two different implementation strategies?
- What impact does a community engagement strategy have on vaccine uptake in the region?
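To make the staggered rollout concrete, here is a minimal sketch that randomizes the order in which clusters cross over and prints a cluster-by-period exposure grid; the cluster names, number of steps, and equal-group split are illustrative assumptions.

```python
import random

def stepped_wedge_schedule(clusters, n_steps, seed=7):
    """Randomize the order in which clusters cross over to the strategy and
    return a cluster-by-period exposure grid (0 = control, 1 = exposed).

    Periods: one baseline period, then one crossover step per group of clusters.
    """
    rng = random.Random(seed)
    order = clusters[:]
    rng.shuffle(order)
    groups = [order[i::n_steps] for i in range(n_steps)]  # split into n_steps groups
    n_periods = n_steps + 1                               # baseline + one period per step
    schedule = {}
    for step, group in enumerate(groups, start=1):
        for cluster in group:
            # a cluster in group `step` is exposed from period step+1 onward
            schedule[cluster] = [1 if period >= step + 1 else 0
                                 for period in range(1, n_periods + 1)]
    return schedule

if __name__ == "__main__":
    grid = stepped_wedge_schedule([f"clinic_{c}" for c in "ABCDEF"], n_steps=3)
    for cluster, row in sorted(grid.items()):
        print(cluster, row)
```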
Methodology
- ✪ Stepped‐wedge cluster randomised controlled trials: a generic framework including parallel and multiple‐level designs (Statistics in Medicine, 2014)
- Research Designs for Intervention Research with Small Samples II: Stepped Wedge and Interrupted Time-Series Designs (Prevention Science, 2015)
- ✪ The stepped wedge cluster randomised trial: Rationale, design, analysis, and reporting (The BMJ, 2015)
- ✪ Designing a stepped wedge trial: Three main designs, carry-over effects and randomisation approaches (Trials, 2015)
- ✪ Five questions to consider before conducting a stepped wedge trial (Trials, 2015)
- ✪ Analysis and reporting of stepped wedge randomised controlled trials: synthesis and critical appraisal of published studies, 2010 to 2014 (Trials, 2015)
- Evaluating Public Health Interventions: 2. Stepping Up to Routine Public Health Evaluation With the Stepped Wedge Design (American Journal of Public Health, 2016)
- 💻 NIH Research Methods Resources: Stepped Wedge Group-Randomized Trials
- 💻 NIH ODP: When is the Stepped Wedge Study a Good Study Design Choice?
- 💻 NIH Pragmatic Trials Collaboratory: Stepped Wedge Designs
Examples of Use
- ✪ Early ART initiation among HIV-positive pregnant women in central Mozambique: a stepped wedge randomized controlled trial of an optimized Option B+ approach (Implementation Science, 2015)
- An implementation science protocol of the Women’s Health CoOp in healthcare settings in Cape Town, South Africa: A stepped-wedge design (BMC Women’s Health, 2017)
- ✪ Successful and sustained implementation of a behaviour-change informed strategy for emergency nurses: a multicentre implementation evaluation (Implementation Science, 2024)
- ✪ Major influencing factors on routine implementation of shared decision-making in cancer care: qualitative process evaluation of a stepped-wedge cluster randomized trial (BMC Health Services Research, 2023)
- ✪ Systems analysis and improvement approach to optimize tuberculosis (SAIA-TB) screening, treatment, and prevention in South Africa: a stepped-wedge cluster randomized trial (Implementation Science Communications, 2024)
MOST
The Multiphase Optimization Strategy (MOST) is used in implementation science to develop, optimize, and evaluate multicomponent implementation strategies.
It consists of three phases: preparation, optimization, and evaluation. During the preparation phase, researchers identify and define the components of the strategy. In the optimization phase, they use experimental designs, such as factorial experiments, to test and refine these components to achieve a balance between effectiveness, affordability, scalability, and efficiency. Finally, in the evaluation phase, the optimized strategy is rigorously tested, often through randomized controlled trials, to ensure it meets the desired outcomes. MOST can be used to create implementation strategies that are not only effective but also practical and sustainable in real-world settings.
- How do different combinations of implementation strategy components (e.g., digital reminders, in-person counseling, and training materials) impact patient health outcomes relating to a specific evidence-based practice?
- What is the cost-effectiveness of various components of implementing a chronic disease management program, and how can program implementation be optimized to balance cost and effectiveness?
- How do different resource allocation strategies (e.g., more frequent follow-ups vs. enhanced initial training) affect the overall cost and outcomes of an EBP implementation?
- What is the impact of different policy implementation strategies (e.g., phased rollout, immediate full implementation) on the adoption and effectiveness of a new evidence-based curriculum?
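During the optimization phase, factorial experiments are a common choice because k on/off components can be crossed into 2^k experimental conditions. Purely as an illustration, the sketch below enumerates those conditions for three hypothetical strategy components.

```python
from itertools import product

# Hypothetical candidate components for an implementation strategy package.
components = ["digital_reminders", "in_person_coaching", "training_materials"]

def full_factorial(component_names):
    """Enumerate every cell of a 2^k factorial experiment: each component is
    either on (1) or off (0), so k components yield 2**k conditions."""
    return [dict(zip(component_names, levels))
            for levels in product((0, 1), repeat=len(component_names))]

if __name__ == "__main__":
    for i, cell in enumerate(full_factorial(components), start=1):
        active = [name for name, on in cell.items() if on] or ["none"]
        print(f"condition {i}: {', '.join(active)}")
```

With three components this yields eight conditions, letting researchers estimate each component's main effect and interactions rather than testing the package as a single bundle.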
Methodology
- ✪ Optimization of implementation strategies using the Multiphase Optimization STrategy (MOST) framework: Practical guidance using the factorial design (Translational Behavioral Medicine, 2024)
- ✪ Applying the resource management principle to achieve community engagement and experimental rigor in the multiphase optimization strategy framework (Implementation Research and Practice, 2024)
- ✪ Optimization Methods and Implementation Science: An Opportunity for Behavioral and Biobehavioral Interventions (Implementation Research and Practice, 2021)
- The Multiphase Optimization Strategy (MOST) and the Sequential Multiple Assignment Randomized Trial (SMART) (The American Journal of Preventive Medicine, 2007)
- ✪ Achieving the Goals of Translational Science in Public Health Intervention Research: The Multiphase Optimization Strategy (MOST) (American Journal of Public Health, 2019)
- ✪ Randomization Procedures for Multicomponent Behavioral Intervention Factorial Trials in the Multiphase Optimization Strategy Framework: Challenges and Recommendations (Translational Behavioral Medicine, 2019)
- ✪ Human-centered design methods to achieve preparation phase goals in the multiphase optimization strategy framework (Implementation Research and Practice, 2022)
Examples of Use
- Moving beyond the treatment package approach to developing behavioral interventions: addressing questions that arose during an application of the Multiphase Optimization Strategy (MOST) (Translational Behavioral Medicine, 2014)
- ✪ Tobacco dependence treatment in the emergency department: A randomized trial using the Multiphase Optimization Strategy (Contemporary Clinical Trials, 2018)
- ✪ Applying the multiphase optimization strategy to evaluate the feasibility and effectiveness of an online road safety education intervention for children and parents: a pilot study (BMC Public Health, 2024)
- Increasing pre-exposure prophylaxis (PrEP) in primary care: A study protocol for a multi-level intervention using the multiphase optimization strategy (MOST) framework (Contemporary Clinical Trials, 2024)
Pragmatic Trials
A pragmatic trial is a type of clinical trial designed to evaluate the effectiveness of interventions in real-world, routine practice settings. Unlike explanatory trials, which test whether an intervention works under ideal conditions, pragmatic trials aim to determine how well an intervention performs in everyday practice. In implementation science, pragmatic trials are crucial for understanding how to effectively integrate evidence-based interventions into everyday practice. Key features of pragmatic trials include:
- Real-World Settings – Conducted in typical practice environments rather than controlled research settings.
- Broad Eligibility Criteria – Includes a diverse population to reflect the variety of patients seen in routine practice.
- Flexible Protocols – Allows for variations in how the intervention is implemented, mirroring real-world conditions.
- Relevant Outcomes – Focuses on outcomes that are meaningful to patients, providers, and policymakers.
- Can the intervention be implemented in real-world settings?
- Do patients and providers find the intervention acceptable?
- Does the intervention improve outcomes in routine practice?
- Can the intervention be maintained over time in real-world settings?
Methodology
- Similarities and Differences Between Pragmatic Trials and Hybrid Effectiveness-Implementation Trials (Journal of General Internal Medicine, 2024)
- ✪ Integrating pragmatic and implementation science randomized clinical trial approaches: a PRagmatic Explanatory Continuum Indicator Summary-2 (PRECIS-2) analysis (Trials, 2023)
- A pragmatic–explanatory continuum indicator summary (PRECIS): a tool to help trial designers (Journal of Clinical Epidemiology, 2009)
- ✪ White Paper on Pragmatic Randomized Trials: Considerations for Design and Implementation (Evidera PPD, 2019)
- 💻 NIH Pragmatic Trials Collaboratory Living Textbook
Examples of Use
- ✪ Safer Care for Older Persons in (residential) Environments (SCOPE): a pragmatic controlled trial of a care aide-led quality improvement intervention (Implementation Science, 2023)
- Advance Care Planning Coaching in CKD Clinics: A Pragmatic Randomized Clinical Trial (American Journal of Kidney Disease, 2021)
- ✪ Cluster Randomized Pragmatic Clinical Trial Testing Behavioral Economic Implementation Strategies to Improve Tobacco Treatment for Patients With Cancer Who Smoke (Journal of Clinical Oncology, 2023)
- ✪ The implementation and effectiveness of multi-tasked, paid community health workers on maternal and child health: A cluster-randomized pragmatic trial and qualitative process evaluation in Tanzania (PLoS Global Public Health, 2023)
Hybrid Designs
Hybrid study designs in implementation science are used to simultaneously evaluate the effectiveness of an intervention and the implementation strategies used to deliver it. These designs are particularly valuable because they allow researchers to understand both the clinical outcomes and the processes involved in implementing the intervention. There are three main types of hybrid designs:
- Type 1: This design primarily focuses on testing the effectiveness of an intervention while also gathering information on implementation outcomes. It helps to understand how well the intervention works in real-world settings and provides preliminary data on implementation processes.
- Type 2: This design gives equal emphasis to both effectiveness and implementation outcomes. It aims to assess the impact of the intervention and the implementation strategies simultaneously, providing a comprehensive understanding of both aspects.
- Type 3: This design primarily focuses on testing the implementation strategies while also gathering information on the intervention’s effectiveness. It is particularly useful for understanding the best ways to implement an intervention and how these strategies affect clinical outcomes.
These hybrid designs help accelerate the translation of research findings into practice by providing insights into both the efficacy of interventions and the practicalities of their implementation.
- How do different implementation strategies (e.g., training vs. coaching) affect the adoption and effectiveness of a new diabetes management protocol in primary care settings?
- How does the implementation of a new health policy affect the delivery and outcomes of preventive services in urban vs. rural healthcare settings?
- What is the cost-effectiveness of a centralized vs. decentralized approach to implementing a new vaccination program in various healthcare settings?
Methodology
- ✪ Hybrid effectiveness-implementation trial designs—critical assessments, innovative applications, and proposed advancements (Frontiers in Health Services, 2024)
- ✪ From innovative applications of the effectiveness-implementation hybrid trial design to the dissemination, implementation, effectiveness, sustainment, economics, and level-of-scaling hybrid trial design (Frontiers in Health Services, 2022)
- Effectiveness-implementation Hybrid Designs: Combining Elements of Clinical Effectiveness and Implementation Research to Enhance Public Health Impact (Medical Care, 2012)
- ✪ Effectiveness-implementation hybrid designs: implications for quality improvement science (Implementation Science, 2013)
- An introduction to effectiveness-implementation hybrid designs (Psychiatry Research, 2019)
- ✪ Expanding Hybrid Studies for Implementation Research: Intervention, Implementation Strategy, and Context (Frontiers in Public Health, 2019)
- ✪ Reflections on 10 years of effectiveness-implementation hybrid studies (Frontiers in Health Services, 2022)
- ✪ Design and management considerations for control groups in hybrid effectiveness-implementation trials: Narrative review & case studies (Frontiers in Health Services, 2023)
- ✪ Applying hybrid effectiveness-implementation studies in equity-centered policy implementation science (Frontiers in Health Services, 2023)
- 💻 “Hybrid Designs” Combining Elements of Clinical Effectiveness and Implementation Research
- 💻 Hybrid Effectiveness Implementation Approaches
- 💻 How to use implementation hybrid designs
Examples of Use
- ✪ Improving measurement-based care implementation in youth mental health through organizational leadership and climate: a mechanistic analysis within a randomized trial (Implementation Science, 2024)
- ✪ A Hybrid III stepped wedge cluster randomized trial testing an implementation strategy to facilitate the use of an evidence-based practice in VA Homeless Primary Care Treatment Programs (Implementation Science, 2017)
- ✪ Implementation findings from a hybrid III implementation-effectiveness trial of the Diabetes Prevention Program (DPP) in the Veterans Health Administration (VHA) (Implementation Science, 2017)
- ✪ Using a continuum of hybrid effectiveness-implementation studies to put research-tested colorectal screening interventions into practice (Implementation Science, 2019)
- ✪ Results of a multi-site pragmatic hybrid type 3 cluster randomized trial comparing level of facilitation while implementing an intervention in community-dwelling disabled and older adults in a Medicaid waiver (Implementation Science, 2022)
SMART
A Sequential Multiple Assignment Randomized Trial (SMART) is an advanced experimental design used in implementation science to develop and evaluate adaptive implementation strategies. In a SMART, participants undergo multiple stages of randomization, allowing researchers to test different sequences of implementation strategies.
By adapting the strategy at various points, SMARTs provide insights into the best ways to tailor strategies over time, ensuring that they are responsive to the changing needs of the organization or community.
- What is the most effective sequence of implementation strategies to improve the adoption of a new healthcare intervention?
- How can implementation strategies be tailored based on initial response to improve outcomes in different subpopulations?
- What is the comparative effectiveness of adaptive versus static implementation strategies in diverse healthcare settings?
- What are the mechanisms through which adaptive implementation strategies influence the adoption and sustainability of evidence-based practices?
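The sketch below simulates the core SMART logic under made-up strategy names and response probabilities: an initial randomization, a response assessment, and re-randomization of non-responders only.

```python
import random

rng = random.Random(42)

def run_smart(n_sites=8):
    """Toy two-stage SMART: sites are first randomized between two
    implementation strategies; after a response assessment, only
    non-responding sites are re-randomized to an augmented strategy."""
    sites = []
    for site_id in range(1, n_sites + 1):
        stage1 = rng.choice(["facilitation", "training_only"])
        # placeholder response model: facilitation responds more often (invented rates)
        responded = rng.random() < (0.7 if stage1 == "facilitation" else 0.4)
        if responded:
            stage2 = "continue " + stage1
        else:
            stage2 = rng.choice(["add coaching", "add audit-and-feedback"])
        sites.append((site_id, stage1, responded, stage2))
    return sites

if __name__ == "__main__":
    for site_id, stage1, responded, stage2 in run_smart():
        status = "responder" if responded else "non-responder"
        print(f"site {site_id}: stage 1 = {stage1} ({status}); stage 2 = {stage2}")
```

Comparing the embedded sequences (e.g., training only, then coaching for non-responders) is what lets a SMART identify the best adaptive strategy rather than the best single strategy.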
Methodology
- The Multiphase Optimization Strategy (MOST) and the Sequential Multiple Assignment Randomized Trial (SMART) (The American Journal of Preventive Medicine, 2007)
- Sequential Multiple-Assignment Randomized Trials: Developing and Evaluating Adaptive Interventions in Special Education (Remedial and Special Education, 2018)
- SMART Thinking: a Review of Recent Developments in Sequential Multiple Assignment Randomized Trials (Current Epidemiology Reports, 2016)
- ✪ Adaptive Designs in Implementation Science and Practice: Their Promise and the Need for Greater Understanding and Improved Communication (Annual Review of Public Health, 2023)
- ✪ Design of experiments with sequential randomizations on multiple timescales: the hybrid experimental design (Behavior Research Methods, 2024)
- 💻 Sequential Multiple Assignment Randomized Trials (SMART) & Adaptive Designs for Implementation Studies
- 💻 University of Michigan d3c: Introduction to SMARTs
Examples of Use
- ✪ High-Yield HIV Testing, Facilitated Linkage to Care, and Prevention for Female Youth in Kenya (GIRLS Study): Implementation Science Protocol for a Priority Population (JMIR Research Protocols, 2017)
- Getting “SMART” about implementing multi-tiered systems of support to promote school mental health (Journal of School Psychology, 2018)
- Sequential multiple assignment randomization trial designs for nursing research (Research in Nursing & Health, 2019)
- ✪ Scaling and sustaining COVID-19 vaccination through meaningful community engagement and care coordination for underserved communities: hybrid type 3 effectiveness-implementation sequential multiple assignment randomized trial (Implementation Science, 2023)
- ✪ Primary aim results of a clustered SMART for developing a school-level, adaptive implementation strategy to support CBT delivery at high schools in Michigan (Implementation Science, 2022)
Understanding Experimental Designs, Dr. J. Michael Oakes, PhD (University of Minnesota)
Quasi-Experimental Designs
Overview
Quasi-experimental designs in implementation science are research methods used to evaluate the causal relationships between variables when random assignment is not feasible. Unlike true experimental designs, quasi-experimental designs do not rely on random assignment to create control and experimental groups. Instead, they use pre-existing groups or non-random criteria to assign participants. This approach allows researchers to study the effects of interventions in real-world settings where randomization may be impractical or unethical.
In implementation science, quasi-experimental designs are used to assess the impact of interventions or treatments on outcomes. They help researchers understand how interventions work in practice, identify factors that influence their effectiveness, and provide evidence for scaling up successful practices. These methods enable researchers to draw conclusions about the effectiveness of interventions while accounting for potential confounding variables.
- What is the impact of a public health social media campaign on increasing vaccination rates among different demographic groups?
- Is there any difference in impact of a new health policy on patient outcomes in rural and urban areas?
- How do different methods of delivering a health intervention (e.g., in-person vs. telehealth) compare in terms of implementation success and patient outcomes?
- How do organizational factors (e.g., leadership support, staff engagement) influence the success of implementation efforts in different healthcare settings?
Methodology
- ✪ Experimental and quasi-experimental designs in implementation research (Psychiatry Research, 2020)
- ✪ Selecting and Improving Quasi-Experimental Designs in Effectiveness and Implementation Research (Annual Review of Public Health, 2018)
- 💻 Scribbr: Quasi-Experimental Design | Definition, Types & Examples
- ✪ UNICEF Methodological Briefs: Quasi-Experimental Design and Methods
Examples of Use
- ✪ Strategies for primary HPV test-based cervical cancer screening programme in resource-limited settings in India: Results from a quasi-experimental pragmatic implementation trial (PLoS ONE, 2024)
- ✪ Evaluation of the Little Rock Green Schoolyard initiative: a quasi-experimental study protocol (BMC Public Health, 2023)
- Can a Home-Based Collaborative Care Model Reduce Health Services Utilization for Older Medicaid Beneficiaries Living with Depression and Co-occurring Chronic Conditions? A Quasi-experimental Study (Administration and Policy in Mental Health and Mental Health Services Research, 2023)
Interrupted Time Series
Interrupted time series (ITS) designs in implementation science are used to evaluate the impact of an intervention by comparing data collected at multiple time points before and after the intervention is implemented. This design helps to determine whether the intervention has had an effect that is greater than any underlying trend in the data.
In implementation science, ITS designs are used to assess the effectiveness of interventions such as policy changes, quality improvement programs, or new treatments. By analyzing the level and trend of outcomes before and after the intervention, researchers can identify immediate and sustained effects, as well as any changes in the trajectory of the outcomes. This method is particularly valuable for evaluating interventions in real-world settings where randomization is not feasible.
- How does the implementation of a new electronic health record system affect clinical workflow and patient care over time?
- What are the temporal effects of a public health campaign on vaccination rates in a community?
- How did the introduction of a hand hygiene protocol affect the incidence of hospital-acquired infections over time?
- What was the impact of a public health campaign on smoking cessation rates before and after its implementation?
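A minimal segmented-regression sketch is shown below, using simulated monthly data and the common level-change/slope-change parameterization; in practice analysts also check for autocorrelation and seasonality, which this toy example ignores.

```python
import numpy as np
import statsmodels.api as sm

# Simulated monthly data: 24 months before and 24 after a protocol launch.
rng = np.random.default_rng(0)
n_pre, n_post = 24, 24
time = np.arange(n_pre + n_post)                       # 0, 1, ..., 47
post = (time >= n_pre).astype(int)                     # 1 after the interruption
time_since = np.where(post == 1, time - n_pre + 1, 0)  # months since launch

# Outcome with a pre-existing trend, a level drop at launch, and a slope change.
y = 50 + 0.2 * time - 6 * post - 0.3 * time_since + rng.normal(0, 1.5, time.size)

# Standard segmented-regression parameterization:
#   y = b0 + b1*time + b2*post + b3*time_since + error
# b2 estimates the immediate level change; b3 the change in trend.
X = sm.add_constant(np.column_stack([time, post, time_since]))
model = sm.OLS(y, X).fit()
print(model.params)  # approximately [50, 0.2, -6, -0.3]
```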
Methodology
- ✪ Reflection on modern methods: a common error in the segmented regression parameterization of interrupted time-series analyses (International Journal of Epidemiology, 2020)
- A methodological framework for model selection in interrupted time series studies (Journal of Clinical Epidemiology, 2018)
- ✪ Use of Interrupted Time Series Analysis in Evaluating Health Care Quality Improvements (Academic Pediatrics, 2013)
- ✪ Methods, Applications and Challenges in the Analysis of Interrupted Time Series Data: A Scoping Review (Journal of Multidisciplinary Healthcare, 2020)
- A robust interrupted time series model for analyzing complex health care intervention data (Statistics in Medicine, 2017)
- A matching framework to improve causal inference in interrupted time‐series analysis (Journal of Evaluation in Clinical Practice, 2017)
- Heterogeneity in application, design, and analysis characteristics was found for controlled before-after and interrupted time series studies included in Cochrane reviews (Journal of Clinical Epidemiology, 2017)
- ✪ The use of controls in interrupted time series studies of public health interventions (International Journal of Epidemiology, 2018)
Examples of Use
- ✪ Bridging the Gap: using an interrupted time series design to evaluate systems reform addressing refugee maternal and child health inequalities (Implementation Science, 2015)
- ✪ Effect of a tailored sepsis treatment protocol on patient outcomes in the Tikur Anbessa Specialized Hospital, Ethiopia: results of an interrupted time series analysis (Implementation Science, 2022)
- ✪ Effect of a population-level performance dashboard intervention on maternal-newborn outcomes: an interrupted time series study (BMJ Quality & Safety, 2017)
- ✪ The Effect of an Electronic Medical Record–Based Clinical Decision Support System on Adherence to Clinical Protocols in Inflammatory Bowel Disease Care: Interrupted Time Series Study (JMIR Medical Informatics, 2024)
- Impact of differentiated service delivery models on 12-month retention in HIV treatment in Mozambique: an interrupted time-series analysis (The Lancet HIV, 2023)
- ✪ Implementation of new technologies designed to improve cervical cancer screening and completion of care in low-resource settings: a case study from the Proyecto Precancer (Implementation Science Communications, 2024)
Difference-in-Differences
Difference-in-Differences (DiD) is a quasi-experimental research design used to estimate causal relationships. It compares the changes in outcomes over time between a group that is exposed to a treatment (the treatment group) and a group that is not (the control group). This method helps to control for confounding variables that could affect the outcome, by assuming that any differences between the groups before the treatment would remain constant over time if the treatment had not been applied.
In implementation science, DiD is used to evaluate the impact of interventions, policies, or programs by comparing the changes in outcomes between groups that receive the intervention and those that do not. This method is particularly useful when randomization by the researcher is not feasible.
- How does the implementation of telehealth services affect patient health outcomes in rural versus urban areas?
- What is the impact of a new training program on the quality of care provided by healthcare professionals?
- How do changes in vaccination policy (e.g., mandatory vaccination) affect vaccination rates among different demographic groups?
- What is the effect of implementing electronic health records on patient safety incidents in hospitals?
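A minimal sketch of the canonical two-group, two-period DiD regression follows, using simulated clinic data; the coefficient on the treated-by-post interaction is the DiD estimate, and all numbers are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated clinic-level data: treated vs. comparison clinics, before/after a policy.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # 1 = clinic adopted the policy
    "post": rng.integers(0, 2, n),     # 1 = observation after the policy date
})
# True effect of the policy is +5 points, on top of group and time differences.
df["outcome"] = (
    40 + 3 * df["treated"] + 2 * df["post"]
    + 5 * df["treated"] * df["post"] + rng.normal(0, 2, n)
)

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # approximately 5
```

The validity of this estimate rests on the parallel-trends assumption described above, which the regression itself cannot verify.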
Methodology
- 💻 Introduction to Difference in Differences (Tilburg Science Hub)
- 💻 Difference-in-Difference Estimation (Columbia University, Mailman School of Public Health)
- 💻 Knowledge Bank: Difference-in-Difference (Evaluation Observatory)
- ✪ A Tutorial on Applying the Difference-in-Differences Method to Health Data (Current Epidemiology Reports, 2023)
- ✪ Impact evaluation using Difference-in-Differences (RAUSP Management Journal, 2019)
Examples of Use
- ✪ An implementation strategy package (video education, HIV self-testing, and co-location) improves PrEP implementation for pregnant women in antenatal care clinics in western Kenya (Frontiers in Reproductive Health, 2023)
- ✪ Rapid ethnography and participatory techniques increase onchocerciasis mass drug administration treatment coverage in Benin: a difference-in-differences analysis (Implementation Science Communications, 2023)
- ✪ The implementation and effectiveness of multi-tasked, paid community health workers on maternal and child health: A cluster-randomized pragmatic trial and qualitative process evaluation in Tanzania (PLoS Global Public Health, 2023)
- ✪ An EMR-Based Alert with Brief Provider-Led ART Adherence Counseling: Promising Results of the InfoPlus Adherence Pilot Study Among Haitian Adults with HIV Initiating ART (AIDS & Behavior, 2020)
Regression Discontinuity Designs
Regression discontinuity designs (RDD) in implementation science are used to estimate the causal effects of interventions and implementation strategies. In RDD, assignment to treatment is determined by whether an observed covariate falls above or below a fixed threshold. This creates a clear cutoff point, allowing researchers to compare outcomes just above and below the threshold, which approximates random assignment.
In implementation science, RDD is used to evaluate the impact of implementation strategies when randomization is not feasible. By focusing on individuals near the cutoff, researchers can infer the causal effects of the strategy with reduced bias. This method is particularly useful in settings where strategies are assigned based on specific criteria, such as implementation fidelity scores, hours of training completed, or other continuous variables. RDD helps to provide robust evidence on the effectiveness of implementation strategies by leveraging naturally occurring thresholds in observational data.
- How does the assignment to a supplemental education program based on test scores influence academic performance and graduation rates?
- What is the effect of eligibility for a health insurance program (determined by income threshold) on healthcare utilization and health outcomes?
- How does crossing a threshold for mandatory training hours affect healthcare providers’ adherence to evidence-based guidelines?
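As an illustration, the sketch below estimates a sharp-RDD effect by fitting a local linear regression on each side of a hypothetical readiness-score cutoff; bandwidth selection and robustness checks are simplified away, and all values are simulated.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: clinics qualify for facilitation support when a readiness
# score falls below a cutoff of 50; the support improves the outcome by 4.
rng = np.random.default_rng(3)
score = rng.uniform(0, 100, 2000)
treated = (score < 50).astype(int)
outcome = 20 + 0.1 * score + 4 * treated + rng.normal(0, 2, score.size)

def rdd_estimate(score, treated, outcome, cutoff=50.0, bandwidth=10.0):
    """Sharp RDD via local linear regression: fit separate slopes on each side
    of the cutoff within a bandwidth; the treatment coefficient is the
    estimated jump in the outcome at the threshold."""
    keep = np.abs(score - cutoff) <= bandwidth
    centered = score[keep] - cutoff
    X = np.column_stack([
        treated[keep],             # jump at the cutoff (the effect of interest)
        centered,                  # slope below/above the cutoff...
        centered * treated[keep],  # ...allowed to differ on the treated side
    ])
    fit = sm.OLS(outcome[keep], sm.add_constant(X)).fit()
    return fit.params[1]  # coefficient on the treatment indicator

print(rdd_estimate(score, treated, outcome))  # approximately 4
```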
Methodology
- Regression Discontinuity Design (JAMA Guide to Statistics and Methods, 2020)
- Regression discontinuity designs are underutilized in medicine, epidemiology, and public health: a review of current and best practice (Journal of Clinical Epidemiology, 2015)
- Advancing Quality Improvement with Regression Discontinuity Designs (Annals of the American Thoracic Society, 2018)
- Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs (Journal of Business & Economic Statistics, 2017)
- Regression discontinuity designs in healthcare research (BMJ, 2016)
- ✪ Regression Discontinuity Designs in Epidemiology: Causal Inference Without Randomized Trials (Epidemiology, 2014)
- Alternatives to Randomized Control Trial Designs for Community-Based Prevention Evaluation (Prevention Science, 2017)
- ✪ RD or Not RD: Using Experimental Studies to Assess the Performance of the Regression Discontinuity Approach (Evaluation Review, 2018)
Examples of Use
- Effect of human papillomavirus (HPV) vaccination on clinical indicators of sexual behaviour among adolescent girls: the Ontario Grade 8 HPV Vaccine Cohort Study (Canadian Medical Association Journal, 2015)
- School turnaround in North Carolina: A regression discontinuity analysis (Economics of Education Review, 2017)
- ✪ Estimating the real-world effects of expanding antiretroviral treatment eligibility: Evidence from a regression discontinuity analysis in Zambia (PLoS Medicine, 2018)
- ✪ A Mobile Health Strategy to Support Adherence to Antiretroviral Preexposure Prophylaxis (AIDS Patient Care and STDs, 2018)
Observational Designs
Time & Motion Studies
Time and motion studies, also known as work measurement or motion studies, are systematic methods used to observe, document, and analyze work processes to improve efficiency. These studies involve breaking down tasks into their basic components, timing each element, and analyzing the movements involved. The primary goal is to identify and eliminate unnecessary steps, reduce wastage of time and resources, and enhance overall productivity.
In implementation science, time and motion studies are used to understand workflows, identify inefficiencies, and improve processes, particularly in healthcare settings. By meticulously observing and recording the time taken to complete specific tasks and the movements involved, researchers can gain insights into how work is performed and where improvements can be made.
- How much time is spent on different tasks within the implementation workflow, and where are the inefficiencies?
- What are the most common interruptions or delays in the implementation process, and how do they impact overall implementation success?
- How do different work environments or settings affect the time required to complete specific tasks?
- What are the effects of implementing new technologies or interventions on the efficiency and effectiveness of existing workflows?
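On the analysis side, time and motion data often reduce to a log of observed task episodes and durations. A minimal sketch of summarizing such a log follows; the task names and times are invented.

```python
import pandas as pd

# Hypothetical interval-based observation log: one row per observed task episode.
log = pd.DataFrame({
    "task": ["triage", "counseling", "documentation", "triage",
             "documentation", "counseling", "documentation"],
    "minutes": [4.0, 12.5, 7.0, 5.5, 9.0, 10.0, 6.5],
})

# Summarize where observed time goes: episode count, total, mean, and share per task.
summary = log.groupby("task")["minutes"].agg(["count", "sum", "mean"])
summary["share_of_time"] = summary["sum"] / summary["sum"].sum()
print(summary.sort_values("share_of_time", ascending=False).round(2))
```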
Methodology
- 📖 The ECPH Encyclopedia of Psychology Entry: Motion and Time Study
- The Purpose of Time-Motion Studies (TMSs) in Healthcare: A Literature Review (Cureus, 2022)
- 💻 Study.com Video: Time & Motion Study | Definition, History & Methodology
- ✪ Time and motion study design: Handling variability and confounding of results (Value in Health, 2013)
Examples of Use
- ✪ Optimizing naloxone distribution to prevent opioid overdose fatalities: results from piloting the Systems Analysis and Improvement Approach within syringe service programs (BMC Health Services Research, 2023)
- ✪ A one-stop shop model for improved efficiency of pre-exposure prophylaxis delivery in public clinics in western Kenya: a mixed methods implementation science study (Journal of the International AIDS Society, 2021)
- ✪ Time–motion analysis of external facilitation for implementing the Collaborative Chronic Care Model in general mental health clinics: Use of an interval-based data collection approach (Implementation Research and Practice, 2022)
- ✪ Wait and consult times for primary healthcare services in central Mozambique: a time-motion study (Global Health Action, 2016)
Ecological Studies
Ecological studies in implementation science examine the relationships between environmental or contextual factors and health outcomes at a group or population level. These studies do not focus on individual-level data but rather on aggregated data to understand how broader social, economic, and environmental contexts influence the implementation and effectiveness of interventions.
In implementation science, ecological studies are used to explore how different settings, such as communities, healthcare systems, or regions, impact the adoption, implementation, and sustainability of interventions. By analyzing data from various sources, researchers can identify patterns and correlations that may not be evident at the individual level. This approach helps in understanding the broader determinants of health and the contextual factors that can facilitate or hinder the successful implementation of evidence-based practices.
- How do regional differences in healthcare infrastructure impact the adoption and effectiveness of new medical technologies?
- What is the relationship between socioeconomic factors and the success of public health interventions across different communities?
- How do environmental policies at the local or national level influence the implementation of sustainable practices in healthcare settings?
- What are the effects of cultural and social norms on the uptake and sustainability of health promotion programs in various populations?
Methodology
- ✪ Study Design VI – Ecological Studies (Evidence-Based Dentistry, 2006)
- Ecologic Studies Revisited (Annual Review of Public Health, 2008)
- ✪ Ecologic Studies and Natural Experiments (Indian Journal of Dermatology, 2017)
- ✪ Eight characteristics of rigorous multilevel implementation research: a step-by-step guide (Implementation Science, 2023)
- 💻 Observational Studies: Naturalistic Observation (Scribbr)
Examples of Use
- ✪ Facilitators and barriers to safe emergency department transitions for community dwelling older people with dementia and their caregivers: A social ecological study (International Journal of Nursing Studies, 2013)
- ✪ At-home testing to mitigate community transmission of SARS-CoV-2: protocol for a public health intervention with a nested prospective cohort study (BMC Public Health, 2021)
- Future Directions for Dissemination and Implementation Science: Aligning Ecological Theory and Public Health to Close the Research to Practice Gap (Journal of Clinical Child & Adolescent Psychology, 2016)
- ✪ Ecosystem change and human health: implementation economics and policy (Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 2017)
Qualitative Designs
Overview
Qualitative methods are approaches used to gather and analyze non-numerical data to understand concepts, opinions, or experiences. These methods focus on exploring complex phenomena and the meanings attributed to them by individuals or groups.
Common qualitative methods include interviews, focus groups, observations, and the analysis of texts or artifacts. Qualitative research aims to capture the richness and depth of human experiences, beliefs, attitudes, and behaviors, often through detailed, descriptive data collection. This type of research is exploratory and contextual, emphasizing the importance of understanding phenomena within their social, cultural, and historical contexts.
Qualitative study design is used in implementation science to gain a deep understanding of the processes, contexts, and experiences involved in implementing evidence-based practices. Researchers use qualitative methods to explore the specific settings and contexts where implementation occurs, such as organizational culture, stakeholder perspectives, and environmental factors. This helps identify barriers and facilitators to implementation, which is crucial for tailoring interventions to fit the local context and developing strategies to overcome challenges.
Additionally, qualitative methods are employed to conduct process evaluations, assessing how an intervention is being implemented. This includes examining fidelity, adaptations made during implementation, and the day-to-day dynamics of the process. By gathering detailed feedback from participants and stakeholders, qualitative research informs the development and refinement of interventions, ensuring they are relevant, acceptable, and feasible for the target population.
Moreover, qualitative research generates new hypotheses and theories about how and why certain implementation strategies work or do not work. This leads to the development of new frameworks and models that guide future research and practice. Often, qualitative methods are combined with quantitative approaches in mixed-methods studies to provide a comprehensive understanding of implementation processes and outcomes.
- What factors influence the adoption of evidence-based practices in a specific setting?
- How do healthcare providers perceive the barriers and facilitators to implementing a new intervention?
- What adaptations are made to an intervention during its implementation, and why?
- How do patients and other stakeholders experience the implementation of a new practice?
Methodology
- ✪ Qualitative methods in implementation research: An introduction (Psychiatry Research, 2019)
- ✪ Pragmatic approaches to analyzing qualitative data for implementation science: an introduction (Implementation Science Communications, 2021)
- Sample sizes for saturation in qualitative research: A systematic review of empirical tests (Social Science & Medicine, 2022)
- 💻 Qualitative and Mixed Methods in Dissemination & Implementation Research (TIDIRC Module with Dr. Alison Hamilton)
- ✪ Data collection in qualitative research (Evidence-Based Nursing, 2018)
Examples of Use
- ✪ Qualitative evaluation of the Systems Analysis and Improvement Approach as a strategy to increase HIV testing in family planning clinics using the Consolidated Framework for Implementation Research and the Implementation Outcomes Framework (Implementation Science Communications, 2022)
- ✪ A qualitative study identifying implementation strategies using the i-PARIHS framework to increase access to pre-exposure prophylaxis at federally qualified health centers in Mississippi (Implementation Science Communications, 2024)
- ✪ Barriers and enablers in the implementation of a quality improvement program for acute coronary syndromes in hospitals: a qualitative analysis using the consolidated framework for implementation research (Implementation Science, 2022)
- ✪ Health plan adaptations to a mailed outreach program for colorectal cancer screening among Medicaid and Medicare enrollees: the BeneFIT study (Implementation Science, 2020)
Qualitative and Mixed Methods in Dissemination & Implementation Research, Dr. Alison B. Hamilton, PhD (for TIDIRC)
Interviews
Conducting interviews involves engaging participants in a structured or semi-structured conversation to gather in-depth information about their perspectives, experiences, and insights on a specific topic. This method allows researchers to explore complex issues in detail, capturing nuances that might be missed with quantitative methods.
In implementation science, interviews are used to understand the contextual factors that influence the adoption, implementation, and sustainability of evidence-based practices. By interviewing stakeholders such as healthcare providers, patients, and policymakers, researchers can identify barriers and facilitators to implementation, gather feedback on intervention strategies, and tailor approaches to better fit the needs of diverse populations. This method is particularly valuable in capturing the lived experiences of individuals affected by the interventions, ensuring that the implementation process is informed by real-world insights and is more likely to be effective and equitable.
- What are the barriers and facilitators to the adoption of evidence-based practices in a specific healthcare setting?
- How do healthcare providers perceive the effectiveness of a newly implemented intervention?
- What contextual factors influence the sustainability of an intervention in community settings?
- How do patients and community members experience and respond to a new health intervention?
Methodology
- 💻 Types of Interviews in Research | Guide & Examples (Scribbr)
- ✪ Introduction to Qualitative Research Methods: Chapter 11 – Interviewing
- ✪ A methodological guide to using and reporting on interviews in conservation science research (Methods in Ecology and Evolution, 2018)
- ✪ Introduction: Making the case for qualitative interviews (International Journal of Social Research Methodology, 2020)
Examples of Use
- ✪ Integration of a Digital Health Intervention Into Immunization Clinic Workflows in Kenya: Qualitative, Realist Evaluation of Technology Usability (JMIR Formative Research, 2023)
- ✪ Acceptability and Feasibility of Pharmacy-Based Delivery of Pre-Exposure Prophylaxis in Kenya: A Qualitative Study of Client and Provider Perspectives (AIDS and Behavior, 2021)
- ✪ Applying the Consolidated Framework for Implementation Research to Identify Implementation Determinants for the Integrated District Evidence-to-Action Program, Mozambique (Global Health: Science and Practice, 2022)
- ✪ Development of a Field Guide for Assessing Readiness to Implement Evidence-Based Cancer Screening Interventions in Primary Care Clinics (Preventing Chronic Disease: Public Health Research, Practice, and Policy, 2022)
Focus Groups
Focus groups are a qualitative research method that involves guided discussions with a small group of participants, typically ranging from 6 to 10 people. These discussions are led by a skilled moderator who facilitates conversation around specific topics or questions. The goal is to gather diverse perspectives, opinions, and experiences from participants, providing rich, detailed data that might not emerge through individual interviews or surveys.
In implementation science, focus groups are used to explore the contextual factors that influence the adoption and integration of evidence-based practices. By engaging stakeholders such as healthcare providers, patients, and community members, researchers can identify barriers and facilitators to implementation, understand the needs and preferences of different groups, and refine intervention strategies to enhance their relevance and effectiveness. This method is particularly valuable for capturing the collective insights and dynamics of group interactions, which can inform more equitable and context-sensitive implementation efforts.
- What are the perceived barriers and facilitators to implementing a new health intervention among healthcare providers?
- How do patients and community members perceive the acceptability and feasibility of a proposed health intervention?
- What are the contextual factors that influence the success or failure of an intervention in different settings?
- How can implementation strategies be adapted to better meet the needs of diverse populations?
Methodology
- 💻 What is a Focus Group | Step-by-Step Guide & Examples (Scribbr)
- ✪ Towards an anticipatory public engagement methodology: deliberative experiments in the assembly of possible worlds using focus groups (Qualitative Research, 2020)
- ✪ Focus group methodology: some ethical challenges (Quality & Quantity, 2019)
- ✪ Methodological Aspects of Focus Groups in Health Research: Results of Qualitative Interviews With Focus Group Moderators (Global Qualitative Nursing Research, 2016)
Mixed Methods Designs
Overview
Mixed methods research design is an approach that combines both qualitative and quantitative research methods within a single study. This integration allows researchers to draw on the strengths of both types of data to gain a more comprehensive understanding of the research problem. By using mixed methods, researchers can explore complex phenomena from multiple perspectives, providing richer and more nuanced insights than either method alone. This approach is particularly useful in multidisciplinary settings and for addressing complex situational or societal issues, as it allows for the triangulation of data, enhancing the validity and reliability of the findings.
In implementation science, mixed-methods designs are essential for capturing the complexity of implementing evidence-based practices. Commonly used mixed-methods designs include:
- Convergent Study Design: This design involves collecting both qualitative and quantitative data simultaneously, analyzing them separately, and then merging the results to draw comprehensive conclusions.
- Explanatory Sequential Design: In this approach, quantitative data is collected and analyzed first, followed by qualitative data to help explain or elaborate on the quantitative findings.
- Exploratory Sequential Design: This design starts with qualitative data collection and analysis to explore a phenomenon, followed by quantitative data collection to test or generalize the initial qualitative findings.
- Configurational Analysis: This family of designs is used to understand how different conditions or factors combine to produce a particular outcome. It focuses on identifying patterns and configurations of causally relevant conditions rather than examining the net effects of individual variables.
- Concept Mapping: This design engages stakeholders in a structured process to visually represent the relationships among a set of related concepts.
These designs help researchers gain a more comprehensive understanding of implementation processes and outcomes by leveraging the strengths of both qualitative and quantitative methods.
- What percentage of clinics continue to use an EBP one year after initial implementation, and what factors influence the long-term sustainability of the EBP in these clinics?
- How does participation in a training program affect teachers’ use of evidence-based instructional strategies, and what are teachers’ experiences and challenges in applying these strategies in the classroom?
- To what extent are clinics adhering to the prescribed implementation protocols of an EBP, and what are the contextual factors that affect fidelity to these protocols?
- How do different contextual factors (e.g., urban vs. rural settings) affect the success of EBP implementation, and what are the specific contextual challenges and supports identified by implementers in different settings?
Methodology
- 💻 Qualitative and Mixed Methods in Dissemination & Implementation Research (TIDIRC Module with Dr. Alison Hamilton)
- ✪ Mixed Method Designs in Implementation Research (Administration and Policy in Mental Health and Mental Health Services Research, 2011)
- Optimizing Mixed Methods for Implementation Research in Large Systems (Administration and Policy in Mental Health and Mental Health Services Research, 2015)
- Qualitative and Mixed Methods Research in Dissemination and Implementation Science (Journal of Clinical Child & Adolescent Psychology, 2014)
- ✪ Combining the Power of Stories and the Power of Numbers: Mixed Methods Research and Mixed Studies Reviews (Annual Review of Public Health, 2013)
- Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research (Administration and Policy in Mental Health and Mental Health Services Research, 2015)
- ✪ Innovations in Mixed Methods Evaluations (Annual Review of Public Health, 2019)
- 💻 Mixed Methods in Implementation Science (TIDIRC lecture, Dr. Lawrence A. Palinkas, PhD)
Examples of Use
- ✪ Comparing organization-focused and state-focused financing strategies on provider-level reach of a youth substance use treatment model: a mixed-method study (Implementation Science, 2023)
- ✪ “They are gaining experience; we are gaining extra hands”: a mixed methods study to assess healthcare worker perceptions of a novel strategy to strengthen human resources for HIV in South Africa (BMC Health Services Research, 2023)
- ✪ Soil-transmitted helminth surveillance in Benin: A mixed-methods analysis of factors influencing non-participation in longitudinal surveillance activities (PLoS Neglected Tropical Diseases, 2023)
- ✪ Centering School Leaders’ Expertise: Usability Evaluation of a Leadership-Focused Implementation Strategy to Support Tier 1 Programs in Schools (School Mental Health, 2024)
Coincidence Analysis
Coincidence analysis (CNA) is a type of configurational analysis commonly used in implementation science. It is a method of causal inference that groups causes into bundles of conditions that are jointly effective and places those bundles on alternative causal routes to their effects.
In implementation science, coincidence analysis is used to understand how different implementation conditions work together to achieve desired outcomes. For example, it can help identify which combinations of intervention components, strategies, and contextual factors are most effective in achieving high implementation fidelity or improved health outcomes.
This method can uncover empirical findings that conventional approaches focused on the net effects of individual variables might miss, providing deeper insight into the mechanisms driving successful implementation.
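Published applications typically use the cna package in R; the Python sketch below is only a toy illustration of the configurational logic described above. It enumerates conjunctions of binary conditions in a made-up dataset, keeps the minimal conjunctions that are sufficient for the outcome, and checks whether those bundles jointly cover all positive cases. The condition names, cases, and two-condition cap are all illustrative assumptions, and the sketch omits the consistency, coverage, and model-minimization machinery of real coincidence analysis.

```python
from itertools import combinations

# Hypothetical crisp-set data: each case records binary implementation conditions
# and a binary outcome (1 = successful adoption). Entirely made up for illustration.
cases = [
    {"facilitation": 1, "leadership": 1, "training": 0, "adopted": 1},
    {"facilitation": 1, "leadership": 0, "training": 1, "adopted": 1},
    {"facilitation": 0, "leadership": 1, "training": 1, "adopted": 1},
    {"facilitation": 0, "leadership": 0, "training": 1, "adopted": 0},
    {"facilitation": 0, "leadership": 1, "training": 0, "adopted": 0},
    {"facilitation": 1, "leadership": 1, "training": 1, "adopted": 1},
]
conditions = ["facilitation", "leadership", "training"]

def holds(case, conj):
    """True when every condition in the conjunction is present in the case."""
    return all(case[c] == 1 for c in conj)

# 1. Find conjunctions ("bundles" of jointly effective causes) that are
#    sufficient for the outcome: whenever the bundle is present, adopted == 1.
sufficient = []
for size in (1, 2):
    for conj in combinations(conditions, size):
        instances = [case for case in cases if holds(case, conj)]
        if instances and all(case["adopted"] == 1 for case in instances):
            sufficient.append(conj)

# Keep only minimal bundles (drop any that contain a smaller sufficient bundle).
minimal = [c for c in sufficient
           if not any(set(other) < set(c) for other in sufficient)]

# 2. Check necessity: every positive case should exhibit at least one bundle,
#    i.e., the bundles form alternative causal routes to the outcome.
covered = all(any(holds(case, conj) for conj in minimal)
              for case in cases if case["adopted"] == 1)

print("minimal sufficient bundles:", minimal)
print("disjunction is necessary for the outcome:", covered)
```

On these toy data the sketch recovers two alternative routes to adoption, facilitation alone or leadership combined with training, which is the kind of disjunctive solution coincidence analysis reports.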
- What combinations of implementation strategies and contextual factors lead to successful adoption of an intervention?
- Are there multiple pathways to achieving effective implementation?
- How do different components of an implementation strategy interact to produce desired outcomes?
- What are the necessary conditions for sustaining an intervention over time?
Methodology
- ✪ Coincidence analysis: a new method for causal inference in implementation science (Implementation Science, 2020)
- 💻 Coincidence Analysis Methodology (University of Bergen)
Examples of Use
- ✪ Facility-level program components leading to population impact: a coincidence analysis of obesity treatment options within the Veterans Health Administration (Translational Behavioral Medicine, 2022)
- ✪ Identifying factors and causal chains associated with optimal implementation of Lynch syndrome tumor screening: An application of coincidence analysis (Genetics in Medicine, 2024)
- ✪ Engaging Operational Partners Is Critical for Successful Implementation of Research Products: a Coincidence Analysis of Access-Related Projects in the Veterans Affairs Healthcare System (Journal of General Internal Medicine, 2023)
- ✪ Uncovering determinants of perceived feasibility of TF-CBT through coincidence analysis (Implementation Research and Practice, 2024)
Concept Mapping
Concept mapping is a mixed-methods procedure that involves engaging stakeholders in a structured process to visually represent the relationships among a set of related concepts. This process typically includes brainstorming, sorting, and rating activities, followed by statistical analyses like multidimensional scaling and cluster analysis to create concept maps.
In implementation science, concept mapping is used in several ways. It helps identify and quantify factors affecting the implementation of evidence-based practices by involving stakeholders, ensuring that the identified factors are relevant and comprehensive. It also aids in developing conceptual models of implementation processes, which can guide the planning and execution of implementation strategies.
Additionally, concept mapping helps prioritize implementation strategies by having stakeholders rate their importance and feasibility. This was exemplified in the Expert Recommendations for Implementing Change (ERIC) study, in which experts used concept mapping to categorize and rate 73 discrete implementation strategies. The visual nature of concept maps facilitates communication among stakeholders, helping to build a shared understanding of the implementation process and the relationships among different factors. Concept mapping can also be used to assess readiness for implementation by identifying strengths and gaps in current practices and resources.
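The statistical core of concept mapping can be sketched in a few lines of Python: starting from a hypothetical matrix counting how often stakeholders sorted pairs of statements into the same pile, the example below converts co-occurrence to dissimilarity, places the statements on a two-dimensional map with multidimensional scaling, and groups them with hierarchical cluster analysis. The statements, counts, and choice of three clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical sorting data: co_sort[i, j] = number of stakeholders (out of 10)
# who placed statements i and j in the same pile during the sorting task.
statements = ["leadership support", "staff training", "protected time",
              "patient demand", "reimbursement", "clinic workflow fit"]
co_sort = np.array([
    [10,  6,  5,  1,  2,  3],
    [ 6, 10,  7,  2,  1,  4],
    [ 5,  7, 10,  1,  2,  5],
    [ 1,  2,  1, 10,  7,  4],
    [ 2,  1,  2,  7, 10,  3],
    [ 3,  4,  5,  4,  3, 10],
])

# Convert co-occurrence to dissimilarity: statements sorted together often are "close".
dissimilarity = 1 - co_sort / 10.0

# Multidimensional scaling places statements on a 2-D concept map.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical clustering groups nearby statements into candidate clusters.
condensed = dissimilarity[np.triu_indices(len(statements), k=1)]
clusters = fcluster(linkage(condensed, method="average"), t=3, criterion="maxclust")

for name, (x, y), c in zip(statements, coords, clusters):
    print(f"cluster {c}: {name:22s} ({x:+.2f}, {y:+.2f})")
```

Real concept-mapping studies layer stakeholder importance and feasibility ratings on top of a map like this to support the prioritization step described above.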
- What are the main challenges and supports for implementing a new evidence-based curriculum in schools?
- What are the key barriers and facilitators to implementing EBPs in primary care settings and how do different stakeholders (e.g., clinicians, administrators, patients) perceive these barriers and facilitators?
- How do different stakeholder groups (e.g., healthcare providers, patients, policymakers) prioritize implementation strategies for EBPs?
- What are the common strategies for customizing EBPs, and how effective are they?
Methodology
- ✪ Concept mapping: an introduction to structured conceptualization in health care (International Journal for Quality in Health Care, 2005)
- Methods to Improve the Selection and Tailoring of Implementation Strategies (The Journal of Behavioral Health Services & Research, 2017)
- ✪ A Systematic Review to Inform the Development of a Reporting Guideline for Concept Mapping Research (Methods & Protocols, 2023)
Examples of Use
- ✪ Fostering international collaboration in implementation science and research: a concept mapping exploratory study (BMC Research Notes, 2019)
- ✪ Aligning implementation and user-centered design strategies to enhance the impact of health services: results from a concept mapping study (Implementation Science Communications, 2020)
- ✪ Priority skills for equity-focused, evidence-based cancer control in community-based organizations: A group concept mapping analysis with academics and practitioners (Journal of Clinical and Translational Science, 2023)
- ✪ Use of concept mapping to inform a participatory engagement approach for implementation of evidence-based HPV vaccination strategies in safety-net clinics (Implementation Science Communications, 2024)
PAUSE AND REFLECT
❯ How will the study ensure the inclusion of diverse populations, especially those historically marginalized?
❯ What are the potential biases in the study design? Are there inherent biases in the chosen design that could affect the results? How can these biases be identified and mitigated?
❯ How will data be collected and analyzed? Are the data collection methods culturally sensitive and appropriate for all target populations? How will the data analysis account for differences across diverse groups?
❯ What are the ethical considerations? Are there ethical concerns related to the study design that could disproportionately affect certain groups? How will informed consent and confidentiality be ensured for all participants?
❯ What are the potential unintended consequences? Could the study design inadvertently reinforce existing inequities or create new ones? How will these risks be monitored and addressed?