
What is implementation science?

How we get “what works” to the people who need it, with greater speed, fidelity, efficiency, quality, and relevant coverage.

Defining Implementation Science

Interventions and evidence-based practices that are poorly implemented – or not implemented at all – do not produce expected health benefits.

Implementation science is the scientific study of methods and strategies that facilitate the uptake of evidence-based practice and research into regular use by practitioners and policymakers. The field of implementation science seeks to systematically close the gap between what we know and what we do (often referred to as the know-do gap) by identifying and addressing the barriers that slow or halt the uptake of proven health interventions and evidence-based practices.

Because implementation science is an emerging field, people and agencies have defined it in many different ways. Some definitions focus on closing the gap between what we know (research) and what we do (practice). Others emphasize creating solutions that can be easily adjusted to work well in different places. A non-exhaustive list of definitions includes:

  • ✪ Glasgow, Eckstein, and ElZarrad (2013): Implementation science is the 'application and integration of research evidence into practice and policy.'
  • ✪ Allotey et al (2008): Implementation science is 'applied research that aims to develop the critical evidence base that informs the effective, sustained and embedded adoption of interventions by health systems and communities.'
  • Peters et al (2013): Implementation science is 'the scientific inquiry into questions concerning implementation — the act of carrying an intention into effect, which in health research can be policies, programmes, or individual practices (collectively called interventions).'
  • 💻 National Institutes of Health, National Cancer Institute: 'Implementation science is the study of methods to promote the adoption and integration of evidence-based practices, interventions, and policies into routine health care and public health settings to improve the impact on population health.'

The Implementation Science Program in the UW Department of Global Health sees the fundamental question of implementation science as: How do we get "what works" to the people who need it, with greater speed, fidelity, efficiency, quality, and relevant coverage?

This inclusive stance values the systematic application of research methods from diverse disciplines that are critical for understanding the process, context, and outcomes of implementation, with the end goal of enabling scale-up and population-level benefits.

The UW Department of Global Health Implementation Science Program embraces thirteen research methods common to implementation science, ranging from randomized controlled trials to hybrid effectiveness-implementation trials to economic evaluation and ethnography. Visit Select Research Methods to learn more.

Implementation strategy vs intervention

Implementation research starts with evidence-based interventions and practices. These could be programs, practices, principles, procedures, products, pills, or policies. We call these the 7 Ps. What’s important is that these interventions should have some evidence supporting them.

The 7 Ps: Programs, Practices, Principles, Procedures, Products, Pills, Policies

A key distinction in implementation science is the one between the implementation strategy and the intervention or evidence-based practice (one of the 7 Ps) being implemented. Dr. Geoff Curran’s Implementation science made too simple makes the distinction clear:

Intervention/practice/innovation = the thing
Effectiveness research = study of whether the thing works
Implementation research = study of how best to help people or places do the thing
Implementation strategies = the stuff we do to try to help people or places do the thing
Implementation outcomes = how much and how well the people or places do the thing

Implementation outcomes vs health outcomes

‘Because an intervention or treatment will not be effective if it is not implemented well, implementation outcomes serve as necessary preconditions for attaining subsequent desired changes in clinical or service outcomes.’

Proctor et al, 2011

The main outcomes in effectiveness research are health outcomes, asking “does the thing improve health?” Implementation research focuses instead on how much or how well people and places do the thing. Examples of implementation outcomes include the percentage of clinicians delivering the thing, the percentage of eligible patients receiving the thing, the extent to which clinicians follow the guideline for the thing, or even the skill with which clinicians deliver the thing.

In a landmark moment for the field, Dr. Enola Proctor and colleagues published Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda, distinguishing implementation outcomes from service system outcomes and clinical treatment outcomes.

Implementation outcomes are the effects of deliberate and purposive actions to implement new treatments, practices, and services, and are conceptually distinct from the other two types of outcomes. Implementation outcomes serve three key functions:

  • indicators of implementation success
  • proximal indicators of implementation processes
  • key intermediate outcomes in relation to service system or clinical outcomes

Proctor et al (2011) Implementation Outcomes Definitions*

Acceptability: The perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory.

Adoption: The intention, initial decision, or action to try or employ an innovation or evidence-based practice. Adoption also may be referred to as “uptake.”

Appropriateness: The perceived fit, relevance, or compatibility of the innovation or evidence-based practice for a given practice setting, provider, or consumer; and/or perceived fit of the innovation to address a particular issue or problem. “Appropriateness” is conceptually similar to “acceptability,” and the literature reflects overlapping and sometimes inconsistent terms when discussing these constructs. We preserve a distinction because a given treatment may be perceived as appropriate but not acceptable, and vice versa.

Cost: The financial impact of an implementation effort. The true cost of implementing a treatment depends upon the costs of the particular intervention, the implementation strategy used, and the location of service delivery.

Feasibility: The extent to which a new treatment, or an innovation, can be successfully used or carried out within a given agency or setting. Typically, the concept of feasibility is invoked retrospectively as a potential explanation of an initiative’s success or failure, as reflected in poor recruitment, retention, or participation rates. While feasibility is related to appropriateness, the two constructs are conceptually distinct.

Fidelity: The degree to which an intervention was implemented as it was prescribed in the original protocol or as it was intended by the program developers. Fidelity has been measured more often than the other implementation outcomes, typically by comparing the original evidence-based intervention and the disseminated/implemented intervention in terms of (1) adherence to the program protocol, (2) dose or amount of program delivered, and (3) quality of program delivery.

Penetration: The integration of a practice within a service setting and its subsystems. From a service system perspective, the construct is also similar to “reach” in the RE-AIM framework.

Sustainability: The extent to which a newly implemented treatment is maintained or institutionalized within a service setting’s ongoing, stable operations.

* Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011 Mar;38(2):65-76. doi: 10.1007/s10488-010-0319-7. PMID: 20957426; PMCID: PMC3068522.

Throughout this website, open access articles are marked with the ✪ symbol and free online resources are marked with the 💻 symbol.
What is Implementation Science?, with Dr. Kenny Sherr (University of Washington, Department of Global Health)

Resources to Learn More


Videos from our friends

Orientation to the Science of Dissemination and Implementation, with Drs. Rinad Beidas, Cara C. Lewis, & Byron J. Powell (National Cancer Institute)
The Development of Implementation Science & Future Directions, with Dr. David Chambers (National Cancer Institute Fireside Chat)
Implementation Science: An Introductory Workshop for Researchers, Clinicians, Policy Makers and Community Members, with C. Hendricks Brown, PhD (Northwestern University, Center for Prevention Implementation Methodology)

Find Examples


Browse our Library of UW co-authored publications to see what our community is up to.

Visit the Library

Implementation research vs intervention effectiveness research

In a sense, implementation research picks up where effectiveness research leaves off. Once there is sufficient evidence to support doing the thing, scientific attention should shift to how to get the thing integrated into routine practice. Implementation science differs from intervention research in that it focuses on the strategies (the stuff we do to help people or places do the thing) used to implement evidence-based practices and interventions (the thing), rather than on intervention effectiveness alone.

Implementation Science

Aim: To evaluate an implementation strategy
Outcomes: Acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration/reach, and sustainability
Unit of analysis and randomization: The clinician, team, facility, or organization

Intervention Research

Aim: To assess a specific intervention
Outcomes: Effectiveness, safety, patient-reported outcomes (quality of life, satisfaction), behavioral changes, and changes in health metrics
Unit of analysis and randomization: Patient/recipient, groups of patients/recipients, organizations, or communities

Although implementation research and effectiveness research differ from each other, hybrid effectiveness-implementation study designs make it possible to answer implementation research questions and effectiveness research questions in the same study.

Overlapping fields of study

Several other newly emerging fields of science overlap with implementation science, including improvement science, knowledge translation, program science, and delivery science. The main distinction among these highly related fields is their focal point.

Improvement science ‘deploys rapid tests of change to guide the development, revision and continued fine-tuning of new tools, processes, work roles and relationships’ and is ‘a methodology for using disciplined inquiry to solve a specific problem of practice’ (💻 Improvement Science at the Carnegie Foundation). The focus of improvement science is narrower than that of implementation science, as it seeks to maximize the impact of lessons learned from a specific improvement effort, with the intent of maximizing local benefits from local solutions. Learn more about how these two fields relate in Bridging the Silos: A Comparative Analysis of Implementation Science and Improvement Science (Frontiers in Health Services, 2022).

Delivery science ‘is applied research that evaluates clinical or organizational practices that systems can implement or encourage’ (✪ Lieu & Madvig, 2019). It has a broader focus than implementation science, targeting health systems strengthening and addressing the societal factors that shape public policy and system functioning.

Knowledge translation (KT) is ‘the exchange, synthesis, and ethically sound application of research findings within a complex system of relationships among researchers and knowledge users’ (✪ Khalil, 2016). KT differs from implementation science in that it does not cover how to implement knowledge.

Program science focuses on ‘the totality of a program, including an appraisal of the epidemic transmission dynamics, setting appropriate prevention objectives by sub-population, selecting and combining interventions and allocating resources between interventions accordingly’ (💻 Program Science: The Concept and Applications). Implementation science, by contrast, focuses on the uptake of an evidence-based practice into programmatic settings, not on improving a whole program in and of itself.