
UW Publication Library:
View by Implementation Outcome

Why are implementation outcomes important?

Studying implementation outcomes bridges the gap between research and practice, ensuring that evidence-based interventions reach their intended beneficiaries effectively.

The study of implementation outcomes is crucial to improving health equity for several reasons, including:

Effective Implementation Strategies: Understanding implementation outcomes helps identify which implementation strategies work best in real-world settings. Even the most evidence-based interventions may fail if not implemented effectively.

Optimizing Resources: By assessing outcomes like feasibility and cost, organizations can allocate resources efficiently. Implementing an intervention with limited resources requires strategic decision-making.

Tailoring Implementation Strategies: Implementation outcomes guide adaptation. Researchers and practitioners can modify strategies to fit specific contexts, improving their chances of success.

Quality Improvement: Monitoring fidelity and penetration allows for continuous improvement. Feedback on implementation informs adjustments and refinements.

Policy and Practice: Policymakers use implementation outcomes to inform decisions about scaling up interventions. Successful implementation drives policy changes.

Below, you can explore our archive by implementation outcome to see examples of each across a range of journals. Open access articles are marked with the ✪ symbol.

Acceptability

Acceptability is the perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory. Lack of acceptability has long been noted as a challenge in implementation (Davis 1993). The referent of the implementation outcome “acceptability” (or the “what” is acceptable) may be a specific intervention, practice, technology, or service within a particular setting of care. Acceptability should be assessed based on the stakeholder’s knowledge of or direct experience with various dimensions of the treatment to be implemented, such as its content, complexity, or comfort.

Acceptability is different from the larger construct of service satisfaction, as typically measured through consumer surveys. Acceptability is more specific, referencing a particular treatment or set of treatments, while satisfaction typically references the general service experience, including such features as waiting times, scheduling, and office environment. Acceptability may be measured from the perspective of various stakeholders, such as administrators, payers, providers, and consumers. We presume rated acceptability to be dynamic, changing with experience. Thus ratings of acceptability may be different when taken, for example, pre-implementation and later throughout various stages of implementation. (Excerpted from Proctor et al., 2011)

Browse the articles below to see how UW researchers are studying acceptability across contexts.

Adoption

Adoption is defined as the intention, initial decision, or action to try or employ an innovation or evidence-based practice. Adoption also may be referred to as “uptake.” Our definition is consistent with those proposed by Rabin et al. (2008) and Rye and Kimberly (2007). Adoption could be measured from the perspective of provider or organization. (Excerpted from Proctor et al., 2011)

Browse the articles below to see how UW researchers are studying adoption across contexts.

Appropriateness

Appropriateness is the perceived fit, relevance, or compatibility of the innovation or evidence-based practice for a given practice setting, provider, or consumer; and/or perceived fit of the innovation to address a particular issue or problem. “Appropriateness” is conceptually similar to “acceptability,” and the literature reflects overlapping and sometimes inconsistent terms when discussing these constructs. We preserve a distinction because a given treatment may be perceived as appropriate but not acceptable, and vice versa. For example, a treatment might be considered a good fit for treating a given condition but its features (for example, a rigid protocol) may render it unacceptable to the provider.

The construct “appropriateness” is deemed important for its potential to capture some “pushback” to implementation efforts, as is seen when providers feel a new program is a “stretch” from the mission of the health care setting, or is not consistent with providers’ skill set, role, or job expectations. For example, providers may vary in their perceptions of the appropriateness of programs that co-locate mental health services within primary medical, social service, or school settings. Again, a variety of stakeholders will likely have perceptions about a new treatment’s or program’s appropriateness to a particular service setting, mission, providers, and clientele. These perceptions may be a function of the organization’s culture or climate (Klein and Sorra 1996). (Excerpted from Proctor et al., 2011)

Browse the articles below to see how UW researchers are studying appropriateness across contexts.

Cost

Cost (incremental or implementation cost) is defined as the cost impact of an implementation effort. Implementation costs vary according to three components. First, because treatments vary widely in their complexity, the costs of delivering them will also vary. Second, the costs of implementation will vary depending upon the complexity of the particular implementation strategy used. Finally, because treatments are delivered in settings of varying complexity and overheads (ranging from a solo practitioner’s office to a tertiary care facility), the overall costs of delivery will vary by the setting.

The true cost of implementing a treatment, therefore, depends upon the costs of the particular intervention, the implementation strategy used, and the location of service delivery. (Excerpted from Proctor et al., 2011)
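As a rough sketch of this decomposition (an illustration added here, not part of the Proctor et al. excerpt), the cost of an implementation effort can be thought of as the sum of the three components described above:

\[
\text{Implementation cost} \approx \text{intervention delivery cost} + \text{implementation strategy cost} + \text{setting-dependent overhead}
\]

Under this sketch, the same intervention delivered with the same strategy would still cost more in a tertiary care facility than in a solo practitioner’s office, purely because of the third term.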

Browse the articles below to see how UW researchers are studying cost across contexts.

Feasibility

Feasibility is defined as the extent to which a new treatment, or an innovation, can be successfully used or carried out within a given agency or setting (Karsh 2004). Typically, the concept of feasibility is invoked retrospectively as a potential explanation of an initiative’s success or failure, as reflected in poor recruitment, retention, or participation rates. While feasibility is related to appropriateness, the two constructs are conceptually distinct. For example, a program may be appropriate for a service setting, in that it is compatible with the setting’s mission or service mandate, but may not be feasible due to resource or training requirements. (Excerpted from Proctor et al., 2011)

Browse the articles below to see how UW researchers are studying feasibility across contexts.

Fidelity

Fidelity is defined as the degree to which an intervention was implemented as it was prescribed in the original protocol or as it was intended by the program developers (Dusenbury et al. 2003; Rabin et al. 2008). Fidelity has been measured more often than the other implementation outcomes, typically by comparing the original evidence-based intervention and the disseminated/implemented intervention in terms of (1) adherence to the program protocol, (2) dose or amount of program delivered, and (3) quality of program delivery. Fidelity has been the overriding concern of treatment researchers who strive to move their treatments from the clinical lab (efficacy studies) to real-world delivery systems.

The literature identifies five implementation fidelity dimensions including adherence, quality of delivery, program component differentiation, exposure to the intervention, and participant responsiveness or involvement (Mihalic 2004; Dane and Schneider 1998). Adherence, or the extent to which the therapy occurred as intended, is frequently examined in psychotherapy process and outcomes research and is distinguished from other potentially pertinent implementation factors such as provider skill or competence (Hogue et al. 1996). Fidelity is measured through self-report, ratings, and direct observation and coding of audio- and videotapes of actual encounters, or provider-client/patient interaction. Achieving and measuring fidelity in usual care is beset by a number of challenges (Proctor et al. 2009; Mihalic 2004; Schoenwald et al. 2005). The foremost challenge may be measuring implementation fidelity quickly and efficiently (Hayes 1998). (Excerpted from Proctor et al., 2011)
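As a simplified illustration (added here, not part of the excerpt), the adherence dimension listed above is often summarized as a simple proportion, with dose and quality of delivery assessed separately:

\[
\text{Adherence} = \frac{\text{protocol components delivered as intended}}{\text{protocol components prescribed}}
\]

For example, a site that delivers 8 of the 10 components prescribed in a manualized protocol would, on this simple measure, have an adherence of 0.8.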

Browse the articles below to see how UW researchers are studying fidelity across contexts.

Penetration

Penetration is defined as the integration of a practice within a service setting and its subsystems. This definition is similar to Stiles et al.’s (2002) notion of service penetration and to Rabin et al.’s (2008) notion of niche saturation. Studying services for persons with severe mental illness, Stiles et al. (2002) apply the concept of service penetration to service recipients (the number of eligible persons who use a service, divided by the total number of persons eligible for the service). Penetration also can be calculated in terms of the number of providers who deliver a given service or treatment, divided by the total number of providers trained in or expected to deliver the service.

From a service system perspective, the construct is also similar to “reach” in the RE-AIM framework (Glasgow 2007b). We found infrequent use of the term penetration in the implementation literature, though studies seemed to tap into this construct with terms such as a given treatment’s level of institutionalization. (Excerpted from Proctor et al., 2011)
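To make the two calculations described in the excerpt concrete (the worked numbers below are hypothetical, added for illustration), penetration can be expressed as a ratio at either the recipient or the provider level:

\[
\text{Penetration (recipients)} = \frac{\text{eligible persons who used the service}}{\text{total persons eligible for the service}}
\]

\[
\text{Penetration (providers)} = \frac{\text{providers delivering the service}}{\text{providers trained in or expected to deliver it}}
\]

For example, if 40 of 200 eligible clients received a newly implemented service, recipient-level penetration would be 40/200 = 0.20, or 20%.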

Browse the articles below to see how UW researchers are studying reach and penetration across contexts.

Sustainability

Sustainability is defined as the extent to which a newly implemented treatment is maintained or institutionalized within a service setting’s ongoing, stable operations. The literature reflects quite varied uses of the term “sustainability,” but our proposed definition incorporates aspects of those offered by Johnson et al. (2004), Turner and Sanders (2006), Glasgow et al. (1999), Goodman et al. (1993), and Rabin et al. (2008).

Rabin et al. (2008) emphasize the integration of a given program within an organization’s culture through policies and practices, and distinguish three stages that determine institutionalization: (1) passage (a single event such as transition from temporary to permanent funding), (2) cycle or routine (i.e., repetitive reinforcement of the importance of the evidence-based intervention by including it in organizational or community procedures and behaviors, such as the annual budget and evaluation criteria), and (3) niche saturation (the extent to which an evidence-based intervention is integrated into all subsystems of an organization). Thus the outcomes of “penetration” and “sustainability” may be related conceptually and empirically, in that higher penetration may contribute to long-term sustainability. Such relationships require empirical testing. (Excerpted from Proctor et al., 2011)

Browse the articles below to see how UW researchers are studying sustainability across contexts.