
TIDIRC: Where you go to learn, meet new collaborators, and enhance your science

Dr. McKenna Claire Eastment

We asked Dr. McKenna Eastment (Department of Medicine, Division of Allergy & Infectious Diseases) to share her experience attending TIDIRC 2019 (the Training Institute for Dissemination and Implementation Research in Cancer) with us. She generously agreed to do so, and is our guest author this week.

 

Guest author: McKenna Eastment, MD, MPH

TIDIRC 2019: Where you go to learn, meet new collaborators, and enhance your science

I had the incredible opportunity to participate in TIDIRC 2019—the Training Institute for Dissemination and Implementation Research in Cancer. This training consisted of online modules with readings, recorded lectures, and assignments focusing on different aspects of implementation science. These included theories, models, frameworks, study designs, fidelity and adaptation, and outcome measurement.

Throughout the course we each worked on a concept note for a grant; I worked on my NCI-funded K08 award, which focuses on improving cervical cancer screening implementation in Mombasa County, Kenya. This training helped me delve deeper into my methods and the outcomes I am proposing to collect.

One area that I have learned a great deal about through TIDIRC (but still have a ways to go!) is theories, frameworks, and models. Additionally, the readings and discussion have helped me distinguish between my intervention (cervical cancer screening) and my implementation strategy (SAIA, the Systems Analysis and Improvement Approach). Lastly, an area I was able to focus on specifically was thinking about the ‘core components’ of SAIA and what that means.

There are several other studies testing SAIA across a variety of health conditions (pediatric HIV, mental health, PMTCT, hypertension), and we are trying to coordinate our efforts across all studies to really understand what makes SAIA, SAIA. TIDIRC 2019 provided a valuable space for me to think critically about this.

Overall, this was a great course with engaged faculty mentors and fellow students. The course culminated in an in-person two-day workshop held at the National Cancer Institute in Maryland, where we heard talks from leaders in the field of implementation science, met with other members of our virtual group (mine focused on HPV, youth, and screening), and continued working on our concept notes. Our group was led by three amazing facilitators: Cindy Vinson, Heather Brandt, and UW’s own Bryan Weiner.

I have immensely enjoyed the learning and collegiality of TIDIRC 2019! I felt I gained colleagues in the field of implementation science and potential future collaborators. Our group worked well together, and we agreed to continue sharing our work and providing feedback after the course ended. We have even made tentative plans to get together at the annual D&I conference in December in Washington, DC. I am so thankful I had this opportunity to attend TIDIRC!

 

Implementation climate matters: Evidence from three new studies

Implementation climate is a construct that figures prominently in the inner setting domain of the Consolidated Framework for Implementation Research (CFIR). Three recently published articles provide quantitative evidence that implementation climate matters. Moreover, its contribution to implementation outcomes differs from the contribution of broader organizational climate.

What is Implementation Climate?

In 1996, Katherine Klein and Joanne Sorra developed the construct of implementation climate based on an extensive review of the determinants of effective implementation of information technology, noting that organizations use a wide variety of policies and practices to promote innovation use. Examples include training, technical support, incentives, persuasive communication, end-user participation in decision making, workflow changes, workload changes, alterations in staffing levels, alterations in staffing mix, new reporting requirements, new authority relationships, implementation monitoring, and enforcement procedures.

Not only do organizations vary in their use of specific ‘implementation policies and practices,’ but the effectiveness of these policies and practices varies from organization to organization and innovation to innovation. In light of such diversity in organizational practice and variability in effectiveness, Klein and Sorra proposed the construct of implementation climate to shift attention to the collective influence of the multiple policies and practices that organizations employ to promote innovation use.

Implementation climate is a shared perception among intended users of an innovation that innovation use is expected, supported, and rewarded. The stronger the implementation climate, they asserted, the more consistent, high-quality innovation use will be in an organization. Moreover, if implementation climates of equal strength can result from different combinations of implementation policies and practices, as Klein and Sorra claim, then a focus on implementation climate could bring greater clarity to scientific knowledge about the organizational determinants of innovation implementation.

Implementation climate is a central construct in the inner setting domain of the Consolidated Framework for Implementation Research (CFIR). Over the years, implementation scientists have introduced the construct of implementation climate to the field and situated it within a theory of organizational determinants of implementation effectiveness (Weiner et al., 2009), examined the role of implementation climate qualitatively in case studies, discussed the meaning and measurement of implementation climate (Weiner et al., 2011), evaluated existing measures of implementation climate, and developed new measures with stronger psychometric properties (Jacobs et al., 2011).

Even with all of this work, implementation scientists had not demonstrated quantitatively that implementation climate matters, nor had they empirically disentangled the specific construct of implementation climate from the broader construct of organizational climate, which captures how the work environment is perceived by organizational members. They have now.

New Evidence from Three Studies

In their 2018 publication, Kea Turner and her colleagues tested Klein and Sorra’s organizational theory of innovation implementation effectiveness in a community pharmacy medication management program. Using hurdle regression analysis, they examined whether organizational determinants, such as implementation climate and innovation-values fit, were associated with effective implementation. They defined effective implementation in two ways: implementation versus non-implementation and program reach (i.e., the proportion of the target population that received the intervention).

Turner’s team observed that implementation climate was positively and significantly associated with implementation versus non-implementation and with program reach. In addition, they observed that innovation-values fit (the extent to which innovation use is consistent with intended users’ values) moderated the relationship between implementation climate and implementation effectiveness, just as the theory predicted.

For implementation climate researchers, these are exciting findings, as they add to prior qualitative research indicating that implementation climate matters. Moreover, as Turner and her colleagues note, implementation climate is a modifiable factor that can be targeted with implementation strategies.
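
To make the analytic approach concrete, here is a minimal two-part (hurdle-style) sketch in Python. The dataset, file name, and variable names (implemented, reach, implementation_climate, innovation_values_fit) are hypothetical, and the second part uses a simple linear model for illustration; this is not the published analysis.

```python
# Minimal two-part (hurdle-style) sketch; all names are hypothetical and
# this illustrates the general approach, not the published model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pharmacy_program.csv")  # hypothetical dataset

# Part 1: the "hurdle" -- did a site implement the program at all?
# The interaction term tests whether innovation-values fit moderates
# the effect of implementation climate, as the theory predicts.
hurdle = smf.logit(
    "implemented ~ implementation_climate * innovation_values_fit", data=df
).fit()

# Part 2: among implementing sites only, model program reach.
implementers = df[df["implemented"] == 1]
reach = smf.ols(
    "reach ~ implementation_climate * innovation_values_fit",
    data=implementers,
).fit()

print(hurdle.summary())
print(reach.summary())
```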

In a second recently published study, Nate Williams and his colleagues tested the hypothesis that organizational climate and implementation climate have joint, cross-level effects on clinicians’ implementation of evidence-based practices in behavioral health organizations. Specifically, they proposed that organizational climate moderates implementation climate’s current and long-term relationships with clinicians’ use of evidence-based practice, such that a strategic implementation climate will have its most positive effects when accompanied by a positive organizational climate.

Sure enough, they observed that in organizations with more positive organizational climates at baseline, higher levels of implementation climate predicted increased evidence-based practice use among clinicians who were present at baseline and among clinicians who were present in the organizations at 2-year follow-up. However, in organizations with less positive organizational climates, implementation climate was not related to clinicians’ use of evidence-based practice at either time point.

Again, these are exciting findings as they demonstrate that organizational climate and implementation climate are distinct constructs. Moreover, optimizing implementation requires attention to both constructs, as strategies that focus solely on strengthening implementation climate might not promote effective implementation unless the organization possesses or engenders a positive organizational climate.

In a third recently published study, Michael Pullmann and his colleagues tested the hypothesis that implementation climate is associated with the intensity of workplace-based clinical supervision for evidence-based treatment delivery for children. They noted that workplace-based clinical supervision, in which supervisors provide oversight, feedback, and training to clinicians on clinical practice, is a promising strategy for supporting high-fidelity implementation of evidence-based mental health treatment for children, such as trauma-focused cognitive behavioral therapy (TF-CBT). In a state-funded EBT training initiative in public mental health in Washington State, they examined whether a positive implementation climate supported more intense (i.e., frequent and thorough) coverage of core TF-CBT content areas in clinical supervision sessions. A nifty feature of this study is that the authors captured the intensity of supervisors’ coverage of core TF-CBT content areas not with self-reported measures from supervisors but with coded audio-recordings of supervisory sessions with clinicians.

Using three-level mixed effects models, Pullmann and his colleagues found that implementation climate was significantly and positively associated with two core TF-CBT content areas: exposure, a clinical intervention component and active ingredient of TF-CBT; and assessment, a structural element that supports TF-CBT delivery by guiding treatment decisions and monitoring client progress. They concluded that “a climate that supports, expects, and rewards EBT use may be one of the most important factors for improving the degree to which supervisors cover EBT in their supervision sessions.”
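
As a rough illustration of how such a three-level structure (supervision sessions nested within supervisors, nested within organizations) might be specified, here is a sketch using statsmodels; the file and variable names are invented, and this is not the authors' actual code.

```python
# Hypothetical sketch of a three-level mixed model: sessions nested within
# supervisors, nested within organizations. Names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("supervision_sessions.csv")  # hypothetical data

model = smf.mixedlm(
    "exposure_coverage ~ implementation_climate",  # fixed effect of climate
    data=df,
    groups="organization",                          # level-3 random intercepts
    re_formula="1",
    vc_formula={"supervisor": "0 + C(supervisor)"}, # level-2 random intercepts
)
print(model.fit().summary())
```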

Conclusion

The results of these three studies have significant implications not only for the development of implementation theory, but also for the deployment of implementation strategies in organizations seeking to promote the use or delivery of evidence-based practices in health and mental health.

Author: Dr. Bryan J. Weiner



An introduction to stakeholder and policy analysis

It is sometimes difficult to understand the difference between policy goals and programmatic and/or intervention goals. Specifically, a policy is a plan for future action adopted by a governmental entity and formulated through the political process.

For example, a programmatic action that focuses on engaging with women’s groups to change local perceptions around HPV vaccination is not, by itself, a public policy change, as it does not necessarily have to be adopted by a governmental entity and does not touch the political process. Similarly, for road traffic accidents, training emergency providers in better treatment or transport of trauma patients also does not necessarily intersect with the government or need to be formulated through the political process.

In contrast, enacting a law to increase the drinking age from 18 to 21 to control road traffic injuries or instituting taxes on tobacco products for cardiovascular disease prevention are examples of policy changes, as they necessitate engaging government structures within the political process. Similarly, for HPV vaccination campaigns, creating legislation that requires all girls who attend school to be vaccinated is also a policy change. However, even local efforts to work with women’s groups to change perceptions regarding HPV vaccines could be re-framed into a policy change. Perhaps the government could enact a guideline requiring that all public clinics hire community outreach workers focused on HPV vaccination messaging. While this might still mobilize women’s groups to engage with HPV vaccination campaigns, it would do so via a government guideline or law.

Health leaders around the world frequently need to prioritize one potential policy change over another. Often “implementation feasibility” or “cost of enacting the policy change” are primary considerations, but there are many other factors to include in this decision-making process. One available resource for weighing one intervention against another is the Disease Control Priorities Project, which provides a table of common interventions and their cost-effectiveness.

During the comparison process, scoring proposed policy changes against standardized criteria within a matrix can be a great way to make the process open and transparent. A transparent policy selection process can be very compelling to stakeholders who want to understand why you are advocating for one policy change over another.
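
As a toy illustration of such a matrix, the sketch below scores two hypothetical policy options against weighted criteria; the criteria, weights, and scores are invented for the example.

```python
# Illustrative policy scoring matrix; criteria, weights, and scores
# are invented for this example, not drawn from any real analysis.
criteria_weights = {
    "implementation_feasibility": 0.3,
    "cost_of_enactment": 0.2,
    "expected_health_impact": 0.4,
    "political_acceptability": 0.1,
}

# Each policy is scored 1 (worst) to 5 (best) on each criterion.
policies = {
    "Raise tobacco tax": {
        "implementation_feasibility": 4,
        "cost_of_enactment": 5,
        "expected_health_impact": 4,
        "political_acceptability": 2,
    },
    "School-entry HPV vaccination requirement": {
        "implementation_feasibility": 3,
        "cost_of_enactment": 3,
        "expected_health_impact": 5,
        "political_acceptability": 3,
    },
}

# Weighted sum makes the basis for prioritization explicit and auditable.
for name, scores in policies.items():
    total = sum(criteria_weights[c] * s for c, s in scores.items())
    print(f"{name}: weighted score = {total:.2f}")
```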

In stakeholder and policy analysis, it is critical to identify appropriate stakeholders and understand their influence on the policy process, their motivators given the proposed policy change, and strategies for engaging each stakeholder in the policy process. It can be tempting to focus on stakeholders who are champions of the proposed change and to frame their motivators in positive/supportive terms, rather than identifying stakeholders who might be opposed to a given policy change. However, it is often the opposed stakeholders whom you need to work hardest to understand and reach. One successful strategy is to engage dissenting stakeholders in coalitions with other stakeholders whom you have identified as supportive.
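
A simple stakeholder register can make this analysis explicit. The sketch below is a toy example for a hypothetical tobacco-tax proposal; the stakeholders, ratings, and strategies are invented, and it simply sorts high-influence opponents to the top of the engagement list.

```python
# Toy stakeholder register for a hypothetical tobacco-tax proposal;
# names, ratings, and strategies are invented for illustration.
stakeholders = [
    # (name, influence 1-5, position: -2 opposed .. +2 champion, strategy)
    ("Ministry of Health",        5,  2, "co-author the policy brief"),
    ("Ministry of Finance",       5,  0, "present revenue projections"),
    ("Tobacco retailers assoc.",  3, -2, "invite into coalition meetings"),
    ("Community health workers",  2,  2, "mobilize grassroots support"),
]

# Prioritize high-influence opponents: they need the most engagement effort.
for name, influence, position, strategy in sorted(
    stakeholders, key=lambda s: (s[2], -s[1])
):
    label = "opposed" if position < 0 else "neutral" if position == 0 else "supportive"
    print(f"{name:26s} influence={influence} ({label}) -> {strategy}")
```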

Lastly, be sure to think critically about the motivations of all stakeholders. For example, a policy change that increases taxes on cigarettes or alcohol may address a public health issue by decreasing disease burden, but it is important to understand whether the general population (which includes people who smoke and/or drink) would support the policy change. Many people in the community might oppose the policy because they see themselves as at low risk of the associated diseases but affected by the increased financial burden. Individuals who are alcohol dependent or addicted to cigarettes will see the price of their addiction increase, and while many of them may want to quit, there are systemic and structural issues that can make this difficult. Be sure you consider all stakeholders and how you could develop strategies to engage broad groups with diverse interests, many of which may not be explicitly or publicly stated.

Authors: Dr. Arianna Rubin Means & Dr. Brad Wagenaar



An introduction to quality improvement

Quality improvement (QI) works by testing changes through multiple rapid PDSA (Plan-Do-Study-Act) cycles, so QI often involves implementing changes to existing processes that are practical and, ideally, simple. Identifying appropriate change concepts, change innovations, and tests of change, and selecting an appropriate outcome variable, are all necessary aspects of PDSA, but doing so can be challenging.

When identifying a change concept to target, it can be tempting to choose one more akin to a traditional research question, such as exploring how to design valid diabetes clinical screening guidelines. Instead, a QI change concept could focus on optimizing a specific aspect of the delivery of these evidence-based diabetes guidelines.

There is a bit of an art to defining a change concept that is neither too general nor too specific. A change concept such as “Increase HIV testing” is probably too general, while a change concept such as “Increase HIV testing by providing incentives in the form of branded t-shirts” is probably too specific. You want an overall change concept that is specific enough to guide your tests of change, but also general enough to allow multiple different approaches to be tested if your first test of change fails to have the desired impact. One possibility for a change concept would be to increase the number of people consenting to be tested for HIV in health facilities.

When determining change innovations that build upon your change concept, you may want to avoid defining a multi-component innovation. Whether or not you observe improvements during your QI work, you may face difficulty untangling which part of your innovation contributed to the result you observed. One example of a testable innovation linked to the change concept above is to improve patient satisfaction with HIV testing processes. All of your tests of change, therefore, would be focused on increasing the number of people who consent to be tested (change concept) by improving satisfaction with HIV testing processes (change innovation).

When designing tests of change, it is important to think about how such tests are linked to their stated change concept. For example, if your change concept is to increase voluntary participation in HIV testing at a health facility, it likely would not be helpful for your test of change to involve monitoring antiretroviral adherence among HIV-infected populations. Rather, you might want to focus on tests of change such as launching new sensitization messaging targeting individuals in the waiting room, or renovating HIV testing spaces to make them more comfortable or private.
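
To show how these pieces fit together, the sketch below lays out the running HIV-testing example as a simple hierarchy; the content comes directly from the text above, structured in Python purely for illustration.

```python
# Toy representation of the QI hierarchy described above; the content is
# the worked HIV-testing example from the text, structured for illustration.
qi_plan = {
    "change_concept": "Increase the number of people consenting to HIV testing",
    "change_innovation": "Improve patient satisfaction with HIV testing processes",
    "tests_of_change": [
        "New sensitization messaging in the waiting room",
        "Renovate HIV testing spaces for comfort and privacy",
    ],
}

# Each test of change should trace back through the innovation to the concept.
for test in qi_plan["tests_of_change"]:
    print(f"Test '{test}' -> innovation '{qi_plan['change_innovation']}' "
          f"-> concept '{qi_plan['change_concept']}'")
```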

It is also critical to keep in mind the scale of your proposed test of change. PDSA tests of change are typically quick and do not need to be conducted at a large scale with a very large sample size. Your tests of change can even target a few patients being seen by one provider, and then progressively be scaled out to larger sample sizes if the results are promising. Initial ideas that are too large in scale for PDSA (e.g., testing a community-wide sensitization strategy) can often be brought down to a much smaller, more PDSA-appropriate scale (e.g., testing sensitization meetings with 2-3 religious leaders in the community).

Another exciting aspect of QI is that frontline staff play a key role in determining the tests of change that are appropriate for addressing a specific outcome. These are the staff most likely to understand how their particular system and context operate, and therefore are best placed to make or suggest necessary adaptations to an intervention. You should spend time thinking about how you will meaningfully involve frontline staff in identifying and evaluating tests of change in your QI work.

Identifying an appropriate outcome for your QI work is important so that you have a helpful benchmark of success. With QI, you want a clear relationship between the change you are testing and the metric you are using to measure results, so that you can rapidly evaluate whether your change should be continued as-is, scaled up, modified, or discontinued. Given the example above, a possible outcome metric is to increase the proportion of adult patients who consent to HIV testing from 12% (baseline) to 25% within 3 months.
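
A minimal sketch of tracking such a metric across PDSA cycles appears below; the baseline (12%) and target (25%) come from the worked example above, while the cycle labels and counts are invented for illustration.

```python
# Minimal PDSA outcome-tracking sketch; baseline and target are from the
# worked example in the text, cycle labels and counts are invented.
BASELINE, TARGET = 0.12, 0.25

# (cycle label, number consenting to HIV testing, number offered a test)
cycles = [
    ("Cycle 1: waiting-room messaging", 18, 120),
    ("Cycle 2: private testing space",  31, 115),
    ("Cycle 3: both changes combined",  34, 110),
]

# Evaluate each cycle against the target to decide whether to adopt,
# adapt, or abandon the change being tested.
for label, consented, offered in cycles:
    rate = consented / offered
    status = "target met" if rate >= TARGET else "continue testing changes"
    print(f"{label}: {rate:.0%} consent (baseline {BASELINE:.0%}, "
          f"target {TARGET:.0%}) -> {status}")
```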

Additionally, some clinical outcomes are either expensive to measure or take years to observe. Morbidity and mortality are often too distal or too difficult to measure rapidly and repeatedly in QI (unless, perhaps, you are working on QI process improvement in a high-volume trauma care center where mortality is frequently recorded). Thus, QI is often best positioned to maximize program outputs or outcomes rather than program impact.

The main point? With PDSA cycles, simple metrics can be most helpful given that you will evaluate the outcome frequently and will want the outcome to be inexpensive and straightforward to measure.

Authors: Dr. Arianna Rubin Means & Dr. Brad Wagenaar



Welcome!

Welcome to the new online home for implementation science at the University of Washington! This resource has been created by the Department of Global Health’s Implementation Science Program to help further the ongoing implementation research and education at the University of Washington, already a global hub for this young field of study. Our three main aims are to:

  • provide an introduction to the field of implementation science for students and new researchers, curating selections of supporting resources for further study
  • facilitate new research in the field of implementation science through our 8 Step Research Guide, which walks through implementation research design from identifying an implementation science question through reporting results
  • cross disciplinary lines, bringing together critical developments from across the university's many departments already conducting implementation science

Throughout the website you will find links to external resources, from journal articles to webinars, intended to further your understanding of a particular topic or to provide examples of use. Although we have done our best to provide resources that are open access, there will occasionally be links to journal articles requiring a subscription to access the full content. We will continue to add new resources and monitor the functionality and utility of existing resources. Please don't hesitate to contact us if you think we're missing a key resource or you come across a non-functioning link.

For now, happy exploring!

