
Step 6: Choose Measures

The state of measurement in implementation science

Measurement in implementation science is currently characterized by both significant progress and notable challenges.

As a relatively new field, implementation science has seen the development of numerous measures aimed at evaluating various aspects of implementation processes and outcomes. However, many of these measures have not undergone rigorous psychometric evaluation, raising concerns about their validity and reliability. This lack of standardized, validated measures makes it difficult to compare results across studies and hinders the accumulation of generalizable knowledge.

A primary issue in the field is the proliferation of measures developed for a particular study or context and rarely used more than once. This practice limits the ability to build a cohesive body of evidence and makes it challenging to draw broader conclusions about implementation strategies and their effectiveness. Additionally, there are no minimal reporting standards for these measures, which further complicates efforts to assess their quality and applicability.

Current debates in the field revolve around several key issues. One major debate concerns the balance between developing context-specific measures and creating more generalizable tools that can be applied across different settings and populations. While context-specific measures can provide detailed insights into particular interventions, they often lack the broader applicability needed for generalizable research findings. Another ongoing discussion focuses on the need for measures that not only have strong psychometric properties but are also pragmatic and feasible for use in real-world settings, including the burden they place on respondents and the practicality of administering them in varied contexts.

Furthermore, there is a growing emphasis on the importance of stakeholder involvement in the development of implementation measures. Engaging practitioners, policymakers, and other stakeholders in the measurement development process can help ensure that the tools are relevant, useful, and feasible for those who will be using them. This collaborative approach is seen as essential for advancing the field and improving the quality and utility of implementation science research.

Overall, while the field of implementation science has made significant strides in developing measures, ongoing efforts are needed to address these challenges and debates to enhance the rigor and impact of research in this area.

The Instrument Review Project: Measures of Implementation Science in Behavioral Health

There is extensive work underway in the field to map the measurement landscape and evaluate the strength of the psychometric evidence for measures in use. One example is the Instrument Review Project, funded by SIRC and NIMH and headed by Dr. Cara Lewis, in which instruments used in behavioral health to answer implementation science questions were evaluated for their psychometric and pragmatic strength. You can read international advisory board member Dr. Michel Wensing’s reflection on this work in the open access journal Implementation Research and Practice.

Associated Publications

Instrumentation issues in implementation science (Implementation Science, 2014)

The Society for Implementation Research Collaboration Instrument Review Project: A methodology to promote rigorous evaluation (Implementation Science, 2015)

Advancing implementation science through measure development and evaluation: A study protocol (Implementation Science, 2015)

Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria (Implementation Science, 2015)

Measurement resources for dissemination and implementation research in health (Implementation Science, 2016)

Toward criteria for pragmatic measurement in implementation research and practice: A stakeholder-driven approach using concept mapping (Implementation Science, 2017)

Psychometric assessment of three newly developed implementation outcome measures (Implementation Science, 2017)

Operationalizing the ‘pragmatic’ measures construct using a stakeholder feedback and a multi-method approach (BMC Health Services Research, 2018)

An updated protocol for a systematic review of implementation-related measures (Systematic Reviews, 2018)

Stakeholder perspectives and use of implementation science measurement tools (Conference Presentation, 2018)

Measures and Outcomes in Implementation Science (Advancing the Science of Implementation Across the Cancer Continuum, 2018)

Pragmatic measures for implementation research: development of the Psychometric and Pragmatic Evidence Rating Scale (PAPERS) (Translational Behavioral Medicine, 2019)

Enhancing the Impact of Implementation Strategies in Healthcare: A Research Agenda (Frontiers in Public Health, 2019)

Psychometric and Pragmatic Properties of Social Risk Screening Tools: A Systematic Review (American Journal of Preventive Medicine, 2019)

Quantitative measures of health policy implementation determinants and outcomes: a systematic review (Implementation Science, 2020)

Measuring implementation outcomes: An updated systematic review of measures’ psychometric properties (Implementation Research and Practice, 2020)

Measures of organizational culture, organizational climate, and implementation climate in behavioral health: A systematic review (Implementation Research and Practice, 2021)

Measuring characteristics of individuals: An updated systematic review of instruments’ psychometric properties (Implementation Research and Practice, 2021)

EQUITY CHECK

PAUSE AND REFLECT

❯ Are the measures appropriate and relevant for all groups, especially those historically or currently marginalized? How will the measures capture the experiences and outcomes of diverse populations?

❯ What are the potential biases in the measures? Are there inherent biases in the chosen measures that could affect the results? How can these biases be identified and mitigated?

❯ How inclusive is the data collection process? Are the data collection methods culturally sensitive and accessible to all target populations? How will the process ensure the participation of underrepresented groups?

❯ What are the ethical considerations? Are there ethical concerns related to the measures that could disproportionately affect certain groups? How will informed consent and confidentiality be ensured for all participants?

❯ What are the potential unintended consequences? Could the measures inadvertently reinforce existing inequities or create new ones? How will these risks be monitored and addressed?

The IS Research Pathway

Videos from our friends

Advancing Implementation Science through Measure Development and Evaluation, Cara C. Lewis, PhD
Design and Measurement Considerations for Implementation Science Projects, William Calo, PhD

Find Examples


Browse our Library of UW community co-authored publications to see examples of research on implementation measures and measurement.

Visit the Library