Pragmatic measures for implementation research: development of the Psychometric and Pragmatic Evidence Rating Scale (PAPERS)

Authors:

Cameo F. Stanick, Heather M. Halko, Elspeth A. Nolen, Byron J. Powell, Caitlin N. Dorsey, Kayne D. Mettert, Bryan J. Weiner, Melanie Barwick, Luke Wolfenden, Laura J. Damschroder, & Cara C. Lewis

University of Washington affiliated authors are displayed in bold.

Subscription Required for Full Text Access

Published: November 2019

Read the full text in the subscription access journal Translational Behavioral Medicine

✪ Operationalizing the ‘pragmatic’ measures construct using a stakeholder feedback and a multi-method approach

Authors:

Cameo F. Stanick, Heather M. Halko, Caitlin N. Dorsey, Bryan J. Weiner, Byron J. Powell, Lawrence A. Palinkas & Cara C. Lewis

University of Washington affiliated authors are displayed in bold.

Published: November 2018

Read the full text in the open access journal BMC Health Services Research

Abstract:

Context

Implementation science measures are rarely used by stakeholders to inform and enhance clinical program change. Little is known about what makes implementation measures pragmatic (i.e., practical) for use in community settings; thus, the present study’s objective was to generate a clinical stakeholder-driven operationalization of a pragmatic measures construct.

Evidence acquisition

The pragmatic measures construct was defined using: 1) a systematic literature review to identify dimensions of the construct using PsycINFO and PubMed databases, and 2) interviews with an international stakeholder panel (N = 7) who were asked about their perspectives of pragmatic measures.

Evidence synthesis

Combined results from the systematic literature review and stakeholder interviews revealed a final list of 47 short statements (e.g., feasible, low cost, brief) describing pragmatic measures, which will allow for the development of a rigorous, stakeholder-driven conceptualization of the pragmatic measures construct.

Conclusions

Results revealed significant overlap between terms related to the pragmatic construct in the existing literature and stakeholder interviews. However, a number of terms were unique to each methodology. This underscores the importance of understanding stakeholder perspectives of criteria measuring the pragmatic construct. These results will be used to inform future phases of the project where stakeholders will determine the relative importance and clarity of each dimension of the pragmatic construct, as well as their priorities for the pragmatic dimensions. Taken together, these results will be incorporated into a pragmatic rating system for existing implementation science measures to support implementation science and practice.

**This abstract is posted with permission under the Creative Commons Attribution 4.0 International License**

✪ Toward criteria for pragmatic measurement in implementation research and practice: a stakeholder-driven approach using concept mapping

Authors:

Byron J. Powell, Cameo F. Stanick, Heather M. Halko, Caitlin N. Dorsey, Bryan J. Weiner, Melanie A. Barwick, Laura J. Damschroder, Michel Wensing, Luke Wolfenden, and Cara C. Lewis

University of Washington affiliated authors are displayed in bold.

Published: October 2017

Read the full text in the open access journal Implementation Science

Abstract:

Background

Advancing implementation research and practice requires valid and reliable measures of implementation determinants, mechanisms, processes, strategies, and outcomes. However, researchers and implementation stakeholders are unlikely to use measures if they are not also pragmatic. The purpose of this study was to establish a stakeholder-driven conceptualization of the domains that comprise the pragmatic measure construct. It built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures, and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria’s clarity and importance.

Methods

Twenty-four stakeholders with expertise in implementation practice completed a concept mapping activity wherein they organized the initial list of 47 criteria into conceptually distinct categories and rated their clarity and importance. Multidimensional scaling, hierarchical cluster analysis, and descriptive statistics were used to analyze the data.

Findings

The 47 criteria were meaningfully grouped into four distinct categories: (1) acceptable, (2) compatible, (3) easy, and (4) useful. Average ratings of clarity and importance at the category and individual criteria level will be presented.

Conclusions

This study advances the field of implementation science and practice by providing clear and conceptually distinct domains of the pragmatic measure construct. Next steps will include a Delphi process to develop consensus on the most important criteria and the development of quantifiable pragmatic rating criteria that can be used to assess measures.

**This abstract is posted with permission under the Creative Commons Attribution 4.0 International License**