Improving Implementation Research

By Barbara Goodson, William T. Grant Foundation

Over the past two decades, the education research and policy landscape has been shaped by the idea that policies and practices should be grounded in evidence. And not just any evidence, but high-quality evidence generated by “scientifically valid research.” Thanks to the development of agreed-upon standards for evaluating the impact of interventions, the methods for assessing intervention effectiveness are increasingly a matter of consensus. But in an environment where effective programs are routinely adopted and implemented in new contexts, is it time for a comparable focus on the implementation research that can support this adoption?

A Focus on Evidence of Effectiveness

Today’s definitions and standards of high-quality research and evidence are far more specific than they once were, thanks to well-articulated systems for assessing the evidence generated by studies of intervention effectiveness. For instance, the What Works Clearinghouse (WWC), established in 2002, seeks to inform decision making by rating evidence from individual studies and from bodies of research on particular interventions or outcomes. Similar evidence rating systems now exist in other policy fields: the U.S. Department of Labor’s Clearinghouse for Labor Evaluation and Research (CLEAR) and the GRADE rating system in health care are two examples of systems that assess the strength of evidence gathered from studies of program or policy effectiveness.

This focus on effectiveness continues to grow as more evidence is produced. The Obama Administration, for example, has embedded a strong evaluation component in many new funding initiatives, such as the Investing in Innovation Fund (i3), which uses a tiered evidence framework to award greater funding to programs with scientifically valid evidence of impact. If one goal of initiatives like i3 is to generate more rigorous evaluation of program effectiveness, that goal is certainly being met. Over 90 percent of the 150 programs funded by i3 between 2010 and 2013, an unprecedented share, are conducting impact evaluations with the potential to generate valid and credible evidence on effectiveness. This includes all of the larger grants and even a majority of the smaller grants, which were not required to conduct rigorous studies.

What About Evidence on Implementation?

More recently, though, evidence on the implementation of effective practices has received increased attention.

Decisions about which practices to adopt depend on a variety of considerations, including questions of implementation. Stakeholders need evidence about the feasibility of implementing a new practice in similar contexts or with particular populations, as well as data on the costs associated with adoption. These potential adopters also need explicit guidance on how to implement the practice and how to provide the necessary supports, such as training, coaching, or infrastructure.

If impact evidence is to lead to the dissemination and adoption of improved educational practices, then we need to think more systematically about how to support that adoption. In particular, we need to think about the types and quality of evidence on the implementation of practices.

The question now is whether standards for what constitutes high-quality implementation evidence can be developed to parallel the standards commonly accepted for evidence of effects.

The Investing in Innovation Fund is playing a key role in promoting a focus on implementation by expecting grantees to conduct high-quality implementation studies intended to provide data that support replication and broader dissemination of effective practices. But whereas i3 uses WWC standards as the metric for judging the strength of the impact evidence its grantees generate, it has no comparable set of standards for gauging the strength of implementation evidence.

The absence of commonly accepted standards for implementation studies led i3 to ask its national evaluator, Abt Associates Inc. and its partners, to propose a set of evidence standards for measuring implementation in programs funded through the initiative. The proposed standards rest on three elements: a comprehensive logic model; a system for measuring fidelity of implementation of the key inputs identified in that model; and annual assessment and reporting of the extent to which fidelity of implementation has been achieved.

In light of the i3 experience, and the growing expectation among funders that high-quality implementation evaluations be conducted alongside rigorous impact studies, the field needs to take the next step toward systematic standards for research evidence on implementation. Even if it is too early for standards like those delineated by the WWC, it may be time to develop guidance for researchers that would, at a minimum, lay out the types of information on implementation to collect in different evaluation contexts (formative, efficacy, effectiveness, or scale-up studies, for example), the measures of implementation to use, and ways to analyze and report implementation data. As standards for implementation studies evolve, such guidance can signal to researchers what best practice in measuring implementation looks like.

Ultimately, if our search for effective reforms for educational practice is successful, having strong and reliable evidence on implementation will be crucial for enacting real reform in our schools.


This is a repost from the William T. Grant Foundation.