

Session Title: Taking It to the Next Level: Challenges, Strategies and Lessons Learned in Linking Implementation and Outcomes
Panel Session 307 to be held in Sebastian Section L4 on Thursday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Susan Berkowitz, Westat, susanberkowitz@westat.com
Discussant(s):
Larry L Orr, Independent Consultant, larry.orr@comcast.net
Abstract: Evaluators often face the problem of how to link evaluation of a program's or intervention's implementation effectively with evaluation of its outcomes. Maintaining a strict separation between process evaluation, on the one hand, and outcome evaluation, on the other, seems unproductive; yet there is no clearly marked path for addressing the middle ground these linkages occupy. This panel will focus on key issues that arise in meeting the challenges of linking implementation and outcomes for a range of education, mental health, and substance abuse prevention interventions. Panelists will discuss strategies for testing a program's underlying theory of change, ways of measuring implementation and treatment fidelity, and analytic techniques used to make the implementation-to-outcome linkage. They will assess the relative success and utility of these approaches and offer lessons learned. The discussant will draw their arguments together and suggest promising paths forward.
Linking Implementation and Outcomes for Three Educational Interventions: Challenges, Strategies and Lessons for the Future
Joy Frechtling, Westat, joyfrechtling@westat.com
Joy Frechtling, an evaluator who has conducted a wide range of studies of intervention programs and their outcomes, will discuss several evaluations in which the connections between implementation and outcomes were examined. Focusing on educational interventions, she will describe three studies, one examining an arts education reform and two examining reading programs, in which various strategies and analytic techniques were used to test the theory of change underlying each program. Relationships based on both an overall implementation score and multi-part implementation scores will be discussed. She will assess how successful these efforts were in tracing connections, describe the challenges encountered, and offer emerging ideas about how to assess these linkages more effectively.
The Hype and Futility of Measuring Implementation Fidelity
David Judkins, Westat, davidjudkins@westat.com
In qualitative evaluation there is tremendous enthusiasm for "looking inside the black box." A recent popular book by a well-respected evaluator, "Learning More from Social Experiments," offers encouragement that statistically rigorous methods can be developed to understand the role of implementation fidelity and other mediators in the results of randomized trials. David Judkins, a statistician, recently led the analysis of a group-randomized trial of alternative preschool curricula in Even Start projects. He will argue that attempts to relate fidelity to intervention outcomes are, for the most part, well-intentioned but ill-considered and expensive endeavors doomed to failure. Worse, they tend to warp experimental designs, lowering power for primary experimental endpoints in order to contain the cost of fidelity measurement. He will review technical challenges in the statistical inference of mediated effects and provide recent examples to illustrate his points.
Examples of Success in Using Implementation/Fidelity Measures to Understand Cross-site Variation in Multisite Evaluations
Joseph Sonnefeld, Westat, josephsonnefeld@westat.com
Robert Orwin, Westat, robertorwin@westat.com
The use of treatment fidelity and implementation measures has been repeatedly recommended in evaluating multisite interventions, which are often characterized by substantial cross-site variation both in the program models being implemented and in the success of implementation. Responding to arguments that individually randomized designs implemented at multiple sites may be usefully analyzed without reference to variation in intervention fidelity, Joe Sonnefeld and Rob Orwin, experienced evaluators of national substance abuse and mental health initiatives, examine ways in which well-measured fidelity to an evidence-based model, or cross-site differences in program "dose," contributed to understanding variations in effectiveness, led to meaningful recommendations, and prevented false negative conclusions about overall program effectiveness. They draw on examples from two Center for Mental Health Services (CMHS) initiatives, Access to Community Care and Effective Services and Supports (ACCESS) and Consumer Operated Programs, and from an ongoing evaluation of the Center for Substance Abuse Prevention (CSAP)'s Strategic Prevention Framework State Incentive Grants.
