Evaluation 2008



Session Title: The Impact of Experimental Designs and Alternative "Evidence" Sources
Multipaper Session 649 to be held in Centennial Section F on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Mehmet Ozturk,  Arizona State University,  ozturk@asu.edu
The Impact of Randomized Experiments on the Education Programs They Evaluate
Presenter(s):
Anne Chamberlain,  Johns Hopkins University,  amchambe@yahoo.com
Abstract: Recent efforts to strengthen the quality of products and processes in the education arena have resulted in legislation mandating randomized methodology where federal funds are involved in either the use or evaluation of education initiatives. While the desire to improve education and education evaluation is both intuitive and laudable, it is not without potential drawbacks. Alongside the benefits of randomized experiments, drawbacks involving cost and ethics are well documented in the literature. However, there is another potential issue that has gone virtually unrecognized: the impact of conducting a randomized experiment on the implementation of the education programs (the evaluands) themselves. The purpose of this presentation is to share early findings from a study of how the implementation of education programs is affected by participation in randomized evaluation. Three questions will be addressed: Do education programs change during randomized evaluation? How? Why should this matter to policymakers?
Using Program Evaluation to Document “Evidence” for Evidence-Based Strategies
Presenter(s):
Wendi Siebold,  Evaluation, Management and Training Associates Inc,  wendi@emt.org
Fred Springer,  Evaluation, Management and Training Associates Inc,  fred@emt.org
Abstract: National registries of evidence-based prevention programs have traditionally relied on experimental-design studies as the “gold standard” for establishing evidence of effectiveness. This approach has contributed to a research-to-practice gap for two major reasons. First, programs have been accepted as the primary unit of analysis. Second, the experimental design is based on the logic of creating “most similar systems” to isolate the effects of an experimental variable (e.g., a program). This paper draws on cross-cultural research traditions to demonstrate how “most different system” designs such as multi-site evaluations (MSEs), meta-analysis, and more recent studies of practice-based evidence and systematic review are more appropriate for developing standards of evidence-based practice that are robust across realistic implementation diversity. Findings from the National Cross-site Evaluation of High Risk Youth Programs and select studies of violence prevention will demonstrate the application of such evaluation methodologies for establishing evidence-based strategies in violence prevention.
