Evaluation 2009



Session Title: Maintaining Evaluation's Integrity in Trying Times: Three Strategies
Multipaper Session 883 to be held in Panzacola Section H3 on Saturday, Nov 14, 3:30 PM to 5:00 PM
Sponsored by the Evaluation Use TIG, the Pre-K - 12 Educational Evaluation TIG, and the Research on Evaluation TIG
Chair(s):
Susan Tucker,  Evaluation and Development Associates LLC, sutucker1@mac.com
Discussant(s):
Jennifer Iriti,  University of Pittsburgh, iriti@pitt.edu
Maintaining Integrity: Evaluation as a Tool for Assisting Programs Undergoing Major and Unexpected Budget Reductions
Presenter(s):
Gary Walby, Ounce of Prevention Fund of Florida, gwalby@ounce.org
Emilio Vento, Health Connect in the Early Years, evento@hscmd.org
Abstract: This paper presents the evaluation of Health Connect in the Early Years, a maternal and child health home visiting program in Miami-Dade County, Florida, and the program's adaptation to a mandatory downsizing caused by a major drop in revenue. Project management, staff, evaluators, and the funding body worked together to streamline the program and manage the direct and devastating effect of a 50% reduction in program funding, a result of the global economic downturn, while maintaining program and evaluation integrity. Focus groups, document analysis, ongoing engagement with management and staff, and analysis of data captured before and after the program reduction were used to help the program make decisions about implementation and to provide information on the program's impact on participants. This presentation tells the story of the evaluation and program responses and offers lessons learned for evaluators in similar circumstances.
Using the Bloom Adjustment to Distinguish Intention-to-Treat Estimates and Impact-on-the-Treated Estimates: The Striving Readers Evaluation
Presenter(s):
Matthew Carr, Westat, matthewcarr@westat.com
Jennifer Hamilton, Westat, jenniferhamilton@westat.com
Allison Meisch, Westat, allisonmeisch@westat.com
Abstract: In a randomized controlled trial evaluation, researchers are occasionally restricted to performing intention-to-treat (ITT) studies. Difficulties in understanding treatment impacts arise when participants assigned to the treatment group do not actually receive the treatment. Removing these 'no-shows' from the sample can bias the composition of the treatment group because of the potential self-selection of these participants. As a result, researchers typically include them, trading potential underestimation of treatment effects for preservation of the randomized evaluation design. However, policymakers are typically more interested in impact-on-the-treated estimates, which more accurately reflect the effects of the program on those who received it. In this paper we examine the potential for the Bloom adjustment to provide researchers with both estimates, thereby mitigating the need to make the traditional trade-off between design consistency and research focus. Data from the evaluation of Striving Readers, a program designed to improve middle school students' literacy skills, are used as an example.
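The Bloom adjustment described in this abstract has a simple closed form: under the assumption that no-shows experience zero treatment effect, the impact-on-the-treated estimate equals the ITT estimate divided by the treatment group's take-up rate. A minimal sketch, using illustrative numbers rather than any data from the Striving Readers evaluation:

```python
def bloom_adjustment(itt_estimate: float, take_up_rate: float) -> float:
    """Return an impact-on-the-treated (TOT) estimate via Bloom's no-show adjustment.

    itt_estimate: mean outcome difference between the assigned-treatment and
                  control groups, with everyone analyzed as randomized.
    take_up_rate: share of the assigned-treatment group that actually
                  received the treatment (0 < take_up_rate <= 1).

    Assumes no-shows experience zero treatment effect, so the ITT estimate
    is the TOT estimate diluted by non-participation.
    """
    if not 0 < take_up_rate <= 1:
        raise ValueError("take-up rate must be in (0, 1]")
    return itt_estimate / take_up_rate

# Illustrative only: a 2.0-point ITT gain with 80% take-up implies a
# 2.5-point impact on participants who actually received the treatment.
tot = bloom_adjustment(2.0, 0.80)
print(tot)  # 2.5
```

Note that the adjustment rescales the point estimate but leaves the statistical test of the ITT effect intact, which is why it preserves the randomized design while answering the impact-on-the-treated question.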
Troubled Asset or Valued Resource? A Study of Recommendations From 53 Evaluation Reports
Presenter(s):
Kari Nelsestuen, Northwest Regional Educational Laboratory, nelsestk@nwrel.org
Elizabeth Autio, Northwest Regional Educational Laboratory, autioe@nwrel.org
Ann Davis, Northwest Regional Educational Laboratory, davisa@nwrel.org
Angela Roccograndi, Northwest Regional Educational Laboratory, roccogra@nwrel.org
Caitlin Scott, Northwest Regional Educational Laboratory, scottc@nwrel.org
Abstract: In evaluation circles, ongoing debate surrounds whether to include recommendations in evaluation reports and, if so, what information they should contain. In this study, we examine recommendations from 53 state evaluation reports of the same federal program, Reading First. The presence of recommendations varied: 62 percent of reports included them and 38 percent did not. When recommendations were present, we analyzed each recommendation on six characteristics, including whether it addressed a general problem with or without a course of action, linked to research, and offered one strategy or a set of strategies to solve the problem. For reports with recommendations, we survey project directors about the recommendations' relevance and usefulness; for reports without them, we survey project directors about their preference for receiving recommendations. Our findings contribute to the ongoing dialogue among evaluators about the role and characteristics of recommendations.

