2011


Session Title: Assessing the Use and Influence of Evaluations: Evidence of Impacts and Predictors of Success
Multipaper Session 465 to be held in Avila A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Lyn Shulha,  Queen's University, lyn.shulha@queensu.ca
Discussant(s):
Lyn Shulha,  Queen's University, lyn.shulha@queensu.ca
Evaluation of NCLB-mandated Supplemental Educational Services in the Chicago Public Schools: What Predicts a Successful Program?
Presenter(s):
Curtis Jones, University of Wisconsin, Madison, cjjones5@wisc.edu
Abstract: As part of No Child Left Behind (NCLB), low-income students attending failing schools may receive free math and reading tutoring known as Supplemental Educational Services (SES). In this paper, I present my evaluations of SES and how the Chicago Public Schools (CPS) both used and misused their results. My evaluations employed several methods. I used value-added multi-level modeling to establish impact. I analyzed student attendance and registration records to document implementation. Surveys of school staff were used to determine which providers worked most respectfully within schools. Finally, I surveyed providers to explore practices and policies that predict effectiveness and school relationships. I discuss how CPS used these results in constructive ways to develop an accountability system. I then discuss how CPS attempted to misuse the results for political gain. Finally, I discuss how, by engaging in a dialogue with multiple stakeholders, I minimized the misuse of my work.
Assessing the Use and Influence of Impact Evaluations: Evidence from Impact Evaluations of the World Bank Group
Presenter(s):
Javier Baez, World Bank, jbaez@worldbank.org
Izlem Yenice, World Bank Group, iyenice@ifc.org
Abstract: There has been a rapid expansion in recent years in the production of impact evaluations (IE) as a method to assess the impacts of development projects, largely driven by an increasing demand for credible evidence of development results. Much of the development community perceives IE as a tool that provides rigorous and objective estimates of the causal effects of specific interventions. Largely motivated by this, and as part of its results and knowledge agenda, the World Bank Group (WBG) has made important efforts to expand and deepen its IE work. However, little is known about whether IEs have actually influenced resource allocation, project design and implementation, future evaluation, strategy, and policy making. This evaluation examines the experience of around 300 IEs supported by the WBG to assess their contribution to improving development practices.
Evaluating the Post-Grant Impacts of Evaluation Capacity Building in a K-20 Partnership
Presenter(s):
Edward McLain, University of Alaska, Anchorage, afeam1@uaa.alaska.edu
Susan Tucker, Evaluation & Development Associates LLC, sutucker1@mac.com
Patricia Chesbro, University of Alaska, Anchorage, afprc@uaa.alaska.edu
Abstract: As the US begins new Teacher Quality grants and Math Science Partnerships with increased accountability requirements, reflecting on sustainable evaluation practices from past grants becomes ever more timely. Using a grounded case study of the Alaska Education Innovation Network (AEIN), funded by a USDE Teacher Quality Enhancement grant, this paper focuses on how evaluation capacity building (ECB) efforts in the K-20 network shaped the final two years of the grant and what evaluation use was sustained by this K-20 partnership after six years of federal funding ended. The piloting of three protocols resulted in the creation of rubrics for monitoring ECB impacts in post-grant decision-making. Where network partners used a collaborative process of cyclical logic modeling over a three-year period, AEIN evaluators noted four shifts in participant and stakeholder leaders regarding: a) evaluation purpose, b) evaluation questions, c) capacity building strategies, and d) evaluation use.
