
Session Title: Evaluating Education in Mexico, New Zealand, South Africa and the United States
Multipaper Session 842 to be held in Sebastian Section I1 on Saturday, Nov 14, 1:40 PM to 3:10 PM
Sponsored by the Government Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Maria Whitsett,  Moak, Casey and Associates, mwhitsett@moakcasey.com
Implementation of an Evaluation Model to Assess the Synergistic Impacts of State Initiatives to Improve Teaching and Learning
Presenter(s):
Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
Bill Thornton, University of Nevada, Reno, thorbill@unr.edu
George Hill, University of Nevada, Reno, gchill@unr.edu
Abstract: To improve teaching and learning, states have undertaken two interrelated activities: (1) implementation of performance-based accountability systems and (2) allocation of funding for targeted school improvement. Annually, state departments of education provide analyses of performance data, and special funding activities are routinely evaluated. Yet these activities are seldom assessed to understand their synergistic impact on teaching and learning. Weinbaum (2005) developed an evaluation framework to investigate how state policies and programs have affected the key components of schools that influence student achievement. This model has been adapted and implemented in Nevada using a multiple case study methodology. Five cases have been undertaken, each consisting of a district office, a high school, up to two feeder middle schools, and up to three feeder elementary schools. The model uses both qualitative and quantitative data. This presentation will describe the challenges and opportunities of implementation in urban, bedroom community, and rural settings.
Transform the Context: How a Negative Evaluation Context Has Been Transformed in South Africa
Presenter(s):
Jennifer Bisgard, Khulisa Management Services, jbisgard@khulisa.com
Gregg Ravenscroft, Khulisa Management Services, gravenscroft@khulisa.com
Abstract: Post-apartheid, democratic South Africa has blended new and old bureaucrats, many of whom are resistant to evaluation and to implementing recommendations. Since 2003, we have been evaluating data quality in the public education sector for the South African Government. The process of conducting the evaluation cycles and the sponsorship of provincial and national government officials have positively changed the political and operational context. As resistance decreased, the evaluation scope expanded. Senior officials have now emerged as champions of the data quality evaluation process and are using the results to improve performance, acknowledge perverse incentives, and inform policy making. Elements that have contributed to this outcome include the evaluation champions' consistent commitment over the last six years, building evaluator-government trust, using brief and easily understandable evaluation report formats, and implementing well-documented, rigorous methodologies for collecting and analyzing data. Improving data quality and trustworthiness is a key precursor to data-driven decision making.
Evaluation in Mexican Educational System: From Political to Technical Evaluation?
Presenter(s):
Teresa Bracho, Facultad Latinoamericana de Ciencias Sociales, teresa.bracho@flacso.edu.mx
Jimena Hernandez, Facultad Latinoamericana de Ciencias Sociales, jimena.hernandezf@gmail.com
Abstract: In Mexico, evaluation in educational policy is a relatively new issue. Twenty years ago, government programs were designed without any systematic analysis of the problems they tried to solve, nor did they include any explicit statement of their expected results. The purpose of this paper is to analyze how the governmental context of program evaluation affects the way educational programs are designed and evaluated. Although the Mexican government has implemented changes in budgeting mechanisms, resulting in greater emphasis on evaluation as an input for policy design and implementation, the new evaluation process still faces many challenges. In this paper we emphasize the problems involved in using the recent census of students' academic performance (ENLACE) as the main input for program evaluation, including its design and technical difficulties; we also point out the risks of its use mainly for political purposes.