How has Twenty Years of Educational Evaluation Contributed to Lifting the Quality of Government Evaluation in New Zealand?

Presenter(s):

Carol Mutch, Education Review Office, carol.mutch@ero.govt.nz

Kathleen Atkins, Education Review Office, kathleen.atkins@ero.govt.nz

Abstract:
The Education Review Office (ERO) in New Zealand was established just over 20 years ago, in October 1989, as one of the outcomes of the education reforms of the 1980s. Although similar reforms were considered in many other countries at the time, New Zealand is seen as having undertaken the most comprehensive set of reforms. Over the last 20 years, successive governments have devolved educational governance to individual school boards, decentralised curriculum decision-making to schools themselves, and placed greater emphasis on school self-evaluation. One function the government did retain was responsibility for the external evaluation of the quality of education provided by each school. This paper traces the history of the Education Review Office, explores how the notion of quality evaluation evolved over this period, and examines the influence that ERO was to have on wider developments in government evaluation.


Recommendations That Catch the Eye, Stimulate the Grey Cells and Generate Change: What Makes a Good Evaluation Recommendation? Lessons from the United Kingdom’s (UK) Department for International Development (DFID)

Presenter(s):

Kerstin Hinds, Department for International Development, k-hinds@dfid.gov.uk

Abstract:
The UK Government’s agency with a mandate for reducing global poverty, and responsible for spending a budget of almost £8 billion per year on international development, DFID has been taking strenuous steps to improve the quality of evaluations and to ensure that evaluation findings are taken forward within the organisation. In implementing a system for tracking recommendations, it has become clear that some recommendations lend themselves to follow-up better than others. A review of recommendations, and of their traction, was undertaken to identify the key features of ‘good’ recommendations and hence improve our guidance and practice. This paper considers issues of recommendation targeting, length, complexity, wording and number. It also discusses whether all key evaluation lessons can be framed as actionable recommendations, and how else key findings from evaluations can be taken forward. Issues of institutional culture and context, which are also significant, are discussed.


Implementing Government of Canada Evaluation Policy Requirements: Using Risk to Determine Evaluation Approach and Level of Effort

Presenter(s):

Courtney Amo, National Research Council Canada, courtney.amo@nrc-cnrc.gc.ca

Shannon Townsend, National Research Council Canada, shannon.townsend@nrc-cnrc.gc.ca

Abstract:
The 2009 Government of Canada Evaluation Policy is transitioning the evaluation function away from a risk-based approach to planning which evaluations to do, towards a risk-based approach to determining how each evaluation will be done. Within this context, evaluations are expected to assess organization-specific risk criteria in order to calibrate the approach and level of effort to be put towards each study. This calibration is aimed at ensuring that departments and agencies are able to plan for and meet the requirement for full evaluation coverage of all programs over a five-year cycle. This paper presents the five-step approach developed by the National Research Council of Canada (NRC) to apply risk in determining a study’s approach and level of effort, while ensuring that minimum standards for evaluation are met in the context of “low-risk” programs. The use of the approach in the context of three evaluation studies is also presented.


Government, Implementation and Evaluation: The Viability and Evaluability of National Policy Programs

Presenter(s):

Anna Petersén, Orebro University, anna.petersen@oru.se

Lars Oscarsson, Orebro University, anna.petersen@oru.se

Christian Kullberg, Orebro University, christian.kullberg@oru.se

Ove Karlsson Vestman, Malardalen University, ove.k.vestman@dh.se

Abstract:
In many countries, governments use state subsidies to encourage local authorities to improve, for example, social welfare services. Evaluations of these initiatives, however, often show small effects relative to the politicians' goals. In this paper we present results from a Swedish study of eight larger national programs aimed at promoting local authorities' social services. The aim is to analyze and discuss whether success or failure in achieving the programs' national goals can be related to unrealistic political ambitions, to the implementation process, to limitations or deficiencies in the evaluations, or to all three. The first two issues are analysed within a political science model; for the third, the focus is on data availability and evaluation designs. In light of the results, program-theoretical, ethical and qualitative issues in evaluation are discussed.


Whose Fault is It? A Federal Government's Effort to Improve Evaluation Quality

Presenter(s):

Laura Tagle, Italy's Ministry for Economic Development, laura.tagle@tesoro.it

Massimiliano Pacifico, Evaluation Unit of Region Lazio, massimiliano.pacifico@gmail.com

Abstract:
In a recent blog post(1), Davidson wonders about clients' responsibility for bad evaluation and about her own role as an evaluator. We take up the same issue from a government's standpoint. The quality of evaluations critically depends on clients' engagement and investment in evaluation. From this starting point, we discuss which choices influence the quality of evaluations: evaluation policy, what is evaluated, which evaluation questions are asked, and, above all, the style and mode of evaluation management. We analyze ways to induce government engagement in evaluation. Available options include formal and on-the-job training, institution building, and the setting up of incentive systems. The paper is based on the experience of the Italian National Evaluation System, a coalition of public Evaluation Units which collectively provides guidance and services to the public authorities responsible for planning and implementing regional development policies, and for evaluating them.
(1) http://genuineevaluation.com/whos-responsible-for-un-genuine-evaluation/