Session Title: The Role of Metaevaluation in Promoting Evaluation Quality: National and International Cases

Panel Session 282 to be held in Lone Star A on Thursday, Nov 11, 1:40 PM to 3:10 PM

Sponsored by the Presidential Strand

Chair(s):
Leslie Cooksy, University of Delaware, ljcooksy@udel.edu

Discussant(s):
Donald Yarbrough, University of Iowa, d-yarbrough@uiowa.edu

Abstract:
Evaluation quality is a primary concern of the evaluation profession. However, different organizations and professionals may conceptualize, operationalize, practice, and use “evaluation quality” differently. This panel focuses on metaevaluations and their role in promoting quality in national and international contexts. The first two presentations will emphasize experiences at the United Nations Children's Fund (UNICEF) and CARE International from the perspectives of those who are or have been managing such metaevaluations. The third presentation will reflect on the experience of conducting guided, external, independent appraisals for the International Labour Organization and will discuss challenges and opportunities in assessing evaluation quality over multiple years. The fourth presentation discusses pitfalls associated with applying the Joint Committee’s Program Evaluation Standards to written reports and identifies ways to improve consistency in metaevaluation. Together, these presentations explore the theoretical and practical dimensions of evaluation quality in national and international metaevaluation contexts.

The Use of the Evaluation Quality Assurance System in Meta-evaluation at the United Nations Children's Fund (UNICEF) (The opinions expressed are the personal views of the presenter and do not necessarily reflect the policies or views of UNICEF)

Marco Segone, United Nations Children's Fund, msegone@unicef.org

Based on the United Nations Evaluation Group Evaluation Standards, UNICEF adopted a two-tier approach to improving the quality of evaluation: a formative, regional-level Quality Assurance system complemented by a summative, global-level Quality Assurance system. The Regional Office for Eastern Europe and Central Asia set up a Regional Evaluation Quality Assurance System to assist UNICEF country offices in meeting quality standards by reviewing draft evaluation Terms of Reference and reports and giving real-time feedback so that country offices can improve the final versions. The regional-level system is complemented by a summative approach that monitors the impact of these efforts and strengthens UNICEF’s evaluation function globally: an independent institution rates the final evaluation reports commissioned by country offices, regional offices, and headquarters divisions worldwide. Reports that receive satisfactory ratings are made available in the UNICEF Global Evaluation Database.

Can Metaevaluations Be Helpful to International NGOs? A Case Study From CARE International

Jim Rugh, Independent Consultant, jimrugh@mindspring.com

During the 12 years Jim Rugh led the M&E unit for CARE, he developed a system of biennial meta-evaluations. These were called MEGA evaluations, mainly to reflect that they were very large meta-evaluations of as many evaluation reports as had been submitted to the Web-based evaluation library from projects around the world during the preceding two years. The MEGA acronym also stood for Meta-Evaluation of Goal Achievement, acknowledging that senior management and the board expected these studies to help answer the question of what impact this very large INGO was having globally. The MEGA evaluations thus served as a synthesis of what was being learned from the evaluations. They were also meta-evaluations in the classical sense: they assessed the methodologies used by the evaluators and judged how well the evaluations addressed the standards articulated in CARE’s Evaluation Policy.

The Role of Metaevaluation in Promoting Evaluation Quality at the International Labour Organization

Daniela Schroeter, Western Michigan University, daniela.schroeter@wmich.edu
Anne Cullen, Western Michigan University, anne.cullen@wmich.edu
Kelly Robertson, Western Michigan University, kelly.robertson@wmich.edu
Craig Russon, International Labour Organization Evaluation Unit, russon@ilo.org

The International Labour Organization (ILO) maintains a large portfolio of technical cooperation projects and, given the size of its investment, is interested in learning about the quality of its projects for purposes of improvement, accountability, and decisions about the allocation of funding. To ensure the quality and credibility of its evaluations, ILO has mandated annual appraisals of all independent evaluation reports since 2006. Accordingly, ILO’s evaluation unit (EVAL) has contracted independent, external appraisals of a sample of technical cooperation project evaluation reports each year since then. ILO EVAL supports these efforts by integrating and harmonizing existing evaluation policies and practices and by encouraging the development of an evaluation culture throughout the organization. This presentation focuses on the ILO metaevaluations conducted in 2007 and 2008, with an emphasis on the methodologies used and on the potential and challenges of these methodologies for metaevaluation at ILO.

The Use of the Program Evaluation Standards in Metaevaluation: Potential and Pitfalls

Lori Wingate, Western Michigan University, lori.wingate@wmich.edu

The Program Evaluation Standards have been widely accepted as the prevailing criteria for assessing evaluation quality in North America. They were designed to be applicable to a broad array of evaluation contexts. Their generality makes them adaptable to different settings and uses but also leaves them open to substantial interpretation by users. Although the Standards were not put forth as a rating tool, they are commonly used in that capacity for metaevaluation purposes. Problems with consistency in the application of the Standards are exacerbated when information about the evaluation(s) being assessed is limited to what is documented in evaluation reports, since many standards refer to aspects of evaluations that are not commonly detailed in writing. Based largely on a study that investigated interrater reliability in metaevaluation, this presentation describes the pitfalls associated with applying the Standards to written reports for metaevaluation purposes and identifies ways to improve consistency in metaevaluation.