

Session Title: Impact Evaluation and Beyond: Methodological and Cultural Considerations in Measuring What Works
Multipaper Session 738 to be held in REPUBLIC A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Tessie Catsambas,  EnCompass LLC, tcatsambas@encompassworld.com
Assessing the Relevance of International Development and Humanitarian Response Projects at Aggregate Levels: A Key Consideration in Achieving Equality of Wellbeing Outcomes and Just Resource Allocation - An Example of Summary Child Wellbeing Outcome Reporting From World Vision International
Presenter(s):
Nathan Morrow, World Vision International, nathan_morrow@wvi.org
Nancy Mock, Tulane University, mock@tulane.edu
Abstract: A key assumption of many impact evaluations conducted by international non-governmental organizations is that programs were designed to be relevant to community needs. Yet this relevance, with respect to community needs, NGO organizational policy, and organizational capacity, is rarely documented. International NGOs often collect the information essential for understanding the relevance of their programming portfolio but lack the tools or experience to analyze relevance at aggregated levels. In 2009, World Vision International undertook a large thematic review of its programming portfolio at the national, regional, and global levels. The paper describes the availability and quality of data that can be used to assess relevance and presents several novel conceptual tools used to distill indicators of relevance from participatory workshops and document review. The thematic review found mixed results and scope to improve the relevance of programming through more effective use of routinely collected data.
Taking the Long View in Evaluating International Development Assistance
Presenter(s):
R Gregory Michaels, Chemonics International, gmichaels@chemonics.com
Alphonse Bigirimana, Chemonics International, abigirimana@chemonics.com
Abstract: Program evaluations addressing the sustainability of international development assistance have not adequately informed our understanding of development effectiveness (the focus, for example, of the OECD's DAC Evaluation Network). Program evaluations are typically limited to the project's life span, missing the possibility of evaluating projects after closure. In contrast, appraising project results well after closure offers a robust standard for evaluating sustainability: what actually worked and what did not in the long run. Ex post project sustainability investigations offer an untapped mine of information. This paper reports experience from two retrospective studies examining the sustainability of USAID-funded projects' benefits several years after closure. The first study (conducted in 2008) evaluated the promotion of agricultural exports from Central America (1986-1994). The second study (2009) investigated the performance of a Moroccan wastewater treatment facility constructed in 2000. These studies illustrate the power of ex post sustainability appraisals to offer valuable insights into the durability of development.
Using Rubrics Methodology in Impact Evaluations of Complex Social Programs: The Case of the Maria Cecília Souto Vidigal Foundation's Early Childhood Development Program
Presenter(s):
Thomaz Chianca, COMEA Evaluation Ltd, thomaz.chianca@gmail.com
Marino Eduardo, Independent Consultant, eduardo.marino@yahoo.com
Abstract: Rubrics are important tools for describing evaluatively how well an evaluand is doing in terms of performance or quality on specific dimensions, components, and/or indicators. Although extremely useful for synthesizing and communicating evaluative conclusions, their adoption in evaluations of complex interventions appears to be quite limited. This paper will discuss the main methodological aspects involved in the development and use of rubrics in the context of an impact evaluation: (i) values identification, (ii) description of standards to score performance or quality, (iii) definition of data collection strategies, (iv) development of tools for data entry and analysis, (v) dealing with causal attribution strategies, and (vi) communicating findings. The external impact evaluation of an early childhood intervention, supported by the Maria Cecília Souto Vidigal Foundation in six municipalities in São Paulo, Brazil, will be discussed as the case study for this session.
Quality is More Than Rigor: A Political and a Scientific Perspective for Impact Evaluation in Development
Presenter(s):
Rahel Kahlert, University of Texas, Austin, kahlert@mail.utexas.edu
Abstract: The paper reflects on the current debate about impact evaluation in international development. One controversial issue remains whether the randomized controlled trial (RCT) represents the "gold standard" approach. The author analyzes the randomization debate in international development since 2006, employing both a political and a scientific perspective (cf. Weiss 1972; Vedung 1998) to explain the rationales behind the promotion strategies of the respective sides in the debate. Finally, the paper analyzes the transferability of randomized evaluation findings, an issue raised by both "randomistas" and skeptics of randomized experiments. The focus is on validity, balancing internal and external validity to ensure both rigor and relevance. The paper proposes ways in which this debate could be mediated.
