2011


Session Title: Evaluation Systems for Complex International Programs: Fostering Learning and Innovation While Providing Accountability
Multipaper Session 769 to be held in El Capitan B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Douglas Horton, Independent Consultant, d.horton@mac.com
Discussant(s):
Jane Maland Cady, McKnight Foundation, jmalandcady@mcknight.org
Abstract: International research and development programs are increasingly complex, with multiple "northern" and "southern" partners collaborating in highly dynamic settings to achieve such broad goals as poverty reduction, food security, and environmental sustainability. Those who fund and manage such programs look to evaluation to provide evidence of results and impact as well as lessons and insights for improving program design and implementation. The presentations in this session show how three international programs are developing evaluation systems to address these challenges: the Collaborative Crop Research Program of the McKnight Foundation, the Andean Change Alliance, and the Initiative for Conservation in the Andean Amazon.
Adaptive Action: Simple Evaluation for a Complex Program
Glenda Eoyang, Human Systems Dynamics Institute, geoyang@hsdinstitute.org
The McKnight Foundation's Collaborative Crop Research Program supports place-based research and development to improve nutrition and livelihoods for people in highly vulnerable locales. The program is complex: 65 diverse projects; four regions; a community of practice in each region; partnerships between Northern and Southern scientists; a focus on change in agricultural and institutional systems; a commitment to capacity development; a partnership with the Bill and Melinda Gates Foundation; three languages; and multiple social and biophysical science disciplines. While the evaluation challenge is complex, the design had to be simple in concept and implementation. The evaluation design involves an iterative, three-step process that supports shared learning and aligned action; integrated monitoring, evaluation, and planning; and local, regional, and program-wide capacity development. This paper summarizes the evaluation design and outlines implementation challenges and approaches.
Using Participatory Impact Pathway Analysis to Evaluate Participatory Methods for Rural Innovation and Social Inclusion
Emma Rotondo, PREVAL, rotondoemma@yahoo.com.ar
Rodrigo Paz, Institute for Social and Economic Studies, rodrigopaz@supernet.com
Graham Thiele, International Potato Center, g.thiele@cgiar.org
The Andean Change Alliance is a collaborative regional program operating in Bolivia, Colombia, Ecuador, and Peru that seeks to improve the capacity of national agricultural research systems to promote pro-poor innovation and inclusion in development markets and services, to promote collective learning and knowledge sharing through participatory methods, and to influence policy formulation with evidence accumulated in an "Arguments Bank." This paper discusses the main challenges that evaluators face in designing and implementing an evaluation system in a complex program like this one. It describes the main features of the evaluation methodology developed, which is based on "participatory impact pathway analysis." It then assesses the strengths and weaknesses of the methodology and formulates lessons for improving the use of participatory impact pathway analysis in different kinds of development programs.
How a "Light" M&E System Worked for a Complex Environmental Program: The Initiative for Conservation in the Andean Amazon Experience
Brenda Bucheli, Initiative for Conservation in the Andean Amazon, brenda_bucheli@yahoo.es
The Initiative for Conservation in the Andean Amazon (ICAA) aims to improve stewardship of the Amazon Basin's globally and nationally important biological diversity and environmental services. This five-year program is supported by US $35 million from USAID and $10 million in counterpart funding. It is carried out by 21 implementing partners organized under four field-based consortia and an ICAA Support Unit (ISU). The work of the consortia is guided by a strategic framework and six shared indicators related to capacity building, policy dialogue and implementation, and leveraging of new resources. A mid-term assessment called for more evidence on the impacts of ICAA. This paper discusses the challenges of providing such evidence and summarizes how the shared-indicator evaluation system was complemented to address these challenges and meet ICAA's needs for both learning and accountability.
