

Session Title: Organizational Learning and Approaches in Calling for, Conducting, and Using Evaluations
Multipaper Session 861 to be held in Conference Room 13 on Saturday, Nov 5, 9:50 AM to 11:20 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Jim Rugh,  RealWorld Evaluation, jimrugh@mindspring.com
Strategies for Improving International Development Evaluation Terms of Reference (TORs) or Requests for Proposals (RFPs)
Presenter(s):
Anne Cullen, Western Michigan University, anne.cullen@wmich.edu
Daniela Schroeter, Western Michigan University, daniela.schroeter@wmich.edu
Kelly Robertson, Western Michigan University, kelly.robertson@wmich.edu
Michele Tarsilla, Western Michigan University, michele.tarsilla@wmich.edu
Pedro Mateu, Western Michigan University, pedro.f.mateu@wmich.edu
Abstract: A 2010 study of terms of reference (TORs) and requests for proposals (RFPs) issued by international development organizations found that rigid TORs and RFPs limit evaluators' ability to determine (i) the best approach and methodology for a given evaluation problem and (ii) the most feasible procedures for implementing the evaluation under given (and often unknown) timelines and budgets. At the other extreme, TORs are often vague and provide no budgetary guidelines, leaving evaluators guessing at what evaluation commissioners require. This paper presents real-world strategies that international development commissioners can use to improve the quality of their TORs and RFPs by avoiding common mistakes such as short turnaround times for proposals, rushed start dates, short evaluation timelines, strict familiarity/experience requirements, and rigid guidelines for conducting the evaluation. Session participants will be encouraged to share their suggestions for successful evaluation TORs and RFPs.
Mission Metrics: One Agency's Effort to Capture Mission Level Results
Presenter(s):
Barbara Willett, Mercy Corps, bwillett@mercycorps.org
Gretchen Shanks, Mercy Corps, gshanks@mercycorps.org
Abstract: For many years, Mercy Corps has struggled with a lack of information that speaks to agency-level performance rather than to a collection of independent programs. Efforts to improve M&E helped, but something was still missing that would elevate information to a higher level and provide meaning as well as utility to the agency. Mission Metrics is one agency's effort to answer the question that keeps us up at night: How do we know how we are doing? The system aligns a tremendously diverse set of programs with the agency's Mission through a specially designed framework of themes and broad indicators. The framework was developed collaboratively, based on the ideas and values implied by Mercy Corps' Mission, and given meaningful and measurable form by Mercy Corps' people. This paper describes the three-year journey taken to answer a critical question and some of the things the agency has learned along the way.
Reflections on the Value of Self-Evaluation of Programs in the African Development Bank
Presenter(s):
Foday Turay, African Development Bank, f.turay@afdb.org
James Edwin, African Development Bank, j.edwin@afdb.org
Mampuzhasseril Madhusoodhanan, African Development Bank, m.mampuzhasseril@afdb.org
Mohamed Manai, African Development Bank, m.manai@adb.org
Abstract: In the African Development Bank (AfDB), completed programs are self-evaluated by the Operational Departments to provide feedback on project results and to draw lessons for management. The resulting self-evaluation reports (SERs) are reviewed for quality by the AfDB's 'independent' Evaluation Department. This paper presents evaluative reflections on the value of SERs in the AfDB. Relying on value factors from the literature, it develops an analytical framework that includes SER audience, purpose, timing, quality, format, and context. It draws on perceptions of SER value among staff of the AfDB's operational and evaluation departments, and on the results of a review of 149 SERs prepared during 2009-10. Individual interviews were held with all AfDB operational and evaluation staff who were involved in the 2009-10 SERs. The emerging findings reflect differences in perceived value and point to coherence with some of the evaluation principles, especially credibility and usefulness.
Definitions and Dashboards: Data Quality in an International Non-Profit Education Organization
Presenter(s):
Michael Wallace, Room to Read, michael.wallace@roomtoread.org
Rebecca Dorman, Independent Consultant, rebeccashayne@hotmail.com
Wally Abrazaldo, Room to Read, wally.abrazaldo@roomtoread.org
Abstract: Since 2008, Room to Read's M&E system has included the collection of information on our Global Indicators (GIs), a combination of quantitative program accomplishments and program performance measures on all active projects that show progress toward our program objectives. This paper describes our experience in collecting, storing, analyzing, and reporting this information during the past three years:
2008: Developing a system for collecting, entering, cleaning, and analyzing indicator data; using multiple channels of communication between our headquarters and field offices.
2009: Getting definitions right; improving field-level ownership of data; streamlining communication with a single headquarters communication channel; explaining GI trends.
2010: Developing dashboards (online tools that show real-time performance and progress on key indicators) for communication of data quality issues; improving accountability for data timeliness and accuracy; comparing our internal program GIs with external data sources.
The paper concludes with lessons learned and challenges going forward.
Becoming Learning Organizations: Value and Usability of Evaluations in Bilateral Donor Agencies
Presenter(s):
Winston Allen, United States Agency for International Development, wallen@usaid.gov
Abstract: The value of evaluation as a source of learning has gained recognition among bilateral donor agencies. New evaluation policies have been adopted and structural changes made to enhance the role and function of evaluation. This trend has been fueled by political demand and by a corollary interest in rigorously demonstrating the effectiveness and impact of development programs. Agencies that have established evaluation policies in the last decade include USAID, NORAD, AUSAID, DANIDA, and DFID. Strengthening organizational evaluation capacity is not an end in itself; the value of evaluation as a learning tool lies in its use to make strategic, evidence-based decisions that maximize development program effectiveness. This paper presents an analysis of the evaluation policies of five bilateral agencies from the perspective of the value of evaluation as a source of learning. The results demonstrate that the credibility of evaluations is an important value for evaluation use.
