| Session Title: Does Aid Evaluation Work?: Reducing World Poverty by Improving Learning, Accountability and Harmonization in Aid Evaluation |
| Multipaper Session 817 to be held in Peale Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Michael Scriven, Western Michigan University, scriven@aol.com |
| Abstract: This session will discuss some fundamental and deeply embedded issues in aid evaluation, including the relatively low achievement of learning, limitations in accountability, and the lack of harmonization among donors and between governments and donors. Clearly these issues are inter-related, and we see them as not merely technical but deeply structural. The first presentation will examine these issues through a comparative study of nine development projects in Africa and Asia, and it will propose a newly refined framework of cost-effectiveness analysis for organizational learning and accountability. The second presentation will focus on the structural factors and arrangements that have led to serious positive bias and disinterest among stakeholders, through a systematic review of 31 evaluation manuals and their application. The third presentation will focus on the harmonization of current aid practices by reviewing existing tools and their actual uses, and it will identify challenges and opportunities along with some possible solutions. |
| Reducing World Poverty by Improving Evaluation of Development Aid |
| Paul Clements, Western Michigan University, paul.clements@wmich.edu |
| Although international development aid bears a heavy burden of learning and accountability, the way evaluation is organized in this field leads to inconsistency and positive bias. This paper first discusses structural weaknesses in aid evaluation. Next it presents evidence of inconsistency and bias from evaluations of several development projects in Africa and India. While this is a limited selection of projects, the form of the inconsistency and bias indicates that the problems are widespread. Third, the paper shows how the independence and consistency of evaluations could be enhanced by professionalizing the evaluation function. Members of an appropriately structured association would have both the capacity and the power to provide more helpful evaluations. To better support learning and accountability, these independent and consistent evaluations should be carried out using a cost-benefit framework. |
| Lessons Learned from the Embedded Institutional Arrangement in Aid Evaluation |
| Ryoh Sasaki, Western Michigan University, ryoh.sasaki@wmich.edu |
| In the past, several attempts at meta-evaluation have been conducted to answer a long-held question: Does aid work? Yet the general public still doubts aid's effectiveness and continues to ask the same question. One reason this question persists is that there may be serious flaws in current aid evaluation practice. In this paper, I will present the issues identified through a systematic review of 31 evaluation guidelines developed by multilateral and bilateral aid agencies, together with related review reports. I conclude that the identified issues are 'deeply embedded in institutional arrangements' rather than merely technical. They are: (i) the dominance of an agency's own value criteria under the guise of a 'mix of all stakeholders' values', (ii) the dependency of evaluators working under the title of external consultant, (iii) the modifiability of evaluation reports, and (iv) logical flaws in aid evaluation. The paper closes with some fundamental suggestions. |
| Hope for High Impact Aid: Real Challenges, Real Opportunities and Real Solutions |
| Ronald Scott Visscher, Western Michigan University, ronald.s.visscher@wmich.edu |
| The Paris Declaration demands mutual accountability and harmonization among all parties involved in international aid. The extreme challenge in Afghanistan is one of many examples of why the fate of freedom and democracy now depends on this. Yet the secret is out: succeeding in international development is tough, and everyone now knows that failure is the norm. Evaluators must recognize this situation as a historic opportunity to assume independence, "speak truth to power," and demand support for high-quality evaluation. By taking on this stronger role, monitoring and evaluation (M&E) will have the opportunity to meet its promise of inspiring real progress. But delivering mutual accountability, learning, and coordination will still be required. How will M&E deliver on these heightened demands? This presentation will help evaluators learn how new and improved M&E tools designed to meet these complex demands can be integrated into practical solutions for each unique context. |