Session Title: Building Capacity to Monitor and Evaluate Development Policies, Programs, and Projects: Everyone Wants to Do It, but How Should It Be Done to Ensure High Quality?

Panel Session 617, to be held in REPUBLIC A on Friday, Nov 12, 1:40 PM to 3:10 PM

Sponsored by the International and Cross-cultural Evaluation TIG

Chair(s):
Linda Morra Imas, World Bank, lmorra@worldbank.org

Abstract:
Developing country governments around the world are planning and trying to implement results-based monitoring and evaluation (M&E) systems. The developed world has recognized the capacity gaps, and the Paris Declaration called on donors to increase technical cooperation and build capacity. The 2008 Accra Agenda for Action reinforced the need to improve partner countries’ capacities to monitor and evaluate the performance of public programs. Despite some effort, however, a late-2008 report shows that capacities remain weak in many governments: M&E implementation lags behind planning, and quality is generally low. Many are now engaged in renewed efforts to build M&E capacity in the development context. But what works? What capacity-building efforts are needed to yield good-quality M&E? This panel explores four types of M&E capacity-building efforts, experience with them based on actual cases, their advantages and disadvantages, and the factors important for translating them into behavior change and quality M&E.

Evaluation Capacity Building: Lessons Learned From the Field in Botswana

Robert Lahey, REL Solutions Inc, relahey@rogers.com

Bob Lahey presents hands-on work in Botswana to develop and put in place a national M&E capability. He identifies factors that supported the effort and those that did not, the advantages and disadvantages of such a field-based approach to evaluation capacity building, and lessons for the broader evaluation community on building quality M&E systems. M&E efforts in Botswana are driven by two key initiatives, Vision 2016 and Public Sector Reform, which led to a desire to increase “results” measurement and reporting. Factors identified and discussed as necessary for good-quality monitoring and evaluation development and implementation include: (1) drivers; (2) leadership and commitment; (3) capacity “to do”; (4) capacity “to use”; and (5) oversight. The lessons learned, however, are different.

Evaluation Capacity Building in the International Context: View From the Academy

Robert Shepherd, Carleton University, robert_p_shepherd@carleton.ca
Susan Phillips, Carleton University, susan_phillips@carleton.ca

The authors argue that quality M&E in the development context requires more than skilled evaluators who are good at measurement and can monitor and evaluate projects. Evaluators also need to understand the international development context and be able to handle greater levels of complexity, including program, joint-program, organizational, and country-level evaluations. Even this, however, will not build sufficient capacity unless the demand side is also built: educating public managers who inculcate an evaluative culture in their organizations, recognize quality evaluations, and demand quality M&E in their programs. Universities can play a major role in building such capacity by offering appropriate degrees with relevant content and undertaking related research, but relevance, rigor, and reach are critical. The authors explain what this would mean and the major challenges to accomplishing it, illustrating parts of the vision with programs now underway.

Building Capacity in Development Evaluation: The International Program in Development Evaluation Training

Linda Morra Imas, World Bank, lmorra@worldbank.org

This presentation describes the origin and nature of this now 10-year-old experiment in building capacity in international development evaluation, and who it trains. A collaboration of the World Bank and Ottawa’s Carleton University, the program has evolved and grown offshoots in the form of customized local, national, and regional offerings, some of which are now becoming institutionalized themselves. Several thousand people have taken IPDET training in Canada or through one of its offshoots. The advantages and disadvantages of the approach are discussed, as well as how the program itself is evaluated, both to help ensure high-quality M&E training and to learn how, and to what extent, it contributes to the practice of high-quality M&E in developing countries. Features that evaluations to date have found key to the program’s success are identified, along with challenges going forward.

Regional Centers for Evaluation Capacity Development

Nidhi Khattri, World Bank, nkhattri@worldbank.org

This session presents efforts to strengthen institutions at the regional level to supply cost-effective, relevant, and demand-driven quality M&E capacity-building services through a new initiative supported by multiple international development organizations: the Regional Centers for Learning on Evaluation and Results (CLEAR). CLEAR responds to high demand for, but limited availability of, relevant M&E services; scarce quality programs; and a small pool of local experts, which leave countries dependent on an international supply that is expensive, untimely, and not necessarily customized to specific needs. The initiative has two components: (1) regional centers that will provide applied in-region training, technical assistance, and evaluation work; and (2) global learning to strengthen practical knowledge-sharing across regions on what works in M&E, what does not, and why. The advantages and disadvantages of the approach are discussed, as well as the major challenges going forward and how the quality of the M&E capacity-building services will be evaluated.