
Session Title: Studying Process Use on a Large Scale
Multipaper Session 603 to be held in International Ballroom D on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Susan Tucker,  Evaluation and Development Association,  sutucker@sutucker.cnc.net
Evaluation in Post War Countries: Tools and Skills Required
Presenter(s):
Mushtaq Rahim,  ARD Inc,  mrahim@ardinc.com.af
Abstract: Since 2000, Afghanistan has entered a new era, one that has also seen interventions by many donor agencies and NGOs. Attention to the evaluation of results has peaked during this period. However, identifying results has been a major challenge, since one of the main sources of data collection is direct interviewing of beneficiaries. Due to low literacy rates, beneficiaries are often unable to comprehend the purpose of an evaluation and therefore do not provide accurate data. On the other hand, funds are being awarded to NGOs in the country without consideration of their past experience; the intent is only to spend the money and obtain outputs, without focusing on the long term. There has hardly been a single ex-post evaluation of any program or project. Hence, evaluation is rarely used to inform future project design.
Not by the Books: Models, Impacts and Quality in Ninety Evaluations
Presenter(s):
Verner Denvall,  Lund University,  verner.denvall@soch.lu.se
Abstract: A Swedish metropolitan policy has spent approximately $500,000,000 with the purpose of reducing social, ethnic, and discriminatory segregation and increasing sustainable growth in 7 municipalities and 24 city neighborhoods. The program has attracted about 90 evaluators from universities and companies, and as a result almost one hundred evaluations were produced between 1999 and 2006. In a research project, those evaluations have been analyzed, the evaluators interviewed, and 400 administrators and project leaders surveyed. The paper focuses on the evaluation models in use and on the impact and quality of those evaluations. Why does quality not seem to correspond to impact? And how can we understand that the evaluators appear to have adopted a narrative model of their own, nowhere near the models presented in the textbooks?
Learning From Evaluations in National Governments of Developing Countries: The Case for Sub-Saharan African Countries
Presenter(s):
Rosern Rwampororo,  Ministry of Economic Planning and Development,  rwampororor@mepdgov.org
Rhino Mchenga,  Ministry of Economic Planning and Development,  rhinomchenga@yahoo.co.uk
Abstract: Drawing on experiences from Uganda and Malawi, the paper is premised on the assumption that learning is a process that enhances individual and collective capacity within national governments to create the results that governments desire, now and in the future. The paper analyzes the level of monitoring and evaluation (M&E) capacity in each country to explain any differences in learning from evaluations that arise when attention is paid to the monitoring function at the expense of the evaluative function. It goes further to debunk the assumption that learning is a collective process that enhances capacity for decision-making and action, when in reality it may be individualistic. The paper demonstrates not only the weak culture of using monitoring information and evaluation findings but also the political influences that affect decisions at various levels.
On the Value-added of the Evaluation Process: Investigating Process Use in a Government Context
Presenter(s):
Courtney Amo,  Social Sciences and Humanities Research Council of Canada,  courtney.amo@sshrc.ca
J Bradley Cousins,  University of Ottawa,  bcousins@uottawa.ca
Abstract: This paper reports preliminary results of a study that examined the link between process use and use of evaluation findings in the Canadian federal government context. The study involved a pan-Canadian survey of evaluation practitioners in the federal government and an in-depth case study of a Crown corporation of the Government of Canada. It examined factors such as the context in which an evaluation is conducted, the individuals involved in the evaluation, and the form of systematic inquiry used, and how these factors and characteristics fostered evaluation utilization and, in particular, process use. The results of this study help further our understanding of process use as a worthwhile consequence of evaluation and as a means of deriving additional benefits from the evaluation process.