Session Title: Reflecting on Practice: Strategies and Lessons Learned

Panel Session 413 to be held in Liberty Ballroom Section B on Thursday, November 8, 3:35 PM to 5:05 PM

Sponsored by the Theories of Evaluation TIG

Chair(s): Jeremy Lonsdale, United Kingdom National Audit Office, jeremy.lonsdale@nao.gsi.gov.uk

Discussant(s): Nancy Kingsbury, United States Government Accountability Office, kingsburyn@gao.gov

Abstract:
Reflecting on practice is a hallmark of the evaluation profession. Evaluations are improved, defended, and examined for strengths and limitations through systematic metaevaluation. Meeting this standard of practice is inextricably bound to the quality of our work and to the subsequent use and influence of evaluation findings. It is also linked to our confidence in the lessons we derive and promulgate for the purpose of improving and better managing programs. Constructive lessons learned are intended to build theory and evaluation capacity; identified good practices, in turn, are intended to provide policy makers and practitioners with knowledge and insights that can improve program design, implementation, and effectiveness. This panel presents several different perspectives on how we can productively and critically engage in reflective practice. It will raise issues about criteria for assessing quality, the mechanisms and effects of good practice advice, how to leverage evaluation impact, and the benefits, concerns, and questions we should raise.

The Practice of Metaevaluation: Does Evaluation Practice Measure Up?

Leslie Cooksy, University of California, Davis, ljcooksy@ucdavis.edu
Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov

Metaevaluation, a systematic review of an evaluation to determine the quality of its processes and findings, is one of the Program Evaluation Standards. Metaevaluation may be applied to a set of evaluations as a precursor to meta-analysis or evaluation synthesis, but how common is its use in the sense intended by the Program Evaluation Standards, that is, to ensure that 'stakeholders can closely examine the evaluation's strengths'? After a brief background on the theory of metaevaluation, the paper examines the practice of metaevaluation by the evaluation community. Specific questions that will be addressed are: How many metaevaluations can be readily identified? What criteria of quality are used, and how are they applied? The paper will provide information about the range of criteria and approaches to metaevaluation used in practice and raise awareness of metaevaluation as a method for assessing evaluation quality.

Simply the Best? Understanding the Market for Good Practice Advice From Government Research and Evaluation

Elena Bechberger, London School of Economics and Political Science, e.k.bechberger@lse.ac.uk

Many evaluations of government programs carried out by state audit bodies or government institutions now generate what is termed 'good practice' advice. Whether they are developing new policies or managing existing programs, public officials can turn to a wide variety of printed and online guides offering suggestions about how to do things properly. While there has been some recent research on the use of good practice advice, the foundational question of how good practices are identified, and whether they are in fact good practices, has been ignored. This paper examines how compilers of such advice in the United Kingdom identify and judge practices to be good, and it develops a typology of the different bases on which such judgments are formed. It goes on to set out how disseminators of such advice conceive of their audiences and sheds light on the receptivity to, as well as the effects of, that advice.

Assessing the Utilization and Influence of Evaluations

Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com

There is widespread concern that evaluations are underutilized. This paper reports on a recent World Bank study that identified a sample of evaluations of development programs and policies for which there was plausible evidence that the evaluations had been 'influential' and cost-effective. The cases illustrate the different ways that evaluations can be influential, many of them unanticipated and not all welcome, and they describe simple methodologies for attribution analysis and for assessing the cost-effectiveness of an evaluation. Evaluations are never the only factor influencing decision-makers, and a framework is proposed for leveraging evaluation impact through strategic interaction with the other sources of influence to which decision-makers are exposed. A mapping exercise is also proposed through which an agency could review its evaluations to assess their levels of influence and cost-effectiveness, identify the factors determining influence, and develop guidelines to enhance utilization.

What Questions Should We Ask About Lessons Learned?

Martin de Alteriis, United States Government Accountability Office, dealteriism@gao.gov

A wide variety of methods are used to derive lessons learned, from practitioners' anecdotal reports of their experiences in a particular activity to scholars' broad reviews of the state of knowledge in a particular field. The variety of methods used, and the rigor with which they are applied, may be a function of the knowledge base from which lessons are drawn. Few people dispute the value of well-supported lessons learned. A range of techniques, if implemented rigorously, could provide policy makers and practitioners with knowledge and insights that would improve program design and implementation. Lessons learned can draw on the experience of a large number of practitioners, as well as of experts and researchers, and they can link causes to effects in ways that should be beneficial for implementation. However, potential problems can arise, and it is important to be able to distinguish bona fide lessons learned from those derived through inadequate methodologies.