Session Title: Study Quality in Context: Mechanisms to Improve and Assess Evaluations

Panel Session 715 to be held in Sebastian Section I4 on Saturday, Nov 14, 9:15 AM to 10:45 AM

Sponsored by the Research on Evaluation TIG

Chair(s):
Jeremy Lonsdale, National Audit Office United Kingdom, jeremy.lonsdale@nao.gsi.gov.uk

Discussant(s):
Robert Orwin, Westat, robertorwin@westat.com
Abstract:
Study quality is inherently linked to context, yet context also shapes how quality is perceived and the purposes evaluation serves. The first paper reflects on practice: by comparing and contrasting multiple perspectives and contexts, it develops a framework to help situate the definitions and uses of metaevaluation in theory and practice. The second paper looks across evaluation recommendations made by GAO in its oversight role. Sophistication about context, a key GAO standard, helps it examine agency performance, and GAO reviews have resulted in a variety of recommendations to assist agencies in improving their evaluations and evaluation capacity. In the U.K. value-for-money context, the third paper argues for a more sophisticated use of models and methods to evaluate programs with complicated delivery mechanisms situated in complex environments. The last paper refers back to the framework, draws on practice, and examines systematic reviews and associated metaevaluative tasks for assessing or synthesizing evaluations in their context.
Untangling Definitions and Uses of Metaevaluation

Leslie J Cooksy, University of Delaware, ljcooksy@udel.edu
Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov

The term "metaevaluation" is used in multiple ways, and the practice of metaevaluation serves multiple purposes. For example, "metaevaluation" has been used to refer to the evaluation of an individual evaluation, on the one hand, and as a synonym for meta-analysis, on the other. Metaevaluation serves as a means of assessing study quality in meta-analysis, but the terms are not interchangeable. As an activity, metaevaluation has also figured in evaluation synthesis, in the assessment of capacity needs, and as one of the Program Evaluation Standards (Joint Committee, 1994). This paper will untangle the definitions and uses of metaevaluation and provide a framework for the concept. It will also provide a backdrop for the approaches to metaevaluation in practice illustrated by the other papers in this session.
The External Review and Oversight of Evaluation: Some Insights From the Experiences of the US Government Accountability Office (GAO)

Martin de Alteriis, United States Government Accountability Office, dealteriism@gao.gov

GAO, the oversight body for the United States Congress, promotes evaluation in a variety of ways. It conducts evaluations of federally funded programs, supports capacity building in the federal government, and examines how federal agencies evaluate their programs. This last oversight role can lead to metaevaluation and to recommendations to agencies about their evaluation practices. GAO's publication database lists well over one hundred reports from the last three years that made recommendations about evaluation. A preliminary examination of this database shows that GAO has recommended improvements in the following areas: Considering Evaluation in Program Design and Implementation; Planning for Evaluations; Conducting Impact Evaluations; Strengthening Existing Evaluations; Using the Results of Evaluations; and Improving Agencies' Ability to Evaluate. A subset of reports will be reviewed to determine the issues, researchable questions, and methods used to support the recommendations, as well as the implications for improving agency evaluation in the federal government.
Evaluating Complex Systems and Contexts: The Use of Operational Modeling in Performance Audit

Elena Bechberger, National Audit Office United Kingdom, elena.bechberger@nao.gsi.gov.uk

The UK's National Audit Office (NAO) produces 60 reports annually that assess the economy, efficiency, and effectiveness with which central government bodies use public money. Evaluating the 'value for money' of programs and organizations is often challenging, particularly where delivery chains are complex and performance is strongly influenced by the social and economic context in which programs operate. To deal with these challenges, the NAO has made increasing use of innovative methods, including advanced quantitative techniques such as operational modeling. This paper discusses the use of systems analysis frameworks to evaluate complex delivery chains in complex environments. Based on practical examples from NAO studies over the previous two years, it outlines both the value such approaches can add to assessing and improving performance and the limitations and difficulties associated with using them in the context of performance audits.
Metaevaluative Tasks in Evaluation Synthesis and Evidence-Based Practice Grading Systems

Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov
Leslie J Cooksy, University of Delaware, ljcooksy@udel.edu

Drawing on aspects of the framework discussed in the first paper, as well as empirical examples from GAO and other contexts, this presentation addresses metaevaluation projects that examine a body of evidence pertaining to specific policy questions. Metaevaluative tasks requiring judgments of quality are frequently part of such evaluation methodologies as peer review, evaluation synthesis, and meta-analysis. As governments, nonprofits, and other funders face fiscal constraints, the question of "what works" takes on heightened importance. Systematic reviews in the contexts of health, education, justice, and social welfare are being conducted by agencies and organizations concerned with providing information to policy makers and practitioners to address important social problems. This paper will address the strengths and limitations of assessing study quality in the context of such evidence-based grading systems. It will also contrast this use with metaevaluations conducted for other purposes, including serving as a standard of evaluation practice.