
Session Title: Methodology and Tools for Assessing the Quality of Evidence
Panel Session 517 to be held in Centennial Section F on Friday, Nov 7, 9:15 AM to 10:45 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Nancy Kingsbury,  United States Government Accountability Office,  kingsburyn@gao.gov
Discussant(s):
Leslie J Cooksy,  University of Delaware,  ljcooksy@udel.edu
Abstract: Evaluators in all contexts are responsible for assuring that evaluation work follows good practice guidelines (e.g., AEA's Guiding Principles, the Joint Committee Standards). Evaluators are also called upon to make judgments about the quality of others' studies. Such judgments may take the form of a metaevaluation of a single study, an assessment of study designs for their capacity to respond to particular policy questions, or, among other purposes, an appraisal of study quality to determine whether extant evaluations should be included in a meta-analysis of intervention effectiveness. Evaluators have devised heuristic tools such as checklists for making such judgments and scorecards for presenting summary snapshots of quality judgments to time-burdened decision makers. The first paper provides an overall framework within which the tools for assessing quality described in the other papers can be situated. The Discussant will focus on the challenges and opportunities evaluators face when assessing quality through metaevaluation and its associated tasks.
Metaevaluation: A Conceptual Framework for Improving Practice
Valerie Caracelli,  United States Government Accountability Office,  caracelliv@gao.gov
Leslie J Cooksy,  University of Delaware,  ljcooksy@udel.edu
Metaevaluation, a mechanism for improving evaluations and informing stakeholders of their strengths and weaknesses, is specifically called for in the Joint Committee's Program Evaluation Standards. A research program to clarify the current state of conceptual and technical development in metaevaluation was begun in 2007. The Metaevaluation Project (Cooksy & Caracelli) has examined misunderstandings in terminology, the purposes metaevaluation serves, the criteria of quality used, and its range of use in practice. The practice of metaevaluation as defined by the Joint Committee Standards remains limited, yet metaevaluative tasks requiring judgments of quality are frequently a part of other evaluation methodologies such as peer review, research synthesis, and meta-analysis. Drawing on the first phase of the project, we will describe the theory and methodology associated with metaevaluation, including the criteria of quality (standards, principles, paradigm specific) used in practice. This paper sets the stage for the metaevaluative tools described in the subsequent presentations.
Using Checklists to Assess Design Quality: Applications and Utility
Cindy K Gilbert,  United States Government Accountability Office,  gilbertc@gao.gov
Evaluators at the U.S. Government Accountability Office (GAO) are often asked to assess the quality of evaluation designs developed by federal agencies and other research entities, at times making recommendations for improvement. For example, a recent report outlined the strengths and weaknesses of a Department of Defense (DoD) pilot program evaluation design, highlighting specific changes that could improve the validity of the study. Several types of evaluation checklists are available to guide such efforts and are used to varying degrees at GAO. This paper will discuss the available evaluation checklists, the characteristics these checklists have (and do not have) in common, the extent to which these and other frameworks are used at GAO, and their utility in a government auditing and evaluation setting.
Ensuring Quality in 'Score Card' Methodologies
Martin de Alteriis,  United States Government Accountability Office,  dealteriism@gao.gov
A number of U.S. federal government agencies use 'score cards' or 'report cards' to assess key aspects of performance. For example, this methodology can be used to assess whether selected agencies have fully implemented, partially implemented, or taken no steps to implement a set of good practices. While 'score cards' allow for succinct presentations and definitive findings, the methodology also has some potential limitations, including possible oversimplification and the 'shoehorning' of some issues into categories for which they are not a particularly good fit. Based on a review of reports that have used score cards, and input from evaluation specialists both inside and outside of government, this presentation will: 1) discuss the advantages and limitations of the score card methodology; 2) lay out key decisions that need to be made when constructing and using a score card; and 3) provide guidance on how quality can be ensured.
Shaping a Quality Product: A Balanced Approach to Assessing the Quality of Performance Audit in the United Kingdom
Jeremy Lonsdale,  National Audit Office United Kingdom,  jeremy.lonsdale@nao.gsi.gov.uk
Being seen to produce 'high quality' work is essential to the credibility of all evaluators. This paper will examine the quality assurance arrangements of the National Audit Office (NAO) in the United Kingdom for its performance audit work, a form of evaluation that plays an important role in assessing the success of government programs. In particular, it will consider: How does the NAO define the quality of its products? Where do the criteria it uses come from? How does the NAO measure and monitor quality? And how is it trying to raise the quality of its work? The paper will assess how the NAO's approach to 'quality' has developed over the years, influenced in part by other disciplines. It will draw on a range of evidence, including 15 years of independent quality reviews, and highlight the current 'balanced scorecard' arrangements designed to meet the expectations of different audiences.