
Session Title: Supporting Value Judgments in Evaluations in the Public Interest
Panel Session 901 to be held in Pacific A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Government Evaluation TIG and the Presidential Strand
Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Discussant(s):
Michael Morris, University of New Haven, mmorris@newhaven.edu
Stephanie Shipman, United States Government Accountability Office, shipmans@gao.gov
Abstract: To better understand valuing in the public interest, it is important to encourage dialogue among evaluators in government and related organizations. This session features presentations by Francois Dumaine describing Canadian evaluations, Martin De Alteriis discussing GAO evaluations, and Christina Christie and Anne Vo presenting a model of the role of evaluators in valuing.
The Evaluator's Role in Valuing: Who and with Whom
Anne Vo, University of California, Los Angeles, annevo@ucla.edu
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Evaluation scholars and practitioners have dedicated much energy and effort to shaping and defining the program evaluation profession. However, careful examination of the program evaluation literature turns up only a few resources that describe and operationalize value judgments, the ways in which they are reached, and who is involved in this aspect of the evaluation process. We argue in this paper that the valuing act may be perceived in many different ways, and we consider the multiple theoretical perspectives that govern an evaluator's behavior. Based on this analysis, we develop a typology of evaluator valuing roles and suggest that value judgments are typically reached by stakeholders alone, by stakeholders and evaluators in concert, or by evaluators alone. This heuristic helps us gain a more explicit understanding of the valuing act and process as it occurs in the context of an evaluation.
Is Playing Hitman the Right Role for Evaluation?
Francois Dumaine, PRA Inc, dumaine@pra.ca
The process is secretive, which naturally fuels the fears of bureaucrats. Simply labeled "Program Review," this Canadian government initiative forces each department to assess all of its activities on a cyclical basis and to identify which ones must go. No way around it: at each review, at least five percent of current spending must be freed up to be reinvested. Evaluation reports have become a prominent tool for either guarding or targeting initiatives. And thanks to a revamped evaluation policy, all activities within each department must be evaluated on a cyclical basis, providing departments with ample evidence when engaging in a program review. Not surprisingly, as its role shifts and its actions become more consequential, program evaluation is being scrutinized. This presentation uses the program review experiment to explore the set of assumptions about public policy that drive program evaluation, and to assess its impact on the evaluation function.
Using Criteria to Assess Agency Monitoring and Evaluation: Recent Government Accountability Office (GAO) Assessments of U.S. Foreign Assistance Programs
Martin De Alteriis, United States Government Accountability Office, dealteriism@gao.gov
Some recent GAO engagements have examined the ways in which U.S. federal government agencies monitor and evaluate particular programs and activities. For example, in the area of U.S. foreign assistance, GAO has looked at the State Department's evaluation of certain public diplomacy programs, USAID's monitoring and evaluation (M&E) of its Food for Peace program, and USDA's M&E of its McGovern-Dole school feeding program. These engagements examined factors such as the performance measures used, the reporting of results, M&E policies and procedures, the staff devoted to M&E, and the evaluations conducted. This paper will focus on the types of criteria employed in those engagements; for example, GAO has used its own performance measurement standards and the AEA Evaluation Policy Task Force 'Road Map' standards. The paper will also discuss some of the challenges that can arise when using these sources of criteria, such as the difficulty of operationalizing concepts like 'Independence.'
