Evaluation 2011



Session Title: Value in Government Evaluation: Multiple Perspectives
Multipaper Session 575 to be held in Huntington B on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
David Bernstein,  Westat, davidbernstein@westat.com
Evaluation of Evaluators Who Rate Proposals
Presenter(s):
Randall Schumacker, University of Alabama, rschumacker@ua.edu
Abstract: Federal funding agencies receive thousands of grant proposals each year and distribute millions of dollars in research funding annually. Each agency solicits professional reviewers from numerous academic disciplines to evaluate these proposals. A fundamental requirement should be that the review process is auditable and conducted fairly and objectively through peer review. This paper demonstrates a methodology for achieving accountability in the peer review of grant proposals. Many-facet Rasch analysis adjusts rating scores to yield a "fair average" that removes reviewer leniency or severity from the ratings. The adjustment is based on a single administration of proposals for review, does not require that every reviewer rate every proposal, and permits a comparison of summative raw-score rankings to the fair averages. A heuristic example demonstrates the many-facet Rasch methodology with a group of proposal reviewers.
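The core idea of the fair-average adjustment can be illustrated with a toy sketch. Note that this is a deliberately simplified mean-centering illustration, not a full many-facet Rasch model (which jointly estimates facet parameters via maximum likelihood); the reviewer/proposal names and scores below are hypothetical.

```python
# Toy illustration of removing reviewer leniency/severity from ratings.
# NOT a many-facet Rasch model: a real MFRM jointly estimates reviewer,
# proposal, and scale parameters; here severity is just a mean offset.

# (reviewer, proposal) -> raw score on a 1-5 scale (hypothetical data;
# note that not every reviewer rates every proposal)
ratings = {
    ("R1", "P1"): 4, ("R1", "P2"): 5,
    ("R2", "P2"): 2, ("R2", "P3"): 3,
    ("R3", "P1"): 3, ("R3", "P3"): 4,
}

overall_mean = sum(ratings.values()) / len(ratings)

# Estimate each reviewer's severity/leniency as the deviation of their
# mean rating from the overall mean.
reviewers = {r for r, _ in ratings}
severity = {}
for r in reviewers:
    scores = [s for (rv, _), s in ratings.items() if rv == r]
    severity[r] = sum(scores) / len(scores) - overall_mean

# "Fair average" per proposal: subtract each reviewer's severity from
# their rating before averaging, so lenient and severe reviewers no
# longer distort the ranking.
proposals = {p for _, p in ratings}
fair_avg = {}
for p in proposals:
    adjusted = [s - severity[r] for (r, pp), s in ratings.items() if pp == p]
    fair_avg[p] = sum(adjusted) / len(adjusted)
```

With these hypothetical data, R1 rates one point above the overall mean and R2 one point below, so their scores are shifted before proposals are compared; a raw-score ranking would have rewarded proposals that happened to draw the lenient reviewer.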
Using Interim Findings of a Multi-year National Evaluation to Inform Program Guidance
Presenter(s):
Eileen Chappelle, Centers for Disease Control and Prevention, echappelle@cdc.gov
Lazette Lawton, Centers for Disease Control and Prevention, llawton@cdc.gov
Diane Dunet, Centers for Disease Control and Prevention, ddunet@cdc.gov
Abstract: Waiting until the end of a multi-year evaluation to consider findings means missing opportunities to use interim evaluation data for program improvement. The Centers for Disease Control and Prevention's Division for Heart Disease and Stroke Prevention is sponsoring a multi-year evaluation to assess the outcomes of the National Heart Disease and Stroke Prevention Program. In this project, CDC evaluators are working in close collaboration with CDC program staff to periodically review interim evaluation findings with the intent of improving the guidance and technical assistance CDC provides to funded programs. Interim findings have also been shared with funded programs to facilitate reflection on where resources and activities are being directed. We will demonstrate how interim evaluation findings can be used to improve programs and support funded programs' ability to reach intended goals.
The V in VFM: Value, Values and Assessing Value for Money
Presenter(s):
Jeremy Lonsdale, National Audit Office, United Kingdom, jeremy.lonsdale@nao.gsi.gov.uk
Abstract: Value for money audit - a variant of performance audit - is a significant evaluative activity in a number of countries, as examined in a recent book, 'Performance Audit: Contributing to Accountability in Democratic Government' (Lonsdale et al., 2011). It has a statutory role in assessing the economy, efficiency and effectiveness with which governments use public resources. The UK National Audit Office has recently given increased attention to how it assesses and communicates whether value for money has been secured on particular programmes and projects. This is of particular interest at a time when major public spending cuts are being introduced and there is concern that public value will be lost. This paper examines what the NAO means by 'value for money', and what its approach and philosophy say about what the organisation considers to be important values in the delivery of public services.
We Have a Performance Measurement Framework...So Where's the Data? Let Sleeping Dogs Lie or Just Make the Case for Qualitative Methodologies!
Presenter(s):
Sandra L Bozzo, Ontario Government, sandra.bozzo@ontario.ca
Abstract: This paper examines the challenges of, and explores viable options for, addressing evident data gaps in performance measurement for Aboriginal initiatives in an Ontario government context. Faced with a small number of reliable quantitative data sources, limited willingness to collect data, and programmatic/administrative data lacking population identifiers, government is left with few options on the performance measurement front. These internal challenges are compounded by the external community realities of self-determination and the need for self-governance. There is a perceived tension between quantitative and qualitative methodologies that would appear to be best left unresolved in government. While qualitative methods are often a hard sell in government, performance measurement in an Aboriginal context inevitably necessitates approaches that are consistent with Aboriginal approaches to data collection and traditional ways of knowing.

