Session Title: Evaluation and Quality: Examples From Government
Multipaper Session 237 to be held in CROCKETT D on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Sam Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
The Long and Winding Road: How the Integration of Evaluation Performance Measures and Results Can Lead to Better Quality Evaluations
Presenter(s):
Gale Mentzer, University of Toledo, gale.mentzer@utoledo.edu
Abstract: Evaluation plans and logic models depict inputs, activities, outputs, outcomes, outcome indicators, and their accompanying performance measures. When evaluation models are created, the evaluator draws on research and theory, experience, and conjecture to determine the most appropriate indicators and data collection methods or performance measures for the stated outcomes. These designs are grounded in best practices and good intentions, but often an evaluator cannot predict precisely what the best manifestation of an outcome is or when and where it might occur. This presentation follows the path the evaluation plan of a large, federally funded project took as it attempted to uncover why one outcome was not being achieved when all the inputs suggested it should be. It demonstrates that by cross-referencing evaluation findings from other outcomes within the project, the evaluator was able to identify confounding variables and a more effective method of measuring the construct.
The External Reviewer's Role in Helping Promote Evaluation Quality: Examples From the Government Accountability Office's (GAO) Recent Experience
Presenter(s):
Martin de Alteriis, United States Government Accountability Office, dealteriism@gao.gov
Abstract: One way in which the U.S. Government Accountability Office (GAO) improves the quality of federal government evaluation is by reviewing agencies’ evaluation practices, and making recommendations to increase efficiency and effectiveness. While some GAO reviews have been meta-evaluations, most have examined the ways in which the agencies evaluated their own programs or activities. Typically, these reviews took a “good government” perspective, and focused on elements such as evaluation planning, data collection, and the use of results. A few reviews, however, used criteria specific to the evaluation profession; for example, two recent reviews relied on AEA’s Evaluation Policy Taskforce’s (EPTF) criteria for integrating evaluation into program management. The recommendations GAO made were accepted by the majority of the agencies, which subsequently took actions to improve quality. This presentation will discuss and illustrate how GAO reviews agency evaluation practices and the measures GAO recommends to improve quality.
Evaluating Data Quality in the Veterans Health Administration All Employee Survey
Presenter(s):
Katerine Osatuke, United States Department of Veterans Affairs, katerine.osatuke@va.gov
Scott C Moore, United States Department of Veterans Affairs, scott.moore@va.gov
Boris Yanovsky, United States Department of Veterans Affairs, boris.yanovsky@va.gov
Sue R Dyrenforth, United States Department of Veterans Affairs, sue.dyrenforth@va.gov
Abstract: The Veterans Health Administration (VHA) All Employee Survey (AES) is a voluntary annual survey of workplace perceptions (2008: N=164,502, 72.8% response rate; 2004: N=107,576, 51.75% response rate). AES results are included in action plans at VHA facilities, at the regional level, and nationally, and the dissemination of results and implementation of improvements are included in the performance standards for managers and executives. Such broad use of AES results underscores the importance of data quality. We examined data quality issues in two years of the survey, analyzing survey response and item nonresponse rates as a function of respondents’ demographics, selected scores (e.g., satisfaction), and facility-level factors (incentives, organizational complexity). Variation in survey response rates and in rates of unanswered questions and survey breakoffs was unrelated to significant differences in mean survey scores across VHA facilities. Variation in demographics was significantly related to individual-level item nonresponse rates, but effect sizes were small.
Employment and Training Administration: Increased Authority and Accountability Could Improve Evaluation and Research Program
Presenter(s):
Kathleen White, United States Government Accountability Office, whitek@gao.gov
Ashanta Williams, United States Government Accountability Office, williamsa@gao.gov
Abstract: This paper presents findings of the U.S. Government Accountability Office’s (GAO) evaluation of the research structure and processes of the Employment and Training Administration’s (ETA) research and evaluation center at the Department of Labor. Using key elements identified in the American Evaluation Association’s (AEA) Roadmap for a More Effective Government and in the National Research Council’s assessments of federal research and evaluation branches, GAO researchers examined: 1) how ETA's organizational structure provided for research independence; 2) what steps ETA took to promote transparency and accountability in its research program; and 3) how ETA ensured that its research is relevant to workforce development policy and practice. Overall, GAO found that ETA's research center lacks independent authority for research, has limitations in its transparency and accountability processes, has not routinely involved stakeholders in developing its research agenda, and has been slow to address key policy issues.