
Session Title: Frameworks of Evaluation Use and Empirical Assessments
Multipaper Session 758 to be held in Calvert Ballroom Salon C on Saturday, November 10, 10:30 AM to 12:00 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Edward McLain,  University of Alaska, Anchorage,  ed@uaa.alaska.edu
Investing Stakeholders in the Process of Generating a Content-specific Evaluation
Presenter(s):
Susan Marshall,  Southern Illinois University, Carbondale,  smarsh@siu.edu
Joel Nadler,  Southern Illinois University, Carbondale,  jnadler@siu.edu
Nicholas Hoffman,  Southern Illinois University, Carbondale,  nghoff@siu.edu
Jack McKillip,  Southern Illinois University, Carbondale,  mckillip@siu.edu
Abstract: Southern Illinois University's School of Music requested Applied Research Consultants' (ARC) services to revise three teacher evaluation forms: Classroom Instruction, Instruction of Applied Music, and Directors of Ensembles. Because the School encompasses three distinct styles of teaching, a single standard evaluation form was not appropriate. ARC revised the original forms with the help of the Director of the School of Music and created four subscales across the three evaluation forms. The forms were further revised based on qualitative web survey responses from faculty and students. Investing the instructors in the process of revising the existing forms was important to ensure that the new forms would be used. Tenured faculty administered the revised evaluation forms to their students at the end of the Fall 2006 semester as a pilot test. The pilot data were examined with factor analysis and assessed for internal consistency, reliability, and discriminant validity.
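As a rough illustration of the kind of internal-consistency check mentioned in this abstract, the sketch below computes Cronbach's alpha for one subscale from hypothetical pilot ratings; the item names, data, and scale are illustrative assumptions, not the actual School of Music pilot data.

```python
# Minimal sketch: Cronbach's alpha for one hypothetical evaluation subscale.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are items on one subscale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot responses: rows are students, columns are subscale items.
ratings = pd.DataFrame({
    "clarity": [4, 5, 3, 4, 5],
    "organization": [4, 4, 3, 5, 5],
    "feedback": [3, 5, 4, 4, 4],
})
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```

In practice, an alpha would be computed for each of the four subscales separately, alongside the factor-analytic and discriminant-validity checks the abstract describes.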
An Evaluation Use Framework and Empirical Assessment
Presenter(s):
Laura Peck,  Arizona State University,  laura_r_peck@hotmail.com
Lindsey Gorzalski,  Arizona State University,  lindsey.gorzalski@asu.edu
Abstract: This proposed paper addresses the utilization of program evaluation. Substantial literature on evaluation utilization has focused on the incorporation of evaluation recommendations into program design. Prior theoretical research has examined evaluation use and has proposed varied frameworks for understanding the use (or lack thereof) of program evaluation results. Our research draws on these frameworks and attempts to create a single, integrated framework. More important, we argue, is the extent to which empirical research on evaluation use finds value in this framework. To this end, we rely on prior research regarding categories of utilization, typologies of recommendations, and factors affecting utilization to frame an empirical study of evaluation use. The empirical part of the paper draws on post-evaluation interviews with 19 agencies that have recently engaged in evaluation research. This work will be of broad interest to AEA members because of its conceptual, empirical, and applied foci.
The Continuous Quality Improvement (CQI) Benchmarking Initiative: Using Performance Measurement and Benchmarking to Support Organizational Learning
Presenter(s):
Brigitte Manteuffel,  Macro International Inc,  bmanteuffel@macrointernational.com
Sylvia Fisher,  United States Department of Health and Human Services,  sylvia.fisher@samhsa.hhs.gov
Gary Blau,  United States Department of Health and Human Services,  gary.blau@samhsa.hhs.gov
Abstract: The Continuous Quality Improvement (CQI) Benchmarking Initiative was implemented in 2004 by the Child, Adolescent and Family Branch of the Center for Mental Health Services to use evaluation data to support organizational learning and technical assistance planning for federally funded community-based children's mental health service programs. This presentation will provide an overview of a data-driven tool developed as part of this initiative that incorporates performance measures, benchmarks, a scoring index, and a communication feedback process to support program development and improvement. The process and analytic techniques for identifying indicators and benchmarks and for developing the scoring index will be discussed. In addition, performance data will be presented to highlight progress toward program benchmarks. Attendees will learn how evaluation data can be used for performance measurement in a manner that is utility-focused and meets the needs of program administrators and evaluators.
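As a rough illustration of how a benchmark scoring index of the kind this abstract describes might be computed, the sketch below scores hypothetical indicators against assumed benchmarks; the indicator names, values, weights, and scoring rule are illustrative assumptions only and do not reflect the actual CQI measures or index.

```python
# Minimal sketch: weighted share of benchmarks met, scaled to a 0-100 score.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float      # observed program performance
    benchmark: float  # target to meet or exceed
    weight: float = 1.0

def cqi_score(indicators: list[Indicator]) -> float:
    """Return the weighted percentage of benchmarks the program has met."""
    total_weight = sum(i.weight for i in indicators)
    met_weight = sum(i.weight for i in indicators if i.value >= i.benchmark)
    return 100.0 * met_weight / total_weight

# Hypothetical indicators for one program site.
example = [
    Indicator("families enrolled within 30 days (%)", 82.0, 80.0),
    Indicator("caregiver satisfaction (%)", 74.0, 85.0),
    Indicator("school attendance improvement (%)", 61.0, 60.0, weight=2.0),
]
print(f"Benchmark score: {cqi_score(example):.0f} / 100")
```

A score like this could then feed the communication feedback process the abstract mentions, flagging indicators that fall below their benchmarks for technical assistance planning.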
Does Performance Measurement Facilitate Learning?
Presenter(s):
Leanne Kallemeyn,  University of Illinois at Urbana-Champaign,  kallemyn@uiuc.edu
Abstract: A notable expansion of evaluation is the 'performance measurement' movement, which emphasizes local-level measurement of specified performance indicators or program outcomes. A defining characteristic of performance measurement systems is that they assess a predefined set of indicators on a routine basis. In this paper I argue that a main limitation of these systems is that, by their nature, they constrain what stakeholders can learn about social and educational programming. Based on past literature on performance measurement, I illustrate how learning is limited by which outcomes are assessed, how those outcomes are measured, and how results are interpreted and used. Drawing from the work of Lee Cronbach, I then argue that performance measurement systems ought to help 'society learn about itself.' Finally, I use a case example to illustrate how performance measurement systems can be used to facilitate learning.