Evaluation 2008


Session Title: Measuring Use and Influence in Large Scale Evaluations
Multipaper Session 608 to be held in Mineral Hall Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Susan Tucker, E & D Associates LLC, sutucker@sutucker.cnc.net
Retaining Relevance in Evaluation: Evaluation for Learning or Liability?
Presenter(s):
Laurie Moore, Mid-continent Research for Education and Learning, lmoore@mcrel.org
Sheila A Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
Abstract: Evaluation for accountability has its place in practice. However, educational evaluation for accountability narrows the rich variety of definitions of evaluation used by practitioners to a single one that speaks only to the evaluand's success in meeting intended outcomes and its worthiness of continued or expanded funding. In this paper, we argue that this narrowing precludes other important outcomes of evaluation, particularly those tied to purposes such as guiding and improving educational decision-making, program planning, and policy-making, or enhancing reasoning abilities. We propose seeking a balance between evaluation for accountability and evaluation for these other important purposes – a balance that ultimately leads to the betterment of social conditions and benefits funding agencies, the organizations they support, and the beneficiaries of those organizations' work.
Identifying Factors Associated With Local Use of Large-Scale Evaluations: A Case Study
Presenter(s):
Tania Rempert, University of Illinois Urbana-Champaign, trempert@uiuc.edu
Abstract: This instrumental case study examines two primary questions: (1) How do local school-level programs use large-scale evaluation processes, information, and findings? and (2) What, in particular, about Evaluation R was useful? Evaluation R lends itself to an instrumental case study because it is unusual: a large-scale evaluation that purposefully aimed to influence local program implementation while aligning its methods, strategies, and tools with federal evaluation requirements. Several of those methods, strategies, and tools are innovative examples of how large-scale evaluations can be useful at the local level. The case study was designed to let stakeholders at the federal, state, district, and school levels of Evaluation R voice their intended uses of the evaluation, and to collect the perspectives of local program implementers on the usefulness of Evaluation R.
The Development and Validation of Evaluation Use Scales in Large Multi-Site Program Evaluations
Presenter(s):
Kelli Johnson, University of Minnesota, johns760@umn.edu
Abstract: Despite widespread agreement in the field on the importance of evaluation use and influence, no validated measure has been identified to date. The purpose of this research is to validate a measure of evaluation use and influence in large multi-site program evaluations. The paper describes the development of scales to measure evaluation use from data obtained in an online survey of evaluators and Principal Investigators in four National Science Foundation (NSF) programs. Validity is demonstrated using both theoretical and empirical evidence. The study offers insight in two areas: first, a valid measure of use and influence will benefit the practice of evaluation by identifying factors critical to evaluation use; second, the study will contribute to research on evaluation use by providing an effective tool for measuring the use and influence of multi-site evaluations.
