Evaluation 2011



Session Title: Voice and Representation in Federal-level Educational Evaluations: An Empirical Sampling
Panel Session 430 to be held in Laguna A on Thursday, Nov 3, 2011, 2:50 PM to 4:20 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Jennifer C Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Discussant(s):
Nora Gannon, University of Illinois at Urbana-Champaign, ngannon2@illinois.edu
Abstract: Much of the evaluation of large-scale federal programs in the US is conducted by large research and evaluation companies. This panel reports on a study of a sample of educational evaluations conducted over the past decade by three of these companies (Abt Associates, SRI, and the Urban Institute). The study addressed the following questions: In what areas of federal education policy do these companies conduct evaluations? What kinds of evaluations are conducted, in terms of key characteristics that include evaluation purpose and audience, methodology, criteria for judging program quality, evaluation thrust (formative, summative, critical), and dissemination? How do the educational evaluations conducted by these large companies map onto the range of evaluation approaches currently available in the field? Which critical dimensions of evaluation and stakeholder standpoint are well represented, and which may be left out?
A View From Above: An Overview of Selected Evaluation Studies and Their Location in the Current Theoretical Landscape in Evaluation
Tisa Trask, University of Illinois at Urbana-Champaign, ttrask2@illinois.edu
Jeehae Ahn, University of Illinois at Urbana-Champaign, jahn1@illinois.edu
This presentation will provide a descriptive portrait of the kinds of educational evaluations conducted by selected large research and evaluation companies. Following a brief discussion of our sampling logic and rationale, we will describe the overall character of these selected studies in relation to the educational domains and relevant policy contexts represented. Then we will highlight the key components of these evaluations, including purpose and audience, key evaluation questions, methodology, criteria for judging program quality, and other important dimensions. Building on this descriptive characterization, we will also examine how our selected sample of large-scale educational evaluations maps onto the various evaluation approaches available in the field, thereby locating these studies within the broader evaluation community and its current theoretical landscape.
Voice and Values: Stakeholder Representation in Evaluations Conducted by Large Research Firms
Ayesha Boyce, University of Illinois at Urbana-Champaign, boyce3@illinois.edu
Tim Cash, University of Illinois at Urbana-Champaign, tjcash2@illinois.edu
Peter Muhati, University of Illinois at Urbana-Champaign, mmuhati2@illinois.edu
Evaluators aim to surface multiple stakeholder values and give them representation both during the evaluation and in subsequent reports. However, stakeholders with more power are often given more attention. This presentation will examine how large research firms handle stakeholder voice and representation in their evaluations. Specifically, we will explore whose interests and values are included, whose are excluded, and how. Further, what are the implications of stakeholder exclusion in a democratic society? Raising such questions invites evaluators not only to consider their practice, but also the role and purpose of evaluation in a democracy. Lastly, and possibly most importantly, these questions force us to confront and challenge the many meanings of democracy expressed in evaluation. The sample for this presentation was drawn from educational evaluation reports published in the last 10 years by three large research firms.
A Close Examination of Policy-Relevant Education Evaluations: Criteria for Judging Quality
Matt Linick, University of Illinois at Urbana-Champaign, mlinic1@illinois.edu
Diane Fusilier-Thompson, University of Illinois at Urbana-Champaign, diat@illinois.edu
Research corporations in the United States perform many publicly funded and policy-relevant educational program evaluations. Often, their findings are used to inform policy makers who, in turn, make decisions that affect the lives of the recipients of these programs. In this presentation, we examine how research corporations make judgments about the quality and effectiveness of the educational programs they evaluate. We focus on whether such criteria are explicitly stated as a basis for quality judgments, and discuss the implicit criteria that surface during the course of our examination. We also examine how research companies judge the quality of their own methodologies and evaluations, and seek out the implicit and explicit criteria used to make these judgments. We further probe connections between the quality criteria used to judge a program and the methodology used to evaluate it.

