
Session Title: When to Use Evaluation Findings: A Decision Maker's Dilemma - Grappling With the Methodological, Ethical, and Political Implications of Less Than Optimal Evaluation Findings
Multipaper Session 613 to be held in Avila A on Friday, Nov 4, 10:45 AM to 11:30 AM
Sponsored by the Evaluation Use TIG
Chair(s):
John LaVelle, Claremont Graduate University, john.lavelle@cgu.edu
Abstract: Program evaluation can be viewed as a continuum, with formative evaluations guiding program improvement during a program's early stages and summative evaluations measuring the impacts of "proud programs." But when should an evaluation be considered summative? When is the appropriate time to use evaluation findings to guide practice? The first paper shares findings from a randomized control trial of a learning community program that aims to increase college student success. While the program was found to have a positive impact on student success in a previous quasi-experimental analysis, the experimental design did not find the same impacts. In fact, when the data were disaggregated by social identity group (e.g., race/ethnicity, income), the program may have had a negative impact on particular subgroups. The second paper uses these findings to discuss the methodological, ethical, and political debates that accompany the use of "less than optimal" evaluation findings.
The Learning Community Lottery! Mixed-Methods Randomized Control Trial Measuring the Impact of Participation in a First-Year Learning Community Program
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
Learning communities are intended to help freshmen integrate into the college or university campus by purposefully enrolling clusters of students in the same courses, thereby engaging them in the classroom. A first-year learning community program at the University of California, Riverside implemented a randomized control trial to understand the program's impact on student retention, units completed, likelihood of passing the entry-level writing requirement, time to major declaration, and grade point average. An additional survey measured the program's impact on student engagement. The program's impact was measured by comparing performance measures across the treatment and control groups and then disaggregating the results by race/ethnicity, gender, and socio-economic status. While the program was found to have a positive impact on several student success measures in a previous quasi-experimental analysis, the experimental design did not find the same impacts. In fact, when the data were disaggregated, the program may have had a negative impact on particular subgroups.
A Decision Maker's Dilemma
David Fairris, University of California, Riverside, david.fairris@ucr.edu
Melba Castro, University of California, Riverside, melbac@ucr.edu
The evaluation literature offers a variety of methods for increasing the utilization of evaluation findings; however, the question of "when" evaluation findings should be used is rarely debated. This second paper focuses on the process of determining the appropriate time for utilization, using a case example from a randomized control trial evaluation of a student support program. The case highlights the methodological, political, and ethical debates and implications that often accompany the decision-making process. Both the evaluator and the decision maker will present their perspectives on the tension of evaluation utilization in the face of uncertainty and offer suggestions for practicing evaluators who may face similar dilemmas.
