Session Title: Contextual Influences on Evaluation Practice
Multipaper Session 399 to be held in Suwannee 16 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Tarek Azzam,  Claremont Graduate University, tarek.azzam@cgu.edu
Evaluation in California Foundations
Presenter(s):
Susana Bonis, Claremont Graduate University, susana.bonis@cgu.edu
Abstract: This study adds to the growing literature on evaluation in foundations. Representatives of twenty of California's top foundations (by giving) were interviewed in a semi-structured format to determine how each foundation practices program evaluation, what it expects from its grantees with regard to program evaluation, and how actions by its leadership and board of directors influence program evaluation within the foundation. Recent trends in program evaluation in foundations, and their implications, will be discussed.
The Relationships Between Involvement and Use in the Context of Multi-Site Evaluation
Presenter(s):
Frances Lawrenz, University of Minnesota, lawrenz@umn.edu
Jean A King, University of Minnesota, kingx004@umn.edu
Stacie Toal, University of Minnesota, toal0002@umn.edu
Denise Roseland, University of Minnesota, rose0613@umn.edu
Gina Johnson, University of Minnesota, john3673@umn.edu
Kelli Johnson, University of Minnesota, johns706@umn.edu
Abstract: This research project examined involvement in, and use of, evaluation processes and outcomes in four multi-site National Science Foundation (NSF) programs. Although NSF was the primary intended user, this research looked at involvement and use by secondary users: the individual projects comprising each NSF program and people in the evaluation and science, technology, engineering, and mathematics (STEM) education fields. The research used cross-case analysis to examine data from surveys of participating projects, interviews with project PIs and evaluators, citation analysis, a survey of the STEM education and evaluation fields, and discussions with evaluators of NSF programs. Several themes emerged that affect the relationship between involvement and use: evaluator credibility, interface with NSF, life cycles, project control, tensions, and community and networking. The interaction of these themes with the relationship between involvement and use is complex and necessitated examination of unintended users.
Understanding How Evaluators Deal With Multiple Stakeholders
Presenter(s):
Michelle Baron, The Evaluation Baron LLC, michelle@evaluationbaron.com
Abstract: This paper explains the implications of a qualitative study for the broader question of what it means for an evaluator to deal with conflicting values among stakeholders, describes what practicing evaluators do when faced with conflicting stakeholder values, examines how current evaluation approaches might clarify what is going on in evaluator practice, and begins working toward a descriptive approach to evaluator-stakeholder interaction. While a plethora of literature links theory and practice (Christie, 2003; Fitzpatrick, 2004; Schwandt, 2005; Shaw & Faulkner, 2006), few authors advocate descriptive approaches (Alkin, 1991; Alkin, 2003; Alkin & Ellett, 1985) that document how evaluators actually practice evaluation, and prescriptive approaches continue to dominate the evaluation spotlight. This paper provides a foundation for descriptive theory development and expanded evaluator training that gives evaluators at all levels and in all disciplines timely, accurate, and concrete examples of evaluator roles and decision-making processes.
