
Session Title: Evaluators Thinking Evaluatively About Use: Tips for the Trade
Multipaper Session 811 to be held in Texas E on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Helene Jennings, ICF Macro, helene.p.jennings@macrointernational.com
Reframing the Goals of an Evaluation During Program Dissolution: What Can Evaluation Offer?
Presenter(s):
Christine Doe, Queen's University at Kingston, christine.doe@queensu.ca
Michelle Searle, Queen's University at Kingston, michellesearle@yahoo.com
Lyn Shulha, Queen's University at Kingston, lyn.shulha@queensu.ca
Abstract: Current evaluation literature, in addition to focusing on judging worth and merit, emphasizes learning from and in evaluative contexts. A central purpose of program evaluation theory and practice is to contribute to a more refined understanding of complex social problems and of the programs intended to address these problems. Evaluations can promote learning in many ways; this paper explores what evaluations can offer the process of downsizing a program. Program dissolution is relatively undocumented in the recent literature, yet in some contexts it is, regrettably, inevitable. This case study explores the uses of an evaluation during program dissolution to understand ways to reduce a program's staff and clientele while maintaining key program elements for possible restructuring. Preliminary results suggest that when a program is being terminated, an evaluation that emphasizes use has the potential to maintain, and strategically determine, the critical structures and processes that allow a program to go dormant rather than disappear.
Using Evaluation Data for Secondary Independent Research: Lessons From Two Studies
Presenter(s):
Kari Nelsestuen, Education Northwest, kari.nelsestuen@educationnorthwest.org
Caitlin Scott, Education Northwest, caitlin.scott@educationnorthwest.org
Theresa Deussen, Education Northwest, theresa.deussen@educationnorthwest.org
Abstract: Evaluation is not synonymous with basic research, although the two share similar methods and rules of evidence. Because of these shared methods, evaluations often collect data that initially appear ripe for secondary analyses for research purposes. However, using evaluation data for secondary research analyses can be challenging because these data were not collected to answer traditional research questions. This paper describes those challenges, drawing on two research studies based on secondary analyses of survey and interview data from six evaluations. As many evaluators may find with their own evaluation data, the opportunity for secondary analyses initially seemed very promising and eventually led to publication. However, the nature of data collected for evaluation purposes introduced several challenges and methodological limitations in the secondary research studies.
Evaluator Perceptions of Process Use
Presenter(s):
Lennise Baptiste, Kent State University, lbaptist@kent.edu
Abstract: The lack of a working definition of process use poses two problems. First, evaluators are still unable to say definitively which examples of stakeholder behavior are in fact illustrations of process use. Second, even if evaluators can correctly identify process use, they need to be sensitized to its presence in their work settings in order to strengthen the recommendations they make for stakeholders. The presenter will share the findings of a study in which evaluators reviewed 33 examples of stakeholder behavior, drawn from reports of evaluations ranging from low to high stakeholder involvement, and reported their perceptions of process use. These findings can contribute to building the construct validity of process use.
Federal Mandates and Guidelines and How They Impact Program Evaluation
Presenter(s):
Andrea Wood, Western Michigan University, andrea.s.wood@wmich.edu
Rashell Bowerman, Western Michigan University
Gary Miron, Western Michigan University, gary.miron@wmich.edu
Patricia Moore, Western Michigan University, patricia.a.moore@wmich.edu
Abstract: This paper presents the findings from an exploratory study of evaluation practices involving federally funded programs and projects. The study examines the nature of evaluation processes within the context of federally funded grants to identify the associated challenges and the opportunities for evaluator growth. Surveys and interviews are used to collect data from program and project directors and principal investigators. The key question examined in the study is: What are the positive and negative implications of federal requirements for evaluation? Mandated evaluation guidelines from federal agencies have the potential to restrict or enhance the evaluator's work at every stage of an evaluation. On the positive side, these guidelines are likely to increase the number of evaluations undertaken. On the negative side, they may narrow an evaluation's focus and restrict the utility and purpose of evaluations.
Emerging Concepts and Tools for Developing and Assessing Evaluation Capacity of Educational Networks and Partnerships: An Iterative Approach
Presenter(s):
Ed McLain, University of Alaska, Anchorage, afeam1@uaa.alaska.edu
Susan Tucker, Evaluation & Development Associates, sutucker1@mac.com
Abstract: Building the capacity of school-based teams and larger networks to use improvement-oriented evaluation methodologies across diverse contexts, though exhorted by funding agencies, is itself rarely evaluated. The authors have been engaged in network capacity building since 2004 as part of a federally funded USDE Teacher Quality Enhancement (TQE) grant. Grounded in the context of nine high-need urban and rural Alaskan districts experiencing a crisis in attracting (and retaining) quality teachers, this session will demonstrate methods and tools for sustainable data teaming and evaluation use. Participants will gain a clearer understanding of the indicators of successful teaming and network development between the university and districts. Finally, we present a network development evaluation matrix.
