Evaluation 2008



Session Title: Models and Frameworks of Evaluation and Meta-Evaluation
Multipaper Session 634 to be held in Room 113 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Rebecca Eddy,  Claremont Graduate University,  rebecca.eddy@cgu.edu
An Emergent Theory of Systems Change and Documenting that Systems Change with the Three I Model
Presenter(s):
Dianna Newman,  University at Albany - State University of New York,  dnewman@uamail.albany.edu
Anna Lobosco,  New York State Developmental Disabilities Planning Council,  alobosco@ddpc.state.ny.us
Abstract: A model for evaluating systems change has emerged that includes the Initiation, Implementation, and Impact of systems change efforts. The Three I Model was developed for use in a cross-site evaluation in the addictions field; it has had more than seven years of use in that original setting and has since been replicated in human services, education, and health programs. Currently, more than 30 examples of individual program use and five meta-evaluations are available for analysis. This work has led to the identification of common patterns and variables that indicate successful change and that support a replicable model for documenting change to program and organizational systems. The purpose of this paper is to present the model as it has emerged, to discuss the major areas in which change should be present, to summarize key cycles of change that evaluators should document to meet funder needs, and to provide a theoretical basis for the evaluation of systemic change efforts.
Metaevaluation: Prescription and Practice
Presenter(s):
Lori Wingate,  Western Michigan University,  lori.wingate@wmich.edu
Arlen Gullickson,  Western Michigan University,  arlen.gullickson@wmich.edu
Abstract: “For all the attention, interest, and advocacy, actual examples of metaevaluation are sparse” (Henry & Mark, 2003). To address this gap in the literature, this presentation will describe multiple metaevaluations of an evaluation of a National Science Foundation program. Over the course of eight years, these included four independent, external metaevaluations (conducted at the request of the lead evaluator), in addition to ongoing, formative metaevaluation by an advisory committee. Their foci included the overall evaluation and some of its component parts. Prescriptions for metaevaluation, such as those put forth by Stufflebeam (2000, 2001, 2007), Scriven (2007), and the Joint Committee on Standards for Educational Evaluation (1994), will be discussed in relation to the criteria, methods, findings, and utility of these real-world metaevaluation examples.
Evaluation Routines, Roles, and Responsibilities: A Practitioner’s Perspective of the Evaluation Process
Presenter(s):
Gary Skolits,  University of Tennessee,  gskolits@utk.edu
Jennifer Morrow,  University of Tennessee,  jamorrow@utk.edu
Abstract: The purpose of this presentation is to describe a re-conceptualization of the evaluation process and to offer a more complete and realistic model that depicts the broader evaluator roles and responsibilities occurring throughout the stages of a typical evaluation. This re-conceptualized model offers a conceptual framework that encompasses and highlights the many evaluator roles a typical evaluation establishes, as well as the associated evaluator competencies required. In this model, an evaluation is divided into three phases (pre, during, and post evaluation). The sequence of key evaluation events is reflected as nine processes distributed across the three phases, plus one additional cross-cutting process applicable to all phases. We will describe these ten routines and their associated roles in detail and present an example of how the model can be applied to an evaluation project.
