
Session Title: Research on Evaluation Influence and System Change Efforts
Multipaper Session 309 to be held in Centennial Section F on Thursday, Nov 6, 1:40 PM to 3:10 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Chris LS Coryn,  Western Michigan University,  chris.coryn@wmich.edu
Refining the Assessment: Using the Three I's Model for Documenting Systemic Change
Presenter(s):
Anna Lobosco,  New York State Developmental Disabilities Planning Council,  alobosco@ddpc.state.ny.us
Dianna Newman,  University at Albany - State University of New York,  dnewman@uamail.albany.edu
Abstract: Development of a model for evaluating programs intended to produce systems change in education and the human services has been guided by a meta-evaluation of over 100 evaluation case studies using the Three I's model. The Three I's refer to the stages of systems change: initiation, implementation, and impact. This paper describes the multiple methods and indicators that have worked at each stage and that serve to operationalize systemic change. Many examples show how four factors – program policies and procedures, infrastructure, design and delivery of services, and expected consumer outcomes/experiences – interacting with four concomitant factors – program climate/culture, capacity-building, sustainability, and leadership – can document systems change.
An Investigation of the Use of Network Analysis to Assess Evaluation Influence in Large, Multi-site NSF Evaluations
Presenter(s):
Denise L Roseland,  University of Minnesota,  rose0613@umn.edu
Boris Volkov,  University of Minnesota,  volk0057@umn.edu
Jean King,  University of Minnesota,  kingx004@umn.edu
Kelli Johnson,  University of Minnesota,  johns706@umn.edu
Frances Lawrenz,  University of Minnesota,  lawrenz@umn.edu
Stacie A Toal,  University of Minnesota,  toal0002@umn.edu
Lija Greenseid,  University of Minnesota,  gree0573@umn.edu
Abstract: This paper investigates the challenges and opportunities of using social network analysis to provide one measure of evaluation influence in large, multi-site evaluations. The research is part of a larger project studying the use and influence of evaluations of four NSF-funded programs, examining the evaluations' long-term influence on project staff, the science, technology, engineering, and mathematics (STEM) community, and the evaluation community. The paper critiques an innovative application of social network analysis to research on evaluation influence: network analysis was used to track the perceived influence of the four evaluations, their principal investigators (PIs), and related publications in the STEM education field. The paper explores the usefulness of network analysis for ascertaining the broad influence of evaluations.
External Evaluation in New Zealand Schooling: Engagement, Influence and Improvement
Presenter(s):
Ro Parsons,  Ministry of Education,  ro.parsons@paradise.net.nz
Abstract: The New Zealand Education Review Office (ERO) reviews and reports on all New Zealand schools on a regular cycle. ERO's purpose is to provide external evaluation that contributes to high-quality education while maintaining accountability functions. This study investigated how ERO's approach assisted two schools to improve and examined the effect of external evaluation over time. The findings show that ERO's approach had a differential influence in each evaluation context. A school's evaluation history, together with the complex interaction among evaluator practice, school conditions, and participants during the evaluation process, influenced how participants responded to the evaluation and how it assisted the school to improve. A tentative theory of education review is proposed that posits external evaluation as situational and its influence as socially constructed, mediated through a process of engagement between two organizations with a common purpose. This research expands our understanding of the relationship between external evaluation and school improvement.
Research on Evaluation Practice: Preliminary Findings on Use of Logic Model Approaches in Large Scale Educational Reform Projects
Presenter(s):
Rosalie T Torres,  Torres Consulting Group,  rosalie@torresconsultinggroup.com
Rodney Hopson,  Duquesne University,  hopson@duq.edu
Jill Casey,  Torres Consulting Group,  jill@torresconsultinggroup.com
Abstract: Within the field of evaluation, several tools for explicating program theory and design are in use, among them logic models, theories of action, theories of change, and tools associated with systems approaches. Virtually all major texts and practical evaluation guides include sections on logic model use, and over the last several decades the topic has been equally visible in journal articles and monographs, as well as in the annual conference and professional development sessions of the American Evaluation Association and the American Educational Research Association. This paper will review preliminary findings on the actual use of these tools from a series of National Science Foundation-funded case studies of large-scale, multi-year, multi-partner education reform projects. The paper will summarize emerging findings on the key factors influencing the development, design, content, and use of these tools, as well as the benefits of and challenges to their use. Implications for further research, and for evaluators and program staff currently using or seeking to employ these tools in their practice, will be highlighted.

