Evaluation 2009



Session Title: Building and Evaluating a System Based Approach to Evaluation Capacity Building
Panel Session 395 to be held in Suwannee 12 on Thursday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
William Trochim, Cornell University, wmt1@cornell.edu
Discussant(s):
William Trochim, Cornell University, wmt1@cornell.edu
Abstract: Systems approaches to evaluation capacity building are essential for developing effective evaluation systems. This session describes a multi-year NSF-supported project designed to develop a comprehensive approach to evaluation planning, implementation, and utilization based on systems approaches and methods. We present the idea of a systems "evaluation partnership" (EP), the social and organizational network necessary to sustain such an effort, which emphasizes building consensus, using written agreements, and delineating appropriate roles and structures to support evaluation capacity building. At the heart of the EP are the systems evaluation "protocol," a well-designed sequence of steps that any organization can follow to accomplish a high-quality evaluation, and the integrated "cyberinfrastructure," a dynamic web-based system for accomplishing the work and encouraging networking. This session describes the EP, the approaches used to evaluate it, and the results to date, and sketches plans for future development.
Evaluation Partnerships
Monica Hargraves, Cornell University, mjh51@cornell.edu
Thomas Archibald, Cornell University, tga4@cornell.edu
The Evaluation Partnership model provides a framework within which evaluation facilitators collaborate with program staff to share their respective expertise. The result is high-quality evaluation plans that are well adapted to the local organizational context and to program-specific needs and characteristics. Communication and transparency are important in facilitating creative thinking about programs, the development of new perspectives, and organizational learning. The most common obstacles to internal evaluation include lack of motivation, time, confidence, and expertise. The Evaluation Partnership approach is designed to mitigate these by allowing for specialization of roles: program staff draw on their program expertise, and evaluation facilitators provide evaluation expertise and tools. Organizational evaluation capacity grows as the work proceeds. An additional key contribution is linking the evaluation work to other organizational needs, which may include proposal development, strategic planning, and overall reporting mandates. This deliberate contextualization is important in making evaluation sustainable within organizations.
Evaluation Planning Using the Systems Evaluation Protocol
Jane Earl, Cornell University, jce6@cornell.edu
Thomas Archibald, Cornell University, tga4@cornell.edu
The Systems Evaluation Protocol (SEP) uses a systems perspective as a framework for developing evaluation capacity, enhancing evaluation quality, and ultimately helping educators improve programs. Systems evaluation is an approach to conducting program evaluation that considers the complex factors inherent within a system, including integration across organizational levels and structures (nested systems) that are dynamic and involve multiple stakeholders (perspectives). Systems evaluation provides both a conceptual framework for thinking about evaluation systems and a set of specific methods and tools that enhance our ability to accomplish high-quality evaluation. The SEP divides evaluation into three phases: Planning, Implementation, and Utilization. This presentation focuses on the Evaluation Planning phase and describes a series of steps that build high-quality, comprehensive evaluation plans. The individual steps are the essential elements; the order in which a team follows them is flexible. Examples of the process will be given.
The Cyber Infrastructure
Claire Hebbard, Cornell University, cer17@cornell.edu
Monica Hargraves, Cornell University, mjh51@cornell.edu
A distinct yet integrated aspect of this evaluation research project has been the development and testing of a cyberinfrastructure called the "Netway". The Netway provides a dynamic web-based system for accomplishing the work of evaluation and encouraging networking. Specifically, the Netway has features that support logic model development, pathway model development, measure identification, and evaluation planning. Moreover, it incorporates dynamic search functions that allow users to immediately see outcomes in other programs that match outcomes they are identifying in their own programs, thereby facilitating mutual learning. User-directed search functions support program- and evaluation-focused networking efforts that add to this environment of mutual learning and innovation. Evaluation measures can be entered into the Netway and linked to specific program outcomes, further enhancing the quality of evaluation planning and supporting evaluation research. This presentation will describe the Netway, how it is used by program staff and evaluators, and future Netway development plans.
Evaluation of Evaluation Capacity Building
Margaret Johnson, Cornell University, maj35@cornell.edu
Claire Hebbard, Cornell University, cer17@cornell.edu
This presentation will describe the emerging methodology for evaluating the Evaluation Partnership (EP), a multi-year, systems-based approach to evaluation capacity building in organizations. The project is currently in its fourth year of development. Using self-report surveys to assess evaluation capacity, rubrics for the quality of participant work products such as logic models and evaluation plans, and data on the use of networking tools including the Netway cyberinfrastructure, the evaluation of the EP examines its impact on participants' understanding of their own programs, their knowledge of basic evaluation concepts, the quality of the logic models and evaluation plans they develop, and their level of engagement in the evaluation network created by the Partnership. The evaluation uses a quasi-experimental, matched pre-test/post-test design with multiple measures and two treatment groups.

