Session Title: A Systems Approach to Building and Assessing Evaluation Plan Quality
Panel Session 381 to be held in PRESIDIO B on Thursday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Systems in Evaluation TIG
Chair(s):
Jennifer Urban, Montclair State University, urbanj@mail.montclair.edu
Discussant(s):
William M Trochim, Cornell University, wmt1@cornell.edu
Abstract: The Cornell Office for Research on Evaluation (CORE) uses a systems-based approach to program evaluation and planning that is operationalized through the Systems Evaluation Protocol (SEP). The SEP has been developed, tested, and refined through “Evaluation Partnerships” established with forty education programs in two contexts: Cornell Cooperative Extension and Outreach Offices in NSF Materials Research Science and Engineering Centers. Drawing on the SEP, evaluation theory, and experience with these Partnerships, CORE’s concept of evaluation plan quality emphasizes the quality of the program model underlying the plan; how well an evaluation “fits” the program; and the “internal alignment” of the evaluation plan. The panel presents our definition of evaluation plan quality, the tools we have developed to begin assessing quality, how we operationalize and observe the development of quality in the Evaluation Partnerships, and the education research on inquiry-based approaches to learning that are embedded in the Evaluation Partnerships.
The Systems Evaluation Protocol and Evaluation Plan Quality: Introduction and Definition
Monica Hargraves, Cornell University, mjh51@cornell.edu
Margaret Johnson, Cornell University, maj35@cornell.edu
The Systems Evaluation Protocol (SEP) brings a particular mix of systems thinking, complexity theory, evolutionary theory and natural selection, developmental theory, and evaluation theory to the process of developing program models and evaluation plans. These shape key steps in the Protocol and yield essential elements in the development of a high-quality program model and evaluation plan. The SEP’s definition of evaluation plan quality emphasizes:
• consistency with a high-quality program model (grounded in program knowledge, stakeholder perspectives, program boundaries, and underlying program theory);
• how well the evaluation questions and evaluation elements “fit” the program (consistent with program context and lifecycle stage, internal and external stakeholder priorities, and priorities yielded by the program theory itself); and
• the “internal alignment” of the evaluation plan (the extent to which the measurement, sampling, design, and analysis components of the plan support each other and the stated evaluation purpose and evaluation questions).
Capturing Quality: Rubrics for Logic Models and Evaluation Plans
Margaret Johnson, Cornell University, maj35@cornell.edu
Wanda Casillas, Cornell University, wdc23@cornell.edu
A notable challenge in evaluation, and particularly in systems evaluation, is finding concrete ways to capture and assess quality in program logic models and evaluation plans. This presentation will describe how evaluation quality is measured in the ongoing evaluation of the Evaluation Partnership (EP), a multi-year, systems-based approach to capacity building. The development of logic model and evaluation plan rubrics for assessing quality has been funded by the National Science Foundation as part of a research grant. One of the primary aims of the research is to assess whether the SEP is associated with enhanced logic model and evaluation plan quality. This presentation focuses on how three aspects of quality (richness of the program model, fitness of the evaluation questions, and alignment of the plan’s evidence framework) are operationalized in our rubrics for logic models and evaluation plans. Ways of capturing the value added by a systems-based approach to capacity building will also be explored.
Inquiry in Evaluation: Connecting Capacity Building to Education Research
Jane Earle, Cornell University, jce6@cornell.edu
Thomas Archibald, Cornell University, tga4@cornell.edu
In building evaluation capacity through the Systems Evaluation Protocol (SEP), a key goal is to teach people how to ask questions. This includes questions about the program as expressed in formal Evaluation Questions, but also foundational questions about program boundaries, stakeholders, and program and evaluation lifecycles that bring people to a deeper understanding of their program and the systems in which it is embedded. In the field of education, “inquiry” refers to a pedagogical method in which students are given frequent opportunities to practice posing questions and to strategize methods for investigating possible answers. A significant body of research addresses how best to facilitate the inquiry process. This presentation will explore the synergy between work on inquiry-based learning and the SEP’s approach to evaluation capacity building. The goal is to establish best practices for helping program implementers become better questioners, investigators, and evaluators.
Early Indications of Process Use Outcomes Associated With Evaluation Planning Through the Systems Evaluation Protocol
Thomas Archibald, Cornell University, tga4@cornell.edu
Jane Earle, Cornell University, jce6@cornell.edu
Monica Hargraves, Cornell University, mjh51@cornell.edu
The Systems Evaluation Protocol (SEP) lays out specific, systems-based steps that internal program evaluators complete as they define and model their programs and develop evaluation plans. Although high-quality evaluation plans are a primary goal, we have found that participants experience benefits they consider valuable even before reaching the evaluation planning step. The SEP steps that appear most critical come early in the process: stakeholder analysis, program review, and program boundary analysis. These outcomes, or “‘Aha!’ moments,” are valuable in their own right, independent of their potential role in assuring high-quality evaluation plans, and offer a novel example of process use. This presentation uses qualitative data to characterize and document “‘Aha!’ moments” among SEP participants. Patton’s call to consider process use as a sensitizing concept (New Directions for Evaluation, volume 116, 2007) offers a promising framework for understanding and contributing to these outcomes.
