Session Title: Building Evaluation Capacity in Extension Systems
Panel Session 326 to be held in Calvert Ballroom Salon E on Thursday, November 8, 9:35 AM to 11:05 AM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
William Trochim, Cornell University, wmt1@cornell.edu
Discussant(s):
Michael Duttweiler, Cornell University, mwd1@cornell.edu
Donald Tobias, Cornell University, djt3@cornell.edu
Abstract: Systems approaches to designing evaluation systems for large, multi-program organizations like extension require a balance of standardization and customization. At Cornell Cooperative Extension in New York State, a new approach to systems evaluation has been developed and is being implemented in seven counties. This approach encompasses several key concepts: stakeholder incentive analysis; program life cycles and their relation to evaluation methods; program pathway models, their interconnections, and their relation to the research evidence base; and a decentralized, bottom-up, networked approach to evaluation capacity building. The approach incorporates several new tools: an evaluation planning protocol, a standardized evaluation planning format, and a web-based system for managing information for evaluation system planning. This panel presents the systems evaluation approach, discusses management and implementation challenges, and describes an evaluation special project that grew out of this work and uses a switching replications randomized experimental design to evaluate a mature, well-established nutrition education program.
Protocols, Plans and Networks: The Nuts-and-Bolts of Systems Evaluation
William Trochim, Cornell University, wmt1@cornell.edu
This presentation describes a protocol being used in seven extension associations throughout New York State to help programs develop evaluation capacity, program models, and evaluation plans. The protocol is a series of steps implemented over approximately nine months that includes: describing relevant stakeholders and their motivations and incentives for evaluation; developing a program theory in the form of a pathway logic model that articulates assumptions, contextual issues, and inputs and describes expected causal connections among activities, outputs, and outcomes; classifying programs along a program life cycle that signals the types and level of evaluation that would be appropriate; developing an evaluation plan for each program; building evaluation capacity through the development of an evaluation network; and using a web-based Netway (networked pathway) system for entering and managing all information relevant to program models, evaluation plans, and the relevant research evidence base.
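As a hypothetical illustration of how a program pathway model and its life-cycle classification might be represented as data, the sketch below defines a minimal structure for activities, outputs, outcomes, and their expected causal connections. The class, field names, and example program fragment are assumptions made for illustration only; the actual Netway schema is not described in this abstract.

# Minimal sketch of a program pathway model and life-cycle stage as data
# (hypothetical structure; not the Netway system's actual schema).
from dataclasses import dataclass, field

@dataclass
class PathwayModel:
    program: str
    lifecycle_stage: str                       # e.g. "new", "developing", "mature"
    assumptions: list = field(default_factory=list)
    context: list = field(default_factory=list)
    inputs: list = field(default_factory=list)
    nodes: dict = field(default_factory=dict)  # element name -> "activity" | "output" | "outcome"
    links: list = field(default_factory=list)  # (source, target) expected causal connections

    def add_link(self, source, target):
        """Record an expected causal connection between two model elements."""
        for name in (source, target):
            if name not in self.nodes:
                raise ValueError(f"unknown element: {name}")
        self.links.append((source, target))

# Example: a fragment of a hypothetical nutrition education program model.
model = PathwayModel(program="Nutrition Education", lifecycle_stage="mature")
model.nodes.update({"workshops": "activity",
                    "participants trained": "output",
                    "improved dietary behavior": "outcome"})
model.add_link("workshops", "participants trained")
model.add_link("participants trained", "improved dietary behavior")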
Motivation and Management in Evaluation
Cath Kane, Cornell University, cmk42@cornell.edu
This presentation addresses several key themes surrounding evaluation incentives and management, focusing on year one of an Evaluation Planning Partnership project with Cornell University Cooperative Extension New York City (CUCE NYC). Understanding the motivations and incentives of staff and participants is a critical component of evaluation planning. Issues include: 1) funding: managers increasingly view evaluation systems as a matter of survival; 2) parallel mandates: incentive analysis can identify synergies with outside mandates that can be used to improve evaluation implementation and quality; and 3) staff participation: identifying internal staff incentives can create unique opportunities for evaluation design. Several management strategies will be reviewed: the development of an effective Memorandum of Understanding; the use of logic models and evaluation plans; the clarification of the merits of descriptive demographics versus outcome measurement; and the implementation of systems evaluation in a dynamic environment. Examples of these issues are provided from real-world project contexts.
Incorporating Experimental Design into Extension Evaluation: The Switching Replications Waiting List Design
Sarah Hertzog, Cornell University, smh77@cornell.edu
For extension programs that are relatively mature (implemented consistently, with well-established, high-quality outcome measurement in place), it is useful to undertake evaluations that demonstrate effectiveness with controlled comparative designs. A switching replications randomized experimental design is appropriate in waiting-list situations where there are more eligible participants than can receive the program at one time. After giving informed consent, participants are randomly assigned to early or later program sessions. All participants are measured at three waves: prior to the early session, between sessions, and after the later session. This presentation describes implementation and data analysis challenges posed by such a design and considers the advantages and disadvantages of its use in extension evaluation contexts.
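To illustrate the logic of the design, the sketch below simulates random assignment to early or later sessions and the three measurement waves. All names, effect sizes, and numbers are hypothetical and assumed for illustration; this is not the presenters' implementation or data.

# Illustrative sketch of a switching replications waiting-list design
# (hypothetical names and numbers; not the presenters' implementation).
import random

random.seed(42)

def assign_and_measure(participant_ids, effect=5.0, baseline=50.0, noise=3.0):
    """Randomize consenting participants to early or later program sessions
    and simulate the three measurement waves the design calls for."""
    ids = list(participant_ids)
    random.shuffle(ids)
    early = set(ids[:len(ids) // 2])

    records = []
    for pid in ids:
        treated_early = pid in early
        wave1 = baseline + random.gauss(0, noise)                                    # before the early session
        wave2 = wave1 + (effect if treated_early else 0.0) + random.gauss(0, noise)  # between sessions
        wave3 = wave2 + (0.0 if treated_early else effect) + random.gauss(0, noise)  # after the later session
        records.append({"id": pid, "group": "early" if treated_early else "later",
                        "wave1": wave1, "wave2": wave2, "wave3": wave3})
    return records

def mean_gain(records, group, start, end):
    """Average change between two waves for one randomized group."""
    gains = [r[end] - r[start] for r in records if r["group"] == group]
    return sum(gains) / len(gains)

data = assign_and_measure(range(40))
# Waves 1-2: the early group receives the program while the later group waits.
print("early gain, waves 1-2:", round(mean_gain(data, "early", "wave1", "wave2"), 2))
print("later gain, waves 1-2:", round(mean_gain(data, "later", "wave1", "wave2"), 2))
# Waves 2-3: the later group receives the program, replicating the effect.
print("early gain, waves 2-3:", round(mean_gain(data, "early", "wave2", "wave3"), 2))
print("later gain, waves 2-3:", round(mean_gain(data, "later", "wave2", "wave3"), 2))

Under this structure, the wave 1-to-2 contrast between the treated early group and the still-waiting later group provides the controlled comparison, and the wave 2-to-3 contrast replicates it once the later group has received the program.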