Session Title: Standards of Evidence for Evaluating Extension Programs: A Changing Picture?
Panel Session 128 to be held in CROCKETT A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Nicki King, University of California, Davis, njking@ucdavis.edu
Abstract: This panel session will examine a variety of perspectives regarding what types of evidence may be best suited to demonstrate program effectiveness within the Extension system. The question is particularly challenging for Extension because of its numerous partners at the federal and state levels, as well as its varied funding streams for program development and evaluation. The three presentations will each cover a distinct viewpoint, including the use of randomized trials, the challenges presented by Extension’s broad diversity of programs, and the contributions of program logic models to decisions about acceptable evidence. The evolution of Extension at the federal and state levels, in terms of organizational structure and funding patterns, creates the need to examine how determinations are made regarding program outcomes and impacts. Implications for organizational learning and evaluation capacity building will be discussed.
Randomized Trials in Extension Evaluation: How Big a Tool in the Tool Chest?
Marc Braverman, Oregon State University, marc.braverman@oregonstate.edu
Randomized trials (RTs), or experimental designs, are often promoted as the strongest type of evaluation design for establishing causal evidence for a program’s impact, and in recent years several federal agencies (e.g., NIH, SAMHSA, Department of Education) have prioritized RTs in making funding decisions on proposals. Yet most evaluators recognize there are many situations for which RTs might not be the best choice for assessing impact, due to expense, implementation challenges, ethical considerations, or other reasons. At present, the use of RTs is fairly rare in Extension evaluations, but with the recent reorganization of Extension at the federal level and the increased emphasis on competitive funding and multi-agency collaborations, the reliance on experimental designs might grow. This presentation will examine the current use of RTs in Extension, the suitability and characteristic challenges of RTs for a variety of Extension settings, and the implications for organizational capacity building.
Cooperative Extension Programs’ Capacity to Produce Evidence of Effectiveness
Suzanne Le Menestrel, United States Department of Agriculture, slemenestrel@nifa.usda.gov
Funders of programs are increasingly looking for evidence that programs are effective in producing their intended outcomes. But not all programs are capable of producing the type of evidence needed to document their effectiveness to potential funders. This is particularly true for many widely replicated programs that have popular support but only modest evidence of their efficacy. Similarly, many small-scale interventions shown to work in highly controlled settings have limited potential for replication because of the significant resources needed to replicate the intervention with high levels of fidelity. In her presentation, Dr. Le Menestrel will discuss this conundrum in detail and offer her perspectives on achieving balance between these seemingly polar extremes.
The Role of Evidence in Building Mature Program Theory
Roger Rennekamp, Oregon State University, roger.rennekamp@oregonstate.edu
Program logic models are graphic representations of a program’s underlying theory. The linkages between inputs, outputs, and various levels of outcomes are affirmed by empirical studies, evaluation projects, experience, and intuition. The confidence one has in these linkages depends largely on the type of evidence that an individual needs to establish “truth” in his or her own mind. Some individuals are comfortable with the truths established from their own personal experience. Others need the results of empirical studies to be convinced of truth. To what degree is epistemological pluralism valued in building assurance that program models are sound and replicable? Dr. Rennekamp will offer intriguing insights into how “truths” might be considered a more relative phenomenon.