Session Title: Building Evaluation Capacity Within Organizations
Multipaper Session 121 to be held in Calvert Ballroom Salon E on Wednesday, November 7, from 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Mary Arnold, Oregon State University, mary.arnold@oregonstate.edu
Reporting Extension Program Impacts: Slaying the Dragon of Resistance
Presenter(s):
Nancy Franz, Virginia Cooperative Extension, nfranz@vt.edu
Abstract: Virginia Cooperative Extension has responded to today's environment of heightened accountability by improving the organization's program impact reporting. Successful strategies for enhancing the quality and quantity of program impact reports include hiring new staff, training faculty and administrators, providing individual and small-group technical assistance, developing reporting tools, and tying impact reporting to performance evaluation and recognition. This holistic approach has resulted in enhanced reporting and use of program impacts as well as improved program design and evaluation. In this session, learn how Virginia Cooperative Extension put this approach into place, what the organization learned, and how faculty perceive the approach.
Do Workshops Work for Building Evaluation Capacity Among Cooperative Extension Service Faculty?
Presenter(s):
Kathleen Kelsey, Oklahoma State University, kathleen.kelsey@okstate.edu
Abstract: Institutional accountability is essential for land-grant institutions. Program evaluation has long been part of the land-grant university's toolbox for ensuring accountability for programs delivered through the Cooperative Extension Service (CES). However, many CES faculty lack the evaluation skills needed to conduct and report evaluation research. To fill this gap and build evaluation capacity, many CES faculty engage in self-study and workshops. It has been reported that self-study and workshops work best as supplemental learning and cannot substitute for in-depth study of program evaluation. Self-efficacy has been used as a variable to gauge an individual's intention toward action. Thus, building evaluation capacity within the land-grant university depends on robust training, continuing education, and high self-efficacy toward applying lessons learned. Using survey research methods, this paper will report on a study that explored the impact of an evaluation workshop on building evaluation skills and self-efficacy among CES faculty.
A Framework for Evaluating 4-H National Initiatives
Presenter(s):
Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
Abstract: A framework is presented for evaluating the 4-H National Initiatives in Science, Engineering, and Technology (SET), Youth in Governance (YIG), and Healthy Living (HL). Four components are addressed: 1) description of the organizational context, or assets for program planning and reporting; 2) discussion of research on critical youth outcomes and program quality standards in each initiative area; 3) recommendations for targeted indicators of youth outcomes and program quality; and 4) implications of evaluating the Initiatives for Extension organizational change, program planning patterns, partnerships, data collection, and marketing. Finally, the framework offers recommendations on the use of management-oriented, objectives-oriented, and participant-oriented evaluation to guide National Initiative teams. Following the presentation, there will be opportunities for discussion.