Evaluation 2011


Session Title: The Value of Organizational Modeling of Evaluation Protocols and Standards From the State and National Level in Extension
Multipaper Session 902 to be held in Pacific B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Karen Ballard,  University of Arkansas, kballard@uaex.edu
Understanding the Practice of Evaluation in Extension: Test of a Causal Model
Presenter(s):
Alexa Lamm, University of Florida, alamm@ufl.edu
Glenn D Israel, University of Florida, gdisrael@ufl.edu
Abstract: Extension funding comes from local, state, and federal dollars; therefore, the primary driver for evaluation is accountability for public funds. Historically, evaluation has been treated as a necessary component of Extension rather than a priority. As public budgets are cut, the need for Extension to demonstrate the public value of its programs is increasing, and the ability to provide credible information depends on evaluation activities. The purpose of this research was to examine how organizational Extension evaluation structures directly and indirectly influence the evaluation behaviors of Extension professionals. Data were collected from Extension professionals in eight states to examine how their perceptions of organizational evaluation factors influenced their evaluation behaviors. The results show that changes at multiple levels can affect evaluation behavior. Extension leaders can influence evaluation practice by changing their own behavior, establishing a social culture within the system that supports evaluation, and emphasizing skill training in evaluation.
Evaluating for Value: A New Direction for Youth Program Evaluation
Presenter(s):
Mary Arnold, Oregon State University, mary.arnold@oregonstate.edu
Melissa Cater, Louisiana State University AgCenter, mcater@agcenter.lsu.edu
Abstract: Evaluating youth development programs, such as 4-H, has received considerable attention in the past 10 years. Establishing best practices for youth program evaluation, especially for the evaluation of small local programs, remains a perennial and multifaceted concern. This paper presents a brief history of youth program evaluation and concludes that many youth-serving programs lack the resources to conduct comprehensive, rigorous, experimental studies. Then, drawing on recent advances in the literature on youth program evaluation, the authors argue for greater focus on three promising areas of current practice: 1) the evaluation of program implementation and program quality; 2) building the evaluation capacity of the program staff who are often charged with conducting evaluations; and 3) engaging youth in the evaluation of programs that affect them through youth participatory evaluation.
Evaluating Impact at the Systems-Level: Implementing the First Cross-Site Evaluation Within the Children, Youth, and Families At Risk (CYFAR) Initiative
Presenter(s):
Lynne Borden, University of Arizona, bordenl@ag.arizona.edu
Christine Bracamonte Wiggs, University of Arizona, cbmonte@email.arizona.edu
Amy Schaller, University of Arizona, aschalle@email.arizona.edu
Abstract: Programs currently operate in an environment in which they must remain relevant and responsive to their target populations, funders, and policy makers. In an effort to document performance accountability, demonstrate impact, and promote sustainability, many funders require their programs to complete a common-measure (cross-site) assessment. This paper highlights the implementation of a common set of measures within the Children, Youth, and Families At Risk (CYFAR) system, funded by the National Institute of Food and Agriculture (NIFA), United States Department of Agriculture (USDA). The paper discusses the common evaluation measures used to collect aggregate-level data, the process of collecting cross-site data, and preliminary findings from the first round of data collection. It also addresses key considerations for building systems-level evaluation capacity, including how the incorporation of technology can support these efforts.
