Evaluation 2009



Session Title: Evaluation Capacity Building Strategies to Promote Organizational Success
Multipaper Session 848 to be held in Sebastian Section L3 on Saturday, Nov 14, 1:40 PM to 3:10 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Duane House, Centers for Disease Control and Prevention, lhouse1@cdc.gov
Discussant(s):
Abraham Wandersman, University of South Carolina, wanderah@gwm.sc.edu
David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
Abstract: Growing demands for public accountability have increased the need for organizations to monitor and evaluate their own activities. Evaluation capacity building (ECB) has been recognized as a process for enabling organizations and agencies to develop the mechanisms and structures that facilitate evaluation to meet organizational goals and accountability requirements. ECB has been conceptually defined as "a context-dependent, intentional action system of guided processes and practices for bringing about and sustaining a state of affairs in which quality program evaluation and its appropriate uses are ordinary and ongoing practices within and/or between one or more organizations/programs/sites" (Stockdill, Baizerman, & Compton, 2002, p. 8). This session will demonstrate three strategies for building evaluation capacity in different contexts. Although each strategy draws from a different framework, all share common ground in empowerment evaluation principles.
Building General and Innovation-Specific Capacities for Evidence-based Prevention Programs in Schools
Paul Flaspohler, Miami University, flaspopd@muohio.edu
Kate Keller, Health Foundation of Greater Cincinnati, kkeller@healthfoundation.org
Dawna-Cricket-Martita Meehan, Miami University, meehandc@muohio.edu
Schools are in a unique position to significantly impact the health and well-being of youth through evidence-based prevention programs and services. Given the well-documented problems in introducing new ideas to schools and sustaining innovative practices, it is critical that attention be given to understanding barriers and facilitators of the adoption and implementation of evidence-based practices (Flaspohler et al., 2006). Recently, increased attention has been focused on understanding and assessing readiness and capacity to adopt and implement research-based innovations (i.e., EBPs). Research on implementation and readiness for change suggests that inattention to the forces and factors that affect adoption seriously jeopardizes any project seeking to introduce a new idea into an organization. Assessing readiness and capacity therefore becomes a crucial planning and surveillance activity. This presentation provides an overview of systematic efforts to assess and build readiness and evaluation capacity for evidence-based prevention programs and services in schools.
Connecting the Dots: Building Evaluation Capacity in Schools
Melissa Maras, University of Missouri, marasme@missouri.edu
Schools have become the context of choice for delivering a variety of interventions, programs, and services. These activities are diverse, ranging from school-wide behavior support systems to classroom academic curricula to individual mental health interventions. Schools demonstrate varying levels of competency in planning, implementing, and evaluating these activities within buildings and across districts, but the activities are often disconnected from one another, from evaluation processes, and from rich data resources that could support them. Using a case study example, this presentation will focus on the unique challenges and opportunities of building evaluation capacity within the school context. It will highlight the benefits of layering evaluation capacity building efforts around existing activities and resources, helping schools connect the dots between their various initiatives and build general evaluation capacity. Results supporting the positive impact of the unique approach used in this case example will be presented.
Training of Technical Assistance Providers on Evaluation (TOTAP-E) Capacity Building
Catherine A Lesesne, Centers for Disease Control and Prevention, clesesne@cdc.gov
Christine Zahniser, GEARS Inc, czahniser@cdc.gov
Jennifer L Duffy, University of South Carolina, jenduffy@sc.edu
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Mary Martha Wilson, Healthy Teen Network, marymartha@healthyteennetwork.org
Gina Desiderio, Healthy Teen Network, gina@healthyteennetwork.org
There is often a need to train paraprofessionals to provide basic training and technical assistance (T/TA) on evaluation. This presentation will describe a two-day training curriculum developed for and used with T/TA providers and evaluators to further develop core and technical skills for evaluation capacity building (ECB) with community partners. The training was informed by published and unpublished evaluation capacity approaches and by learning theory, but required substantial new development. For example, new tools were created for conducting individual and team self-assessments of ECB core and technical skills; assessing the evaluation capacity of a local partner in order to create a T/TA plan; and writing evaluation summary narratives that motivate partners because they are self-derived. Hands-on learning labs on selected topics were also developed. The pre-post evaluation of TOTAP-E demonstrated significant gains in confidence providing ECB (Mpre = 3.52, SE = 0.61; Mpost = 4.09, SE = 0.49; t(36) = -7.83, p < .01). The training and its evaluation will be fully described.

