
Session Title: Evaluation Capacity Building Tools for the Practitioner
Multipaper Session 113 to be held in Capistrano A on Wednesday, Nov 2, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Kimberly Leeks, Centers for Disease Control and Prevention, kleeks@cdc.gov
Benchmarking Capacity Building through Levels of Use
Presenter(s):
Kimberly Cowley, Edvantia, kim.cowley@edvantia.org
Kimberly Good, Edvantia, kimberly.good@edvantia.org
Sharon Harsh, Edvantia, sharon.harsh@edvantia.org
Abstract: Benchmarking began more than 20 years ago as a business and industry management tool (Stauffer, 2003). Unlike industry benchmarking, which compares performance and enablers across departments or organizations, the benchmarking process developed by the Appalachia Regional Comprehensive Center (ARCC) at Edvantia compares an individual's performance against a research-based theory describing the stages or levels of capacity development. The Concerns-Based Adoption Model (CBAM) Levels of Use (LoU) (Hall & Hord, 2005, 2010) is used in this benchmarking process to determine changes in capacity from the beginning to the end of an initiative, and a gap analysis, conducted through a calibration process, determines how closely the capacity meets the organization's vision of the desired performance level. The technical assistance initiative, infused with customized ARCC support, is viewed as the enabler and catalyst for capacity development.
The Value of Using Self-Assessment Tools to Increase Capacity for Evaluation
Presenter(s):
Rashell Bowerman, Western Michigan University, rashell.l.bowerman@wmich.edu
June Gothberg, Western Michigan University, june.gothberg@wmich.edu
Paula Kohler, Western Michigan University, paula.kohler@wmich.edu
Jennifer Coyle, Western Michigan University, jennifer.coyle@wmich.edu
Abstract: Building capacity for evaluation is often difficult when working with people who hold numerous organizational roles, and in many situations those called upon to conduct evaluations have little formal training. Upon examining yearly plans (n = 220) created at strategic planning institutes using a logic model process, evaluators found little correlation between participant-created goals, activities, outputs, outcomes, and indicators. This lack of correlation produces shallow fidelity of evaluation, so evaluation results cannot answer whether teams met their goals and intended outcomes. This session presents results from implementing a self-assessment tool that measures both the fidelity of participants' yearly plans and the evaluations of those plans. The self-assessment tool is used in a two-step process: evaluating plan integrity through inter-rater reliability, and measuring improvement in subsequent plan development. Preliminary data support the use of the self-assessment process to increase capacity for evaluation.
Evaluating Capacity Development for Commissioners of Evaluations in South Africa
Presenter(s):
Rita Sonko-Najjemba, Pact Inc, rsonko@pactworld.org
Ana Coghlan, Pact Inc, acoghlan@pactworld.org
Abstract: African civil society organisations (CSOs) face numerous capacity and performance challenges related to insufficient skills in designing, planning, and managing evaluations. Lack of access to training often leaves CSOs unable to improve their program designs or their ability to track key results. Pact South Africa is a not-for-profit implementing one of the largest USAID HIV/AIDS programs in Africa. Pact provides grantees with short-term training and mentorship on M&E, and most recently a new focus on developing basic evaluation capacity has been initiated through a five-day training intervention. The program aims to enhance evaluation design, planning, and management skills for commissioners of evaluations among CSOs. Evaluation methods targeting the 15 participating organisations include an online survey, interviews, and a review of program management documents to discern the extent to which the program is influencing change in practice within targeted CSOs. Lessons learnt include that, in such settings, interventions like these are essential to improving organisational performance and learning and may thus contribute to sustainability.