

Session Title: Involving Stakeholders in Evaluations: Alternative Views
Multipaper Session 484 to be held in Wekiwa 5 on Friday, Nov 13, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Wes Martz, Kadant Inc, wes.martz@gmail.com
The Influence of Context and Rationale on Stakeholder Selection in Participatory Evaluation
Presenter(s):
Randi Nelson, University of Minnesota, palmfam@comcast.net
Abstract: This paper presents dissertation research on stakeholder selection in participatory evaluation, based on interviews with 17 practicing evaluators in the U.S. and Canada. Results indicated that stakeholder selection was influenced both by the evaluator's rationales for conducting participatory evaluation and by multiple aspects of program and evaluation context. Evaluators were motivated by multiple, rather than single, rationales for stakeholder participation, including pragmatic, utilization, empowerment, and transformative goals. In this study, evaluators with pragmatic rationales, including utilization, were more likely to restrict stakeholder selection to program staff and managers, while evaluators with empowerment or transformative rationales also included program beneficiaries and community members on evaluation teams. Despite these general patterns, the variability in evaluation team composition among cases with identical rationales indicates that contextual factors also influenced stakeholder selection. The paper identifies ten such context factors that interacted with rationale to influence stakeholder selection, including organizational goals and culture, evaluation resources, and stakeholder attributes.
Using Participatory Impact Pathway Approach in the Context of Water Use Management Projects in Colombia
Presenter(s):
Diana Cordoba, International Center for Tropical Agriculture, d.cordoba@cgiar.org
Boru Douthwaite, International Center for Tropical Agriculture, b.douthwaite@cgiar.org
Sophie Alvarez, International Center for Tropical Agriculture, b.sophie.alvarez@gmail.com
Abstract: Evaluating water use management projects requires a holistic approach that takes into account the vision of the actors involved as well as the context in which they operate. The Participatory Impact Pathway Approach (PIPA) starts with a participatory workshop in which different stakeholders make their program theory explicit by describing its impact pathways. In this paper we compare and contrast the use of PIPA in the evaluations of two Challenge Program on Water and Food (CPWF) projects in Colombia. In both evaluations, the initial impact pathways derived from the PIPA workshops are compared with actual outcomes. The paper argues that the use of PIPA facilitates rapid, well-organized evaluation, contributing to (a) an understanding of the contexts in which projects operate and their influence on project outcomes, and (b) the empowerment of stakeholders.
Improving Stakeholders' Capacity to Track Their Own Program Data and Create Surveys: Building Capacity by Creating Useful Tools for Stakeholders
Presenter(s):
Kelci Price, Chicago Public Schools, kprice1@cps.k12.il.us
Abstract: Stakeholders tend to have diverse needs with which evaluators may be expected to assist, but in practice evaluators seldom have the resources, whether human or fiscal, to support stakeholders across all areas. This presentation provides concrete examples of tools developed by the Chicago Public Schools' Department of Program Evaluation to help stakeholders build their capacity to track program data and monitor program performance. The discussion addresses the context that led to the need for these tools, how the tools were developed, their specific content, and how they are integrated into evaluation services in a way designed to build stakeholder and organizational capacity.
How Analogies Can Save the Day When It Comes to Explaining Difficult Statistical Concepts to Stakeholders
Presenter(s):
Pablo Olmos, Mental Health Center of Denver, antonio.olmos@mhcd.org
Kathryn DeRoche, Mental Health Center of Denver, kathryn.deroche@mhcd.org
Christopher McKinney, Mental Health Center of Denver, christopher.mckinney@mhcd.org
Abstract: This presentation describes our experiences presenting complex statistical terms to stakeholders with minimal background in statistics. We discuss the importance of relating statistical terms and concepts to ideas stakeholders already understand (such as IQ) and provide specific examples of how we have used such analogies to explain very complex concepts in statistics and measurement. We also describe the graphical approaches (schemas, diagrams, charts) and simple tables we have created to present the outcomes of our evaluation efforts, along with our experiences using those tools. Finally, we describe some of our successes, as well as our experiences with analogies that were taken too far.
Building Program Capacity for Evaluating Tuberculosis Surveillance Data Accuracy
Presenter(s):
Lakshmy Menon, Centers for Disease Control and Prevention, hua2@cdc.gov
Kai Young, Centers for Disease Control and Prevention, deq0@cdc.gov
Lori Armstrong, Centers for Disease Control and Prevention, lra0@cdc.gov
Valerie Robison, Centers for Disease Control and Prevention, vcr6@cdc.gov
Abstract: Data accuracy is essential for effective program management. The goal of this evaluation is to gain insight into the factors affecting the accuracy of tuberculosis data collection at the local and state program levels, and to build program capacity for ongoing monitoring and evaluation of data collection processes. In-depth interviews and discussions with local stakeholders (data entry, data collection, and clinical personnel) will be conducted to document data collection activities in local and state programs and the transfer of data to the national surveillance system. This step establishes a common understanding of the data collection process for all participants and provides the foundation for the evaluation. Hands-on data abstraction from primary data sources, conducted jointly by program staff and evaluators, provides opportunities for collaboration and increases buy-in. This paper will share lessons learned and demonstrate how the process engages stakeholders in evaluation and builds their capacity for ongoing data quality improvement at the local level.