| Session Title: The Essence of Evaluation Capacity Building: Conceptualization, Application, and Measurement |
| Multipaper Session 304 to be held in Centennial Section A on Thursday, Nov 6, 1:40 PM to 3:10 PM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG |
| Chair(s): |
| Yolanda Suarez-Balcazar, University of Illinois Chicago, ysuarez@uic.edu |
| Discussant(s): |
| Abraham Wandersman, University of South Carolina, wandersman@sc.edu |
| Abstract: Evaluation capacity building has become an important process for organizations responding to demands for accountability from stakeholders. Although there is a large volume of literature on capacity building, only a few studies have experimentally examined the conceptualization and measurement of evaluation capacity. This multipaper panel will address these issues. The first paper, by Maras et al., describes a participatory approach to building evaluation capacity with a school district and addresses implications for informing policies, programs, and practices. The second paper, by Hunter et al., describes a quasi-experimental study of evaluation capacity building in two substance abuse prevention coalitions. The third paper, by Suarez et al., addresses the conceptualization of capacity building with community organizations; the authors propose a model of key factors that affect evaluation at the organizational level. Finally, Taylor-Ritzler and colleagues present the experimental validation of the model. All presenters will discuss implications for research and practice. |
| No Data Left Behind: Defining, Building, and Measuring Evaluation Capacity in Schools |
| Melissa A Maras, Yale University, melissa.maras@yale.edu |
| Paul Flaspohler, Miami University of Ohio, flaspopd@muohio.edu |
| Schools are increasingly held accountable for providing quality curricula and services that address academic and non-academic barriers to learning. They are collecting rich data that can be used for ongoing school improvement efforts focused on demonstrating outcomes. Unfortunately, schools often lack the capacity to use data to inform and evaluate their policies, programs, and practices. Building evaluation capacity in schools has been identified as one way that schools can meet accountability demands. Evaluation capacity is a complicated construct that has not been clearly defined; however, innovative approaches for building and measuring capacity in schools are emerging. The purpose of this paper is to explore definitions of evaluation capacity and describe a participatory approach to building evaluation capacity. Results from semi-structured participant interviews will be presented to further elucidate dimensions of evaluation capacity. Next steps for research and practice in the area of evaluation capacity in school contexts will be discussed. |
| Conducting Evaluation Capacity Building: Lessons Learned from a GTO Demonstration |
| Sarah B Hunter, RAND Corporation, shunter@rand.org |
| Patricia Ebener, RAND Corporation, pateb@rand.org |
| Matthew Chinman, RAND Corporation, chinman@rand.org |
| The Getting To Outcomes Demonstration project, a quasi-experimental study of evaluation capacity building, was conducted in two substance abuse prevention community coalitions. Six programs received manuals, training, and technical assistance (TA) over two years. TA was assessed with logs of the mode, amount, and type of TA delivered and with focus groups and interviews with participating program staff. These data indicated the evaluation capacity areas in which the prevention coalitions received the most assistance and the types of TA activities undertaken. Examples of process and outcome evaluation capacity building will be presented. Challenges the coalition staff reported at the end of the demonstration (e.g., how to sustain gains in evaluation capacity) are discussed. The demonstration suggests several lessons learned on ways to improve evaluation capacity (e.g., TA providers ought to motivate, troubleshoot, and provide structure; assess evaluation needs early on; continuously document progress) and sustain it after TA has ended. |
| Evaluation Capacity Building: An Analysis of Individual, Organizational, and Contextual Variables |
| Yolanda Suarez-Balcazar, University of Illinois Chicago, ysuarez@uic.edu |
| Tina Taylor Ritzler, University of Illinois Chicago, tritzler@uic.edu |
| Edurne Garcia, University of Illinois Chicago, edurne21@yahoo.com |
| The purpose of this presentation is to discuss an interactive model of evaluation called the Evaluation Capacity Building Contextual Model. Based on our collective work with diverse community-based organizations (CBOs), and on reviews of the evaluation, cultural competence, and capacity building literatures, we have identified a number of organizational and individual factors that facilitate optimal evaluation capacity building. These factors can contribute to or detract from efforts to institutionalize and mainstream evaluation practices and to use evaluation findings within an organization. In addition, we discuss the role of contextual and cultural factors of the organization and the community, which can facilitate or impede capacity building for evaluation. Evaluation capacity building is an important process for community organizations experiencing pressure for accountability from various stakeholders. In this presentation we will discuss implications for future research and practice. |
| Validation of an Evaluation Capacity Building Conceptual Model |
| Tina Taylor Ritzler, University of Illinois Chicago, tritzler@uic.edu |
| Yolanda Suarez-Balcazar, University of Illinois Chicago, ysuarez@uic.edu |
| Edurne Garcia, University of Illinois Chicago, edurne21@yahoo.com |
| Most of the literature on evaluation capacity building has attended to process issues in building evaluation capacity (how to do it). Very little attention has been paid to issues of measurement (how to assess it). In terms of measurement, there are no published examples of validated instruments or systems for assessing evaluation capacity. The few published articles that have addressed measurement have reported only on the evaluation products agencies generate (e.g., reports to funders), agencies' satisfaction with training, and/or the evaluator's report that capacity was built. In this presentation, we will share the results of a validation study of our multi-factor, multi-method system for measuring evaluation capacity. Specifically, we will present results related to assessing the individual, organizational, cultural, and contextual factors that serve as a critical infrastructure for evaluation capacity, as well as the evaluation capacity building outcomes of use, mainstreaming, and institutionalization of evaluation practices. |