Session Title: Mirror, Mirror on the Wall: Strategies to Enhance Evaluation Use and Influence Across Stakeholders
Multipaper Session 415 to be held in Avila A on Thursday, Nov 3, 2:50 PM to 4:20 PM
Sponsored by the Evaluation Use TIG and the Government Evaluation TIG
Chair(s):
Susan Tucker,  Evaluation & Development Associates LLC, sutucker1@mac.com
Discussant(s):
Susan Tucker,  Evaluation & Development Associates LLC, sutucker1@mac.com
Applying Social Media and On-Line Engagement for Greater Evaluation Use and Influence at The World Bank's Independent Evaluation Group
Presenter(s):
Bahar Salimova, The World Bank, bsalimova@worldbank.org
Nik Harvey, The World Bank, nharvey@worldbank.org
Alex McKenzie, The World Bank, amckenzie@worldbank.org
Abstract: This paper will take stock of the systematic application of social media and on-line engagement approaches during the past year by the World Bank's Independent Evaluation Group (IEG) to enhance communication and knowledge-sharing practices that seek greater use of findings, recommendations, and lessons from its evaluations. The past twelve months have been a period of learning and experimentation: not only are social media channels and engagement approaches fairly new, they are also not often applied in support of the evaluation discipline, or in a more specialized business context (as opposed to consumer markets). The paper will reflect on what worked well and what did not work as expected. The analysis will be based on data collected during recent engagements.
Differences in Stakeholder and Evaluator Views of Evaluation Use and Influence
Presenter(s):
Andy Thompson, Carleton University, arocznik@connect.carleton.ca
Shevaun Nadin, Carleton University, snadin@connect.carleton.ca
Bernadette Campbell, Carleton University, bernadette_campbell@carleton.ca
Abstract: Evaluators want to see the results of their work influence decisions about programs and policies; however, too often the knowledge generated by evaluations has little influence on the day-to-day operations of a program. Although efforts to understand the reasons for lack of use dominate the evaluation literature, these reports are based mainly on evaluators' impressions of stakeholder behaviour. Missing from this literature is research that documents stakeholders' own accounts of the reasons for use and non-use of evaluations. To address this gap in the literature, interviews were conducted with both evaluators and program officers of National Science Foundation (NSF) sponsored programs. Comparison of evaluator and stakeholder responses revealed similarities as well as differences in their assessments of the reasons evaluation findings were used or not used. Implications for practice and future research are also discussed.
Using Evaluation During Program Implementation: The Data-to-Action Framework
Presenter(s):
Ronda Zakocs, Independent Consultant, rzakocs@bu.edu
Jessica Hill, Centers for Disease Control and Prevention, hss9@cdc.gov
Pamela Brown, Centers for Disease Control and Prevention, pbrown8@cdc.gov
Abstract: Although utility is one standard of high-quality evaluations, the challenge remains how evaluators can generate actionable data useful to stakeholders. The Data-to-Action Framework aims to produce relevant information during program implementation that enables decision-makers to gain insights into the implementation process, make program improvements, and communicate progress to stakeholders. The Framework was developed and pilot-tested with DELTA PREP, a three-year national initiative seeking to build the capacity of 19 statewide coalitions to prevent domestic violence. The Framework outlines a step-by-step process, implemented collaboratively by evaluators and program decision-makers, to identify implementation questions and data needs, efficiently collect data, quickly generate data-synthesis memos, discuss implications, make usage decisions, and take action. The presentation will describe how DELTA PREP implemented the Framework; demonstrate various ways decision-makers used data from 18 synthesis memos generated over a two-year period; and share the benefits and challenges of using the Framework.
Dueling Federal Funder Preferences and Program Evaluation Needs: Challenges with Conflicting Interests and Evaluator Roles
Presenter(s):
Holly Downs, University of North Carolina, Greensboro, hadowns@uncg.edu
Abstract: A growing tension exists between federal funders' preferences for an 'external' evaluator and the evaluation needs of a program. This paper will explore this dilemma as encountered when a federally funded undergraduate science and mathematics program at a large university shifted from an external to an internal evaluation team (i.e., a team at the university but outside the program's departments). Although the funder's RFP encouraged an external evaluator, the program coordinators, who wanted better onsite data collection to improve use and Institutional Review Board coordination, trumped this preference post-award by shifting to an internal team. These dueling dynamics can open a chasm between what funding agencies value and encourage and the needs of the actual program. The goal of this paper is twofold: to discuss the impact of federal funders' preferences for external evaluators on this program and to consider the implications of these issues for the field of evaluation.