Evaluation 2008

Session Title: Evaluation Capacity Building: Tools Emerging From Practice
Multipaper Session 602 to be held in Centennial Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Government Evaluation TIG
Chair(s):
Maria Jimenez,  University of Illinois Urbana-Champaign,  mjimene2@uiuc.edu
Discussant(s):
Jennifer Martineau,  Center for Creative Leadership,  martineauj@ccl.org
Building Culturally Competent Evaluation Capacity in California's Tobacco Control Programs
Presenter(s):
Jeanette Treiber,  University of California Davis,  jtreiber@ucdavis.edu
Robin Kipke,  University of California Davis,  rakipke@ucdavis.edu
Abstract: The California Department of Public Health focuses its tobacco control resources on local smoke-free policy adoptions, for instance in multi-unit housing, outdoor areas, and at public events. Local programs use process and outcome evaluation methods to inform local campaigns and measure success. One of the greatest challenges facing tobacco control program evaluation in California is the diversity of the state's population, which renders one-size-fits-all evaluation approaches ineffective. Therefore, the UC Davis Tobacco Control Evaluation Center, which serves 100 local California tobacco control programs, has been developing tools for culturally competent evaluation that help these programs gain access to culturally specific groups, develop data collection instruments, and analyze results. This paper presents the strategies used in strengthening local organizations' evaluation capacity. The newly developed series of tools will be of use to local tobacco control programs as well as other programs providing social and health promotion services for diverse populations nationwide.
Feasibility of Obtaining Outcome Data From Informal Science Education Projects
Presenter(s):
Gary Silverstein,  Westat,  silverg1@westat.com
John Wells,  Westat,  johnwells@westat.com
Abstract: The National Science Foundation's Informal Science Education (ISE) program supports projects designed to increase public interest in, understanding of, and engagement with science, technology, engineering, and mathematics. This session will examine the process by which the ISE program has shifted its emphasis from documenting outputs to measuring outcomes. Of particular interest will be the opportunities for obtaining outcome-oriented results that program officers can use to identify promising practices. We will also focus on the challenges that the initial cohort of respondents encountered in specifying and measuring progress toward audience outcomes—including difficulty (1) articulating valid and measurable outcomes that occur after exposure to an ISE event, (2) documenting project outcomes within the grant period, and (3) developing an effective and rigorous evaluation strategy. The presentation will also describe the range of technical assistance provided during the required data collection to help projects devise and measure valid ISE outcomes.
Improved Evaluation through Enhanced Policies and Better Capacity Building
Presenter(s):
Andy Fourney,  Network for a Healthy California,  andy.fourney@cdph.ca.gov
Barbara Mknelly,  Network for a Healthy California,  barbara.mknelly@cdph.ca.gov
Sharon Sugerman,  Network for a Healthy California,  sharon.sugerman@cdph.ca.gov
Abstract: The Network for a Healthy California (Network) contracts with agencies and institutions (contractors) throughout California to provide nutrition education to Food Stamp-eligible populations. Contractors participate in an evaluation governed by policies that were created to standardize methods, increase rigor, and maximize intervention impact. These policies, while largely successful, have limitations. For example, contractors feel constrained by the requirement that they use a validated survey to assess change. To address this, capacity building strategies were implemented to help contractors match nutrition education activities with determinants of behavior. This fit was used to accurately match interventions with surveys and to refine interventions. Strategies also prepared contractors to report qualitative data that capture successes not measured by the surveys. Discordance between policies and field capacity can lead to an incomplete picture of program impact and inaccurate interpretation of results. The implication for evaluation is that capacity building and policy making are iterative.