2011

Session Title: Inside the Outdoors: Field Trip to Outdoor Environmental Education Site
Demonstration Session 650 to be held in Off Site on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Environmental Program Evaluation TIG
Presenter(s):
Kara Crohn, Research Into Action, karac@researchintoaction.com
Abstract: This demonstration session offers an opportunity to learn about Inside the Outdoors, a K-12 outdoor science education program. The program is independently funded and administered by the Orange County Department of Education. It serves 152,000 students annually, providing scholarships to 55,000 low-income students. Its curriculum is aligned with, and assesses student learning according to, California Science and Social Science standards. The session includes a site visit to an Inside the Outdoors program location: Irvine Regional Park or Great Park. The tour will be led by the Program Administrator. Discussions will cover program goals and activities, ways the program incorporates core values into the curriculum and assessment of state standards, and the program's use of assessment, monitoring, and evaluation. We will engage in a group brainstorming session to help the program further develop evaluation and assessment ideas that demonstrate the effectiveness of key values. Space for this field trip is limited.

Session Title: Fuzzy Logic Models: Embracing and Navigating Complexity
Demonstration Session 651 to be held in Avalon A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
Matt Keene, United States Environmental Protection Agency, mattkeene222@gmail.com
Chris Metzner, ChrisMetzner Data Visualization and Graphic Design, chrismetzner@gmail.com
Abstract: Evaluation teams use logic models for program description, communication, and designing evaluation questions and methods. The development and use of logic models is often limited to the immediate stakeholders of a clearly defined program during the initial stages of the evaluation process. A fuzzy logic model embraces fluid and approximate reasoning and varied context to improve the capacity of logic models to navigate non-linearity, feedback loops, and other key concepts of complexity. Integrating web 2.0, graphic design and data visualization with traditional logic models creates opportunities to account for complexity and expands access to and use of the evaluation process for a greater diversity of stakeholders over a longer period of time. In this demonstration, facilitators will introduce an example of a fuzzy logic model (www.paintstewardshipprogram.com) and use a draft wiki-style online handbook to walk session participants through the steps and resources necessary to create a fuzzy logic model.

Session Title: Designing and Implementing Culturally Sensitive Approaches to Evaluation
Multipaper Session 652 to be held in Avalon B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Dominica McBride, The HELP Institute Inc, dmcbride@thehelpinstitute.org
Valuing Stakeholders: Integrating the Community Based Participatory Research Process (CBPR) into Evaluating Interventions for Culturally Diverse Populations
Presenter(s):
Wendy Wolfersteig, Arizona State University, wendy.wolfersteig@asu.edu
Crista Johnson, Arizona State University, cejohn11@asu.edu
Holly Lewis, Arizona State University, htlewis@mainex1.asu.edu
Aimee Sitzler, Arizona State University, aimee.sitzler@asu.edu
Nicholet Deschine, Arizona State University, nicholet.deshine@asu.edu
Abstract: Community based participatory research (CBPR) was used to integrate clients' and stakeholders' values and needs into adapting and evaluating interventions serving four different culturally diverse groups. The process and evaluations involved community advisory boards, stakeholders and participants, starting with local needs assessments. Highly culturally sensitive approaches were called for in serving these diverse populations: (1) Somali refugee women on breast cancer education; (2) urban American Indians on parent education; (3) suicide prevention efforts targeting LGBTQ youth; and (4) rural communities building substance abuse coalitions. Using the CBPR process, community leaders worked with researchers/evaluators to identify interventions that were then adapted - and evaluations were designed alongside. The inclusion of the community facilitated understanding of needed program elements while eliciting buy-in to the evaluation and meaningful collection of evaluation data. Evaluators learned values and processes for broad community involvement in designing and implementing strategies that resonate with and instill confidence in the community and participants.
Using Research and Evaluation as Tools to Design and Implement Culturally-Appropriate Interventions
Presenter(s):
Louise Palmer, KDH Research & Communication Inc, lpalmer@kdhrc.com
Kimberly Stringer, KDH Research & Communication Inc, kstringer@kdhrc.com
Kristen Holtz, KDH Research & Communication Inc, kholtz@kdhrc.com
Eric Twombly, KDH Research & Communication Inc, etwombly@kdhrc.com
Abstract: This paper describes the use of research and evaluation to determine culturally-appropriate program content and implementation methods for En Familia, an intergenerational Latino health literacy program. Formative research consisted of 12 interviews with key Latino health stakeholders across the country to explore program content and implementation methods. Findings showed common health issues across Latino subgroups, but that program implementation methods must be community-specific. A second formative research study used focus group methodology to identify culturally-appropriate program content and format in a Mexican American US border community. Analysis of the findings informed our program development. The feasibility evaluation used a pretest/post-test quasi-experimental design with 34 families in a Mexican American US border community to explore the efficacy of En Familia in changing knowledge and attitudes about health and improving health literacy skills. Findings suggest En Familia increases knowledge, positive attitudes towards healthy behaviors, and health literacy skills.

Session Title: Causal Loop Diagrams: Design and Applications
Skill-Building Workshop 653 to be held in California A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Jeffrey Wasbes, Research Works Inc, jwasbes@researchworks.org
Abstract: Much work has been done recently on the application of systems concepts to evaluation practice. It is often difficult to operationalize systems theory in evaluation practice because many methods for diagramming system structure and process rely on linear cause-effect assumptions and representations. A particularly useful notation for explicating complex system structure is the causal loop diagram. Participants in this skill-building workshop will learn the concepts of causal loop diagramming by developing diagrams of a problem of their own choosing. While exploring simple-to-use diagramming tools, participants will be introduced to related concepts such as endogenous causality, link and loop polarity, and feedback effects. The session will address how extending traditional causal loop diagrams to include stock and flow structure can improve the utility and clarity of participants' diagrams. Participants will leave the session with a start on a diagram germane to their own work and knowledge of other resources for further exploration.
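For readers who want to experiment with these ideas before the workshop, here is a minimal sketch, not taken from the session, that represents a causal loop diagram as a signed directed graph in Python (using networkx) and classifies each feedback loop as reinforcing or balancing. The variable names and links are hypothetical examples.

```python
# Minimal sketch: a causal loop diagram as a signed directed graph.
# Variable names and links below are illustrative, not from the workshop.
import networkx as nx

G = nx.DiGraph()
# Each link carries a polarity: +1 (change in same direction) or -1 (opposite direction).
links = [
    ("program funding", "staff capacity", +1),
    ("staff capacity", "service quality", +1),
    ("service quality", "community demand", +1),
    ("community demand", "staff workload", +1),
    ("staff workload", "service quality", -1),
]
for source, target, polarity in links:
    G.add_edge(source, target, polarity=polarity)

# A feedback loop is reinforcing if the product of its link polarities is positive,
# balancing if the product is negative.
for cycle in nx.simple_cycles(G):
    product = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        product *= G[a][b]["polarity"]
    kind = "reinforcing" if product > 0 else "balancing"
    print(" -> ".join(cycle), f"({kind})")
```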

Session Title: Collaborative Evaluations
Skill-Building Workshop 654 to be held in California B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Rigoberto Rincones-Rodriguez, MDC Inc, rrincones@achievingthedream.org
Abstract: This highly interactive skill-building workshop is for evaluators who want to engage and succeed in collaborative evaluations. In clear and simple language, the presenter outlines key concepts and effective tools/methods to help master the mechanics of collaboration in the evaluation environment. Specifically, the presenter will blend theoretical grounding with the application of the Model for Collaborative Evaluations (MCE) to real-life evaluations, with a special emphasis on those factors that facilitate and inhibit stakeholders' participation. The presenter shares her experience and insights regarding this subject in a precise, easy-to-understand fashion, so that participants can use the information learned from this workshop immediately.

Session Title: Success is Giving a Fish: Valuing Indigenous Values
Innovative Format Session 655 to be held in Pacific A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Presidential Strand
Presenter(s):
Kas Aruskevich, Evaluation Research Associates LLC, kas@evaluationresearch.us
James Johnson III, Evaluation Research Associates LLC, james@evaluationresearch.us
Sandy Kerr, Brown Research, smbkerr@xtra.co.nz
Odin Peter-Raboff, Yahdii Media, odin@yahdiimedia.com
Abstract: ‘Giving a fish’ was the cultural metaphor used to define success in one evaluation in an indigenous community in New Zealand. The accompanying ‘fish giving’ story was embedded with cultural values able to provide multiple criteria for evaluation. For Alaskan Native communities, stories of fish giving are also richly layered with cultural metaphors and values associated with family, community, environment, spirituality, health, education and economics to name but a few. Evaluators from Alaska and New Zealand juxtapose storytelling, text and videography to show how ‘fish giving’ and other cultural stories are replete with important cultural values and perspectives that are not often represented in evaluation. The audience is challenged to explore the politics of representation in evaluation, through the practical demonstration of the power of creatively presented indigenous stories to engage our reasoning and emotions towards a holistic understanding of justice and equity issues for indigenous peoples.

Session Title: Shaping Evaluation Functions: The Canadian Federal Policy on Evaluation and its Interpretation in Federal Departments and Agencies
Panel Session 656 to be held in Pacific B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Evaluation Policy TIG
Chair(s):
Anne Routhier, Treasury Board of Canada Secretariat, anne.routhier@tbs-sct.gc.ca
Abstract: The Government of Canada adopted its first federal-level policy on evaluation in 1977. This policy was intended to guide evaluation practice in federal departments/agencies. Over nearly three and a half decades, this federal-level policy has continued to evolve and, in keeping with policy trends in many jurisdictions, has focused increasingly on 'enabling' government departments/agencies. Some departments/agencies have, in turn, begun to formalize their evaluation functions through the development and adoption of organization-level evaluation policies, charters or systems. In this presentation, the Treasury Board of Canada Secretariat's Centre of Excellence for Evaluation will provide an overview of how the federal policy balances 'principle-' and 'directive-'based elements to create space for departments/agencies to actualize their own evaluation functions. Presenters from the Public Health Agency of Canada will then provide details on how they have built on the federal policy to shape their evaluation function.
Creating an Enabling Federal-level Policy on Evaluation
Brian Moo Sang, Treasury Board of Canada Secretariat, brian.moosang@tbs-sct.gc.ca
Anne Routhier, Treasury Board of Canada Secretariat, anne.routhier@tbs-sct.gc.ca
The renewed Canadian Federal Policy on Evaluation (2009) was under development between 2006 and 2009, a period when the AEA and academics began to expand the level of dialogue and thinking about evaluation policies. In this introductory presentation, the Treasury Board of Canada Secretariat's Centre of Excellence for Evaluation - the policy centre that developed the federal policy on evaluation - will examine its own policy, in part using the work of Cooksy, L.J., Mark, M.M., and Trochim, W.M.K. in "Evaluation Policy and Evaluation Practice: Where Do We Go From Here?" (New Directions for Evaluation, 123, 103-109, 2009), to explore how the policy supports federal departments and agencies in both producing quality evaluations and using them for a variety of purposes. In doing so, lessons learned to date by the Centre that may inform the development of evaluation policy in other jurisdictions will be discussed.
Implementing the Policy on Evaluation at the Public Health Agency of Canada: Leverage and Challenges
Paule-Anny Pierre, Public Health Agency of Canada, paule-anny.pierre@phac-aspc.gc.ca
Nancy Porteous, Public Health Agency of Canada, nancy.porteous@phac-aspc.gc.ca
In Canada, public health is a responsibility that is shared by the three levels of government, the private sector, nongovernment organizations, health professionals and the public. The Public Health Agency of Canada (PHAC) was created within the federal Health Portfolio to help protect and improve the health and safety of all Canadians and to contribute to strengthening the health care system. The Agency's activities focus on preventing and controlling chronic and infectious diseases, preventing injuries and preparing for and responding to public health emergencies. As an organization subject to the Policy on Evaluation, the Agency has undertaken actions to comply with the requirements, including important steps and structural changes to strengthen the evaluation function and increase its independence. Successes and challenges in implementing the policy at PHAC will be discussed in this presentation.

Session Title: Implementing Survival Analysis to Increase Evaluation Value
Multipaper Session 657 to be held in Pacific C on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Guili Zhang, East Carolina University, zhangg@ecu.edu
Teacher Turnover and Retention in Los Angeles Urban Schools: Two-Level Discrete-Time Survival Analysis
Presenter(s):
Xiaoxia Newton, University of California Berkeley, xnewton@berkeley.edu
Rosario Rivero, University of California, Berkeley, rosario.rivero.c@gmail.com
Abstract: Teacher retention has attracted increasing attention in the research and policy community. While prior empirical literature has provided valuable insights into the factors that shape teacher turnover, it has conceptual and methodological limitations. Our study examined which individual background characteristics and organizational factors are related to teacher retention in urban schools. We conducted a two-level discrete-time survival analysis using 7-year panel data from the Los Angeles Unified School District (LAUSD). Our findings on who leaves teaching, at what career stage, and how teacher quality, subject assignment, and school context relate to retention have important implications for policy formulation on teacher retention and for the policy push to use value-added methods for teacher accountability, especially among teachers in charter schools. In addition, our analysis provides some evidence on how efforts at LAUSD to improve school facilities might be associated with teacher stability at those schools.
Using Nonparametric Survival Analysis in Longitudinal Evaluation of Dynamic Activities
Presenter(s):
Guili Zhang, East Carolina University, zhangg@ecu.edu
Abstract: Time is an important factor in longitudinal evaluation of dynamic activities, and traditional quantitative data analysis techniques have serious limitations when it comes to addressing it. This paper illustrates the usefulness and advantages of nonparametric survival analysis for evaluating undergraduate engineering student retention over the past two decades in the southeastern United States. A large longitudinal database that includes 100,179 engineering students from nine universities and spans 19 years was used; nonparametric survival analysis was applied to obtain nonparametric estimates of survival and associated hazard functions and to conduct rank tests of the associations between variables. The results of this evaluation study support using survival analysis to better understand factors that affect student success, since student retention is a dynamic problem. Survival analysis allows characteristics such as risk to be evaluated by semester, giving insight into when interventions might be most effective.
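As a concrete illustration of the nonparametric approach described above, here is a small sketch, not the author's code, that computes the Kaplan-Meier product-limit estimate of semester-by-semester retention from toy duration data; the records and layout are assumptions made for illustration.

```python
# Illustrative sketch of the Kaplan-Meier estimator on semester-level retention data.
# The toy records below are invented; real data would come from the longitudinal database.
import numpy as np

# Each student: semesters observed, and 1 = left engineering, 0 = censored (still enrolled)
durations = np.array([2, 5, 8, 8, 3, 12, 12, 6, 4, 12])
events    = np.array([1, 1, 1, 0, 1,  0,  0, 1, 1,  0])

survival = 1.0
print("semester  at_risk  left  S(t)")
for t in np.unique(durations[events == 1]):
    at_risk = np.sum(durations >= t)                   # students still at risk entering semester t
    left = np.sum((durations == t) & (events == 1))    # students who left during semester t
    survival *= 1.0 - left / at_risk                   # Kaplan-Meier product-limit update
    print(f"{t:8d}  {at_risk:7d}  {left:4d}  {survival:.3f}")
```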

Roundtable: Responsive Approaches to Strengthening Monitoring, Learning and Evaluation Capacity: A Case Study, Shack/Slum Dwellers International (SDI) and Rockefeller Foundation
Roundtable Presentation 658 to be held in Conference Room 1 on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Suman Sureshbabu, Rockefeller Foundation, ssureshbabu@rockfound.org
Sheela Patel, Slum/Shack Dwellers International, sparcssns@gmail.com
Abstract: Slum/Shack Dwellers International (SDI) is an alliance of organizations of the urban poor from 33 countries in Africa, Asia and Latin America. Its mission is to link poor urban communities so they can mobilize, advocate, and put forth problem-solving strategies to ensure cities integrate the interests of slum dwellers into urban development. Recently the Rockefeller Foundation and SDI partnered to help strengthen SDI's internal systems for monitoring, learning and evaluation (MLE). SDI was interested in how it could better utilize MLE to describe its own work, measure its successes, and scale up its efforts. The Foundation was interested in how to better support grantees in improving their own MLE systems so they can better articulate their challenges and successes to donors and policy makers. The presentation concludes with how the process of building partnerships to support MLE in urban poor networks can go beyond addressing accountability to actually help strengthen social movements.

Roundtable: Towards a Framework for Not-for-Profit and For-Profit Partnerships in Educational Evaluation
Roundtable Presentation 659 to be held in Conference Room 12 on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Alma Boutin-Martinez, University of California, asm188@hotmail.com
John Yun, University of California, jyun@education.ucsb.edu
Prashant Rajvaidya, Mosaic Network Inc, prash@mosaic-network.com
Abstract: This roundtable will work towards a framework for collaborations between not-for-profit and for-profit organizations by discussing and identifying the strengths and challenges associated with these types of evaluation collaborations. Drawing from the experiences of two evaluation organizations, Mosaic Network Inc. (for-profit) and the University of California Educational Evaluation Center (UCEC) (public-not-for-profit), this discussion will examine the ways in which Mosaic and UCEC support a common goal of advancing educational evaluation while contributing to the goal in unique ways. More specifically, this roundtable will allow for the discussion of examples of how these organizations are benefitting in this partnership by building on each other's strengths. In addition to these positive experiences, the organizations will also share specific issues that have arisen around the work cultures and internal organizational needs that have proven challenging for both groups. Given these examples, we hope to provide guidance for other groups contemplating such work.

Session Title: Using Data Flow Diagram in the International Development Context
Demonstration Session 660 to be held in Conference Room 13 on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Chung Lai, International Relief & Development, clai@ird-dc.org
Abstract: Data flow diagrams (DFDs) are commonly used to visualize the flow of data in a system, and they can also be used to manage international development projects. DFDs can illustrate project systems and processes; for example, a DFD can be used to visualize a project's M&E system for project users and customers. This session will use international development case studies to highlight DFD use as a management tool. In these case studies, DFDs are used as a graphic representation of the M&E system to understand the system as a whole, highlight data quality points, and understand the "data flow". Within this context, "data flow" refers to data collection, entry (transcription), processing (transformation), synthesis, analysis, use and storage. These are the key data quality check points that can affect overall data quality, whose value and function become compromised if the check points are weak or out of compliance.

Session Title: New Tools for Cost Benefit Analysis Derived From Old, Much-Neglected Concepts and Equations
Panel Session 662 to be held in Avila A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Georg Matt, San Diego State University, gmatt@sciences.sdsu.edu
Abstract: This short panel will present some new ideas derived from Brogden's seven-decade-old equation for estimating the monetary impact of an intervention. Unfortunately, the equation could not be applied until Schmidt and Hunter proposed an estimate of the standard deviation of productivity. We add two new ideas: 1) how to compute the effect size at the break-even point and estimate ROIs, capitalizing on effects from evidence-based programs and from meta-analyses; and 2) how to compute the effect size necessary to outperform the ROI of a competing program. These tools also help in reducing many anxieties related to evaluation.
The Effect Size at the Break-Even Point and the Estimation of the Return on Investment: A Powerful Tool for Evaluating Programs Ex Ante and Ex Post
Werner Wittmann, University of Mannheim, wittmann@tnt.psychologie.uni-mannheim.de
Andres Steffanowski, University of Mannheim, andres@steffanowski.de
The Brogden-Cronbach-Gleser cost-benefit equation, popularized by Hunter and Schmidt, can be used to compute the effect size of a program at the break-even point, where cost and benefit are equal. Such an effect is the minimum an intervention must demonstrate, and it can be computed ex ante, long before a program is implemented. The real effect size from an intervention can then be related to the break-even effect to estimate the return on investment. When the estimated ROI of a competing intervention is known, one can compute ex ante, from that ROI and the break-even effect, the minimum real effect size an intervention must surpass to outperform the competitor. These tools are not well known despite being developed seven decades ago. We applied them to selected areas and concluded that the economic impacts of many psychological interventions are heavily underestimated by all of us.
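One plausible reconstruction of the break-even and ROI logic described above is sketched below; the notation is chosen here for illustration rather than taken from the presenters.

```latex
% Hedged reconstruction of the break-even / ROI logic; notation is ours.
% N = number of people treated, T = years the effect persists,
% d = effect size in standard-deviation units, SD_y = dollar value of one SD
% of the outcome, C = total program cost.
\Delta U = N \, T \, d \, SD_y - C
\qquad\Rightarrow\qquad
d_{BE} = \frac{C}{N \, T \, SD_y}

\mathit{ROI} = \frac{\Delta U}{C} = \frac{d}{d_{BE}} - 1
\qquad\Rightarrow\qquad
d > \left(1 + \mathit{ROI}^{*}\right) d_{BE}
\ \text{ is required to outperform a competitor with return } \mathit{ROI}^{*}.
```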
Why we Should not be Afraid of "Virginia Woolf": Oops we Mean Cost-Benefit Implications of Psychological Interventions!
Andres Steffanowski, University of Mannheim, andres@steffanowski.de
Werner Wittmann, University of Mannheim, wittmann@tnt.psychologie.uni-mannheim.de
Psychological interventions have a shorter history than those from many other areas, especially the hard sciences. Psychology is often under the threat of being labeled a soft science, and many believe that investments in programs derived from soft-science evidence are not competitive in terms of ROI. We use information from large-scale program evaluation studies of inpatient and outpatient psychotherapy outcomes that we conducted in Germany to demonstrate what a myth such a statement is, and that not investing in them leads to dramatic opportunity costs.

Session Title: Designing and Using Data Dashboards for Monitoring and Evaluation: A Case Study
Demonstration Session 663 to be held in Avila B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Data Visualization and Reporting TIG
Presenter(s):
Veronica Smith, data2insight LLC, veronicasmith@data2insight.com
Veronica Thomas, Howard University, vthomas@howard.edu
Abstract: Data dashboards are visual displays of the most important information needed to achieve one or more objectives. If utilized effectively, they can provide a unique and powerful means of communicating important information at a glance. Evaluators are increasingly using dashboards to track program performance and impact over time and to evaluate outcomes against aims. This demonstration will provide: 1) an examination of the steps involved in engaging stakeholders in identifying and prioritizing dashboard metrics in an evaluation of a large, complex clinical and translational science project (CTSA); 2) a protocol for the design and management of dashboards using Microsoft Excel; and 3) a discussion of how dashboard development and use can help evaluators meet the Joint Committee's standards of utility, feasibility, propriety and accuracy. Participants will leave the demonstration with lessons learned from the CTSA case study and better informed about how to use data dashboards to improve their evaluation practice.
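As a minimal sketch of the kind of Excel-based dashboard the presenters describe (this is not their protocol; the metric names and values are hypothetical), a small metrics table and embedded chart can be generated programmatically with openpyxl:

```python
# Minimal sketch: write a tiny dashboard sheet with one metric table and a line chart.
# Metric names and numbers are invented for illustration.
from openpyxl import Workbook
from openpyxl.chart import LineChart, Reference

wb = Workbook()
ws = wb.active
ws.title = "dashboard"

ws.append(["Quarter", "Participants enrolled", "Enrollment target"])
for row in [("Q1", 40, 50), ("Q2", 55, 60), ("Q3", 72, 70), ("Q4", 90, 80)]:
    ws.append(row)

chart = LineChart()
chart.title = "Enrollment vs. target"
data = Reference(ws, min_col=2, max_col=3, min_row=1, max_row=5)       # series with headers
categories = Reference(ws, min_col=1, min_row=2, max_row=5)           # quarter labels
chart.add_data(data, titles_from_data=True)
chart.set_categories(categories)
ws.add_chart(chart, "E2")

wb.save("dashboard.xlsx")
```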

Roundtable: Evaluating Academic Outcomes and Higher Order Thinking Skills in an Educational Setting
Roundtable Presentation 664 to be held in Balboa A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
George Chitiyo, Tennessee Technological University, gkchitiyo@yahoo.com
Lisa Zagumny, Tennessee Technological University, lzagumny@tntech.edu
Deborah Setliff, Tennessee Technological University, dsetliff@tntech.edu
Abstract: An ambitious quasi-experiment was developed to assess whether modifying the middle school mathematics curriculum to include chess playing yields benefits in academic achievement and higher order thinking skills. If chess as an instructional heuristic, rather than an extracurricular after-school program, impacts academic outcomes and critical thinking, the model can be implemented on a larger scale to reach a greater number of students. This presentation seeks to generate a discussion among evaluators concerning their experiences and challenges in evaluating this and similar projects, including grappling with the paradox that the key outcomes are long-term, yet circumstances allow them to assess only short-term outcomes.

Session Title: Using a Research and Development Approach to Maximize Learning, Replication, and Scaling in the Non-profit Sector
Demonstration Session 666 to be held in Capistrano A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Peter York, TCC Group, pyork@tccgrp.com
PeiYao Chen, TCC Group, pchen@tccgrp.com
Abstract: Investors in the private sector expect product designers to use R&D methods to determine which combinations of product elements and delivery strategies work, for whom, and under what conditions. This demonstration argues that it's time for philanthropic and nonprofit leaders to adopt an R&D approach to maximize learning and continuously improve detailed program elements and practices for target audiences. Our data from nearly 2000 nonprofits nationwide indicate that nonprofits that engage in R&D practices are 2.5 times more sustainable than those that do not. This session will share best practices for using R&D methods to help organizations reach their ultimate goals faster, with greater stakeholder engagement, and for less money. It will highlight why an R&D approach empowers the nonprofit sector to bring new perspectives, strategies, and learning to the table. It will also discuss the core principles and key steps of engaging key stakeholders in the R&D process.

Session Title: Involving Project Staff in Qualitative Data Analysis Using a Thematic Checklist
Panel Session 667 to be held in Capistrano B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Internal Evaluation TIG
Chair(s):
Cynthia Olney, National Network of Libraries of Medicine Resource Center, olneyc@coevaluation.com
Abstract: From a capacity building perspective, project staff's involvement in data analysis provides a unique opportunity for them to learn about their programs. Qualitative data can be particularly compelling and interesting because of its detail and often story-like quality. However, qualitative data can be a challenge for the most seasoned evaluators to summarize and analyze, so strategies must be employed to help engage project staff members who are usually over-extended with responsibilities and often not evaluation oriented. This panel will feature presentations about evaluations conducted at two separate organizations in which a similar strategy - a thematic checklist - was used to involve project staff in qualitative data analysis. After a brief overview of the method by the panel chair, presenters will describe their application of the strategy, some of the insights gained by staff members about their projects, and lessons learned from involving project staff in qualitative analysis.
Project Staff Participation in Analysis of Data From a Community Preparedness Day Pilot Project
Susan Barnes, National Network of Libraries of Medicine Resource Center, sjbarnes@u.washington.edu
The National Library of Medicine (NLM), part of the National Institutes of Health, provided funding to libraries in three US cities to hold Community Preparedness Days. Similar to health fairs, these events brought together local and state organizations involved in emergency response to introduce the public to their services through exhibits and demonstrations. The NLM initiated this project to help libraries raise their profiles as important partners in community disaster response - not only with the public, but also among emergency preparedness and response organizations. Because this was a pilot project, qualitative methods provided the best approach for gathering information about event implementation and collecting recommendations from pilot teams for libraries that might hold similar events in the future. The NLM staff overseeing the awards used the thematic checklist to assist with analysis of data from interviews conducted with the pilot-site librarians and emergency responders who planned and held the events.
Project Staff's Analysis of Learner Interview Data in a Community-based Advocacy Training Program for Medical Students and Residents
Judith Livingston, University of Texas Health Science Center, San Antonio, livingstonj@uthscsa.edu
The University of Texas Health Science Center at San Antonio's Regional Academic Health Center offers an elective rotation in which medical students and residents spend four weeks living and working in one of the poorest regions of the US - the Texas Lower Rio Grande Valley (LRGV). Learners work in partnership with LRGV community-based organizations (CBOs) on advocacy projects identified as needed by both the CBOs and the learners. Each rotation hosts four to six learners whose experiences vary based on the current environment of the border region, their own personal learning goals, and community needs. The project's most compelling evaluation data come from post-experience interviews conducted by project faculty - one at the completion of each rotation and one conducted six months after the rotation. Project staff developed a thematic checklist to analyze interview summaries and assess the experience's effect on learners' perceptions, attitudes, and commitments toward future advocacy work, particularly in underserved communities.

Session Title: The Alpha and Omega of Mental Health Program Evaluation
Skill-Building Workshop 668 to be held in Carmel on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Heather Wallace, Centerstone Research Institute, heather.wallace@centerstone.org
Sarah Suiter, Centerstone Research Institute, sarah.suiter@centerstone.org
Krista Davis, Centerstone Research Institute, krista.davis@centerstone.org
Abstract: Systems of Care evaluations are large-scale evaluation projects that require the participatory engagement of multiple and diverse stakeholders, employ a broad array of methods, and aim to create sustainable mental health systems change through practice, policy, and data use. Like all evaluation projects, beginning well and ending strong is imperative to achieving evaluation goals. Projecting and preparing for longitudinal outcomes requires a brand of preparation, foresight, and relationship building that is particular to community-based mental health initiatives. In this skill-building workshop, the authors will present specific practices, approaches, and lessons-learned to conduct a successful evaluation in this complex service-provision environment. Attendees will have the opportunity to engage with one another, examine the relationship between program and personal values, share their experiences, and apply the tools presented in this session to their own contexts.

Session Title: En"gendering" Evaluation: Gaining a Deeper Understanding of Feminist Evaluation from Feminist Evaluators
Expert Lecture Session 669 to be held in Coronado on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Linda Thurston, National Science Foundation, lthursto@nsf.gov
Presenter(s):
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Abstract: A lack of clarity persists in the evaluation community about what constitutes feminist evaluation. It is seen as gender analysis, or as evaluation focused on women's issues. It is also seen as evaluation that works towards social justice goals, exploring intersectional identity aspects that differentially affect program impact and outcomes. This paper explores information gained through interviews with 15 key feminist evaluators to understand and represent their perspectives on (a) what their understanding of feminist evaluation is; (b) the possible reasons why many evaluators choose not to identify as 'feminist' in their practice and scholarship, even though their work draws strongly from feminist principles and research; and (c) the personal and professional gains and losses, and those for the evaluation field as a whole, when feminist evaluators choose not to locate themselves as feminist evaluators. Alternate terms used for entry into evaluation endeavors to promote social justice goals are discussed.

Session Title: An Ounce of Prevention: Practical Approaches to Evaluating Community Health Initiatives
Panel Session 670 to be held in El Capitan A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Jill Lipski Cain, The Improve Group, jill@theimprovegroup.com
Abstract: In this session, two evaluators from The Improve Group will share instruments and evaluation processes they developed to evaluate community-wide health prevention programs funded by Minnesota's Statewide Health Improvement Program, designed by the state in 2008 to reduce the leading causes of death and chronic disease: obesity and tobacco use. Presenters will describe the goals and parameters of the evaluations, the tools and methods they designed, and lessons learned about the methods. In addition, the kinds of analysis and findings that these tools can support will be explored. Presentation and discussion will focus on the opportunities, challenges, considerations and limitations associated with these methods. Attendees will be invited to share their own experiences, observations and suggestions in evaluating health prevention programs through a facilitated panel discussion.
Using Case Studies to Develop a Broader Understanding of Community Impact
Stacy Johnson, The Improve Group, stacyj@theimprovegroup.com
This session explores how case studies can be used to document outcomes in health prevention programs. The presenter will share her experiences selecting case study sites, designing instruments, collecting data and creating case study reports. The case studies were used in the evaluations of Minnesota's Statewide Health Improvement Program grants in rural Faribault, Martin and Watonwan Counties and in the City of Minneapolis. From 2009 to 2011, the agencies worked with schools, community organizations, healthcare facilities and worksites to make systems, policies and environmental changes to improve the health of citizens. In this two-year period, some organizations made huge strides while others faced significant challenges getting work off the ground. The agencies embarked on the case study process to document program successes and lessons learned, and inform other agencies doing similar work across the nation. Presenters will share data collection tools created for the project, and final case study reports.
Evaluating Bike Racks, One Biker at a Time: Measuring Health Behavior Changes That Follow Systems, Policy and Environment Changes
Elizabeth Radel Freeman, The Improve Group, lizf@theimprovegroup.com
By making changes to health policies, systems and environments, public health advocates aspire to make the healthy choice the easy choice for citizens. Evaluators are charged with showing whether people actually do make the "healthy choice." This presentation shares techniques for gathering health behavior data from communities that have experienced a change in policy, systems, and/or environments to promote healthy eating and physical activity. Attendees will review several surveys and interview protocols created to capture behavior changes for those using community gardens, food shelves, bike racks, and other health amenities. The methods presented were developed for the evaluation of suburban Dakota County's Statewide Health Improvement Program, an intervention aimed at promoting policy, systems and environmental changes to reduce the burden of chronic disease in Minnesota.

Session Title: How to Ask Latinos? Understanding Mexican Culture to Improve the Interview Experience with Latinos
Skill-Building Workshop 671 to be held in El Capitan B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Efrain Gutierrez, FSG Social Impact Consultants, efrain.gutierrez@fsg.org
Abstract: This workshop will discuss some of the most important characteristics of Mexican culture and provide specific recommendations to improve the quality of interviews/focus groups with Latinos. The size and relevance of the Latino community in the US is growing. For that reason, evaluators need cultural awareness skills to help them capture the opinions and experiences of this community. The session will explain different aspects of Mexican culture using insights from Mexican Nobel laureate Octavio Paz. Cultural considerations will be explained in the context of the interview/focus group process. Finally, the presenter will provide techniques that can improve the interview experience for Latino participants. The awareness created during the training will allow participants to identify cultural differences and respond accordingly. At the end of the presentation a few minutes will be devoted to discussing resources for those who want to learn more about Latino culture and cultural competency.

Roundtable: Ethnographic Evaluation: A Realistic Choice or a Contradiction in Terms?
Roundtable Presentation 672 to be held in Exec. Board Room on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
MaryLynn Quartaroli, Northern Arizona University, marylynn.quartaroli@nau.edu
Frances Julia Riemer, Northern Arizona University, frances.riemer@nau.edu
Abstract: Evaluation is an act of determining the value or worth of programs, personnel, products, materials, and/or policies. The selection of a particular methodology ultimately reflects the values of the evaluators, the stakeholders, and the audiences of our work. An evaluator with an intuitionist/pluralist perspective seeks the widest representation of values from diverse populations. One methodological option is ethnography. A fundamental question is whether ethnography is appropriate for conducting evaluations. Doing ethnographic evaluation is more than selecting particular ethnographic data collection methods from a cafeteria of choices and applying these to evaluation projects. It requires evaluators to spend significant amounts of time interacting with all stakeholders in order to fully understand a program from both emic (insider) and etic (outsider) perspectives. Only through this process can diverse values be uncovered. The session will provide examples of successful ethnographic evaluation projects in Botswana, Nicaragua, and the United States.

Session Title: A Web Tool for Collecting County-level Program Implementation Data for State-level Evaluation
Demonstration Session 673 to be held in Huntington A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Alyssa Wechsler, University of Wyoming, alywex@uwyo.edu
Humphrey Costello, University of Wyoming, hcostell@uwyo.edu
Abstract: Many state programs are implemented at the county or district level. While outcome data may be available for counties or districts, program implementation data are harder to obtain. Without reliable implementation data, evaluators are challenged to attribute variation in outcomes across counties or districts to variation in implementation. In this demonstration, we present a Web tool for collecting county-level program implementation data. The tool was designed to facilitate evaluation, serve administrative needs, and support program development. Given these multiple demands, we sought to develop a solution that measures activities consistently and yields useful indicators. At the same time, we tried to make the tool simple enough to use that it eases the reporting burden on county-level program coordinators.

Session Title: Statistical Approaches to Multisite Evaluations
Multipaper Session 675 to be held in Huntington C on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Intraclass Correlation Coefficients and Effect Sizes for Planning School-Randomized Experiments
Presenter(s):
Paul R Brandon, University of Hawaii, Manoa, brandon@hawaii.edu
George Harrison, University of Hawaii, Manoa, georgeha@hawaii.edu
Brian Lawton, Independent Consultant, blawton@hawaii.edu
Jerald Plett, Hawaii Department of Education, jerald_plett/sas/hidoe@notes.k12.hi.us
Abstract: The power analyses conducted for group-randomized experiments require estimating minimum detectable effect sizes (MDES) and intraclass correlation coefficients (ICC), among other statistics. The estimates should be based on previous empirical studies. In the first part of our study, we present ICCs and MDESs for the Hawaii State Assessment reading test and mathematics test for seven grades over nine years. The results provide an empirical basis for planning school-randomized experiments in Hawai'i and elsewhere. In the second part of the study, we present distributions of Hawai'i public-school reading and mathematics between-school effect sizes over the seven grades and nine years. The distributions help show the likelihood that researchers can achieve the MDESs that we calculate in the first part of the study. We provide the SAS macros developed to conduct the analyses. Our methods can be adopted by any educational jurisdiction, thereby helping guide evaluation studies nationwide.
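For readers planning similar power analyses, here is a rough Python sketch, ours rather than the authors' SAS macros, of the two quantities the paper estimates: an intraclass correlation from a one-way ANOVA and a minimum detectable effect size for a two-level school-randomized design. All numbers below are invented for illustration.

```python
# Rough sketch: ICC from a one-way ANOVA and a Bloom-style MDES for a
# cluster-randomized design. Simulated data; not the authors' code or estimates.
import numpy as np
from scipy import stats

def icc_one_way(scores_by_school):
    """ICC(1) from between- and within-school mean squares (equal-n approximation)."""
    k = len(scores_by_school)                        # number of schools
    n = np.mean([len(s) for s in scores_by_school])  # average students per school
    grand = np.mean(np.concatenate(scores_by_school))
    ms_between = n * sum((np.mean(s) - grand) ** 2 for s in scores_by_school) / (k - 1)
    ms_within = np.mean([np.var(s, ddof=1) for s in scores_by_school])
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

def mdes_cluster_randomized(J, n, rho, P=0.5, alpha=0.05, power=0.80):
    """MDES for a two-level cluster-randomized trial, no covariates, J schools of n students."""
    df = J - 2
    multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    return multiplier * np.sqrt(rho / (P * (1 - P) * J) + (1 - rho) / (P * (1 - P) * J * n))

rng = np.random.default_rng(0)
schools = [rng.normal(loc=rng.normal(0, 0.4), scale=1.0, size=60) for _ in range(40)]
rho = icc_one_way(schools)
print(f"estimated ICC = {rho:.3f}")
print(f"MDES with 40 schools of 60 students = {mdes_cluster_randomized(40, 60, rho):.3f}")
```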
Longitudinal Multivariate Analysis of Ecological Theory to Increase Highway Safety and Reduce Fatalities and Serious Injuries
Presenter(s):
Robert Seufert, Miami University, seuferrl@muohio.edu
Abstract: To obtain federal highway funds, states must annually collect observational data on seat belt use. States also annually compile crash report data indicating that fatalities and serious injuries are directly or indirectly influenced by factors including vehicle miles traveled, roadway type, seat belt use, speed, alcohol impairment, vehicle type, motorcycle and helmet use, cell phone use, and other distractions. This research analyzes data from both databases for 2006 through 2010 and tests an ecological model of the interrelationships between variables within multiple environments. The theory depicts 'a nested structure of environments' with a complex web of causation and context for implementing effective interventions to prevent highway fatalities and serious injuries. The theory is tested with statewide data samples, each containing approximately 20,000 vehicle occupants, and complete annual crash report data. In summary, the ecological theory and multivariate analysis clarify opportunities for effective interventions to prevent highway fatalities and serious injuries.

Session Title: GEAR UP and Talent Search College Access Outcomes
Multipaper Session 676 to be held in La Jolla on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the College Access Programs TIG
Chair(s):
Johnavae Campbell, University of North Carolina, johnavae@email.unc.edu
An Evaluation of GEAR UP on College Readiness Outcomes
Presenter(s):
Megan France, James Madison University, francemk@jmu.edu
Jennifer Bausmith, The College Board, jbausmith@collegeboard.org
Abstract: This study evaluated the impact of Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) on college readiness outcomes using an innovative quasi-experimental design. GEAR UP is designed to increase the number of low-income students who are prepared to enter and succeed in postsecondary education. A method of identifying comparable schools was developed by creating a school-level composite score of SAT, PSAT/NMSQT, and AP participation and performance. Using these composite scores, 173 comparison schools were matched to 173 schools that had implemented GEAR UP. Given the goals of GEAR UP, statistically significant increases in participation and performance were expected in the GEAR UP school cohorts after the implementation of GEAR UP when compared to the comparable school cohorts. The results were encouraging with regard to GEAR UP, suggesting the program increases the number of low-income students who are prepared to enter and succeed in postsecondary education.
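A minimal sketch of the matching idea described above, not the authors' procedure and with hypothetical column names and values, might standardize the school-level indicators, average them into a composite, and pair each GEAR UP school with its nearest comparison school:

```python
# Illustrative sketch: standardized composite score and nearest-neighbor school matching.
# Column names and values are hypothetical.
import pandas as pd

schools = pd.DataFrame({
    "school_id": range(8),
    "gear_up":   [1, 1, 1, 0, 0, 0, 0, 0],
    "sat_mean":  [880, 910, 850, 905, 860, 990, 875, 940],
    "psat_part": [0.35, 0.42, 0.30, 0.40, 0.33, 0.55, 0.31, 0.48],
    "ap_part":   [0.10, 0.15, 0.08, 0.14, 0.09, 0.25, 0.08, 0.20],
})

indicators = ["sat_mean", "psat_part", "ap_part"]
z = (schools[indicators] - schools[indicators].mean()) / schools[indicators].std(ddof=0)
schools["composite"] = z.mean(axis=1)   # equally weighted standardized composite

treated = schools[schools.gear_up == 1]
pool = schools[schools.gear_up == 0]
matches = {
    row.school_id: pool.loc[(pool.composite - row.composite).abs().idxmin(), "school_id"]
    for row in treated.itertuples()
}
print(matches)  # treated school -> nearest comparison school (matching with replacement)
```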
Visualizing Postsecondary Education Outcomes: Using Path Code Analysis to Map Six Year Enrollment and Completion Patterns of Gear Up and Talent Search Participants
Presenter(s):
Laura Massell, Vermont Student Assistance, massell@vsac.org
Abstract: This session presents an adaptive use of Robinson's (2004) path code analysis to illuminate individual postsecondary pathways within a cohort of GEAR UP and Talent Search students, two federal programs intended to promote college readiness and access for economically disadvantaged youth. Each student's semester-by-semester college-going behavior was assigned a numerical code and then charted over a six-year period. The result allows for a nuanced illustration of when and where students enter college, transition from one institution to another, complete a degree or degrees, and/or depart the postsecondary education system. The visual representation of pathway data makes the complex phenomenon of student enrollment patterns comprehensible to a variety of stakeholders. Moreover, the result is a descriptive-sequential-temporal representation which can be sorted in various ways, visually or statistically, to represent a number of enrollment phenomena, including how pre-college student characteristics or institutional characteristics influence postsecondary outcomes.
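A simplified sketch of the path-coding idea, not Robinson's or the presenter's actual coding scheme, might assign one status digit per semester and tabulate the resulting pathway strings:

```python
# Simplified sketch of semester-by-semester path coding; the status codes and
# student records below are hypothetical.
from collections import Counter

# One digit per semester: 0 = not enrolled, 1 = two-year institution,
# 2 = four-year institution, 3 = completed a degree.
students = {
    "A": [2, 2, 2, 2, 2, 2, 2, 3],   # direct four-year entry, degree in year four
    "B": [1, 1, 2, 2, 2, 0, 2, 2],   # two-year to four-year transfer with a stop-out semester
    "C": [0, 0, 1, 1, 1, 1, 0, 0],   # delayed entry, departure without a degree
}

path_codes = {sid: "".join(str(s) for s in statuses) for sid, statuses in students.items()}
print(path_codes)                    # e.g. {'A': '22222223', ...}
print(Counter(path_codes.values()))  # frequency of each enrollment pathway
```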

Session Title: Emerging Perspectives in Evaluation Theory
Multipaper Session 677 to be held in Laguna A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Bianca Montrosse, Western Carolina University, bemontrosse@wcu.edu
Credible Judgment in Program Evaluation: Evaluators Encounter Stakeholders
Presenter(s):
Marthe Hurteau, University of Quebec, Montreal, hurteau.marthe@uqam.ca
David Williams, Brigham Young University, david_williams@byu.edu
Marie-Pier Marchand, University of Quebec, Montreal, mariepiermarchand@hotmail.com
Abstract: For an evaluation to successfully make credible judgments of the quality of a program, it should be valid and believable to stakeholders. This presentation summarizes research conducted to develop a coherent model of required elements and processes in evaluations to generate such credible judgments. The first presenter will briefly summarize this model and associated research results, focusing on the essential encounter between the evaluator and stakeholders. The specific contributions of the stakeholders will be highlighted. The second presenter will illustrate the model and results using a case story. Both presenters will emphasize the importance of understanding and responding to multiple stakeholders' values in facilitating valid and credible judgments by them rather than imposing a judgment based only on values of external evaluators. Planned steps for future research will be offered and discussion invited.
Emergent, Investigative Evaluation: Theory, Development, and Use in Evaluation Practice
Presenter(s):
Nick Smith, Syracuse University, nlsmith@syr.edu
Abstract: The prevalent view of evaluation as the use of pre-ordinate, confirmatory studies to assess program impact overshadows a powerful alternative, the use of emergent, investigative evaluations in natural settings to uncover unknown influences, hidden value, and overlooked alternatives. Further, emergent, investigative approaches are better suited to the evaluation of dynamic, developmental, or largely unknown programs. This paper reviews the current state of work on emergent, investigative evaluation approaches, addressing issues of (1) conditions of use, (2) types of designs, (3) the emergent design process and relevant decision-rules, (4) the nature of evidence produced, and (5) criteria for judging the quality of emergent, investigative evaluations. Prior work in evaluation is reviewed as well as related approaches employed outside evaluation and characterized variously as 'developmental research', 'design research', and 'design-based research'. Emergent, investigative evaluation approaches are also contrasted with somewhat related approaches such as formative assessment and developmental evaluation.

Session Title: Strategic Planning: Building Capacity for Evaluation During the Planning Process
Demonstration Session 678 to be held in Laguna B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Disabilities and Other Vulnerable Populations TIG
Presenter(s):
Paula Kohler, Western Michigan University, paula.kohler@wmich.edu
June Gothberg, Western Michigan University, june.gothberg@wmich.edu
Rashell Bowerman, Western Michigan University, rashell.l.bowerman@wmich.edu
Jennifer Coyle, Western Michigan University, jennifer.coyle@wmich.edu
Abstract: This session will demonstrate the National Secondary Transition Technical Assistance Center's (NSTTAC) capacity-building strategic planning process and open source tools. NSTTAC provides capacity-building technical assistance to state and local education agencies and has conducted strategic planning with over 500 teams. In our work, we found that teams that do not plan for evaluation during the strategic planning process most often do not conduct evaluations at all, or conduct evaluations without enough intensity to identify the degree to which they met their anticipated outcomes. Participants in this session will learn how to lead stakeholders in a strategic planning meeting through data-based self-assessment, needs identification, and goal setting with an emphasis on evaluating outcomes. We will demonstrate how to empower stakeholders to ask themselves the questions that lead to increasingly sound evaluation practices. Discussion is encouraged to share challenges in preparing and empowering individuals new to the idea of evaluation.

Roundtable: Determining the Level of Collaboration of Community Transition Teams
Roundtable Presentation 679 to be held in Lido A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Disabilities and Other Vulnerable Populations TIG
Presenter(s):
Patricia Noonan, University of Kansas, pnoonan@ku.edu
Amy Gaumer Erickson, University of Kansas, agaumer@ku.edu
Abstract: Community transition teams (also referred to as transition councils) are typically composed of local, community-level representatives of schools, disability-related agencies and community organizations, families and students, and other stakeholders who join together to improve local transition services for youths with disabilities. The theory behind this model is that community transition teams will help students and families secure resources to accomplish their transition plans by improving the capacity of schools and communities to deliver better services through collaboration (Benz, Lindstrom, & Halpern, 1995; deFur, 1999; Noonan, Morningstar, & Gaumer, 2008). In order to measure collaboration within community transition teams, Noonan (2011) developed a 15-question survey (reliability .88) targeting research-based collaborative behaviors identified in prior research (Noonan, Morningstar, & Gaumer, 2008). Survey results feed directly into a rubric for scoring a team's level of implementation of collaboration. This session will review these measures and discuss the measurement of collaboration.

Session Title: CAPTURE: An Evaluation Platform for Health Promotion and Chronic Disease Prevention
Demonstration Session 680 to be held in Lido C on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Marla Steinberg, The CAPTURE Project, marlas@sfu.ca
Andi Cuddington, The CAPTURE Project, andi.cuddington@thecaptureproject.ca
Dayna Albert, The CAPTURE Project, daynaalbert@rogers.com
Abstract: The CAPTURE Project is creating a web-based platform to evaluate, inform and share chronic disease prevention programs. CAPTURE allows users to spend less time planning evaluations and managing data, and more time using the findings to improve programs. This demonstration will illustrate the many features of CAPTURE that support program planning and evaluation. Practitioners and evaluators can search for information on chronic disease prevention programs and tacit knowledge on implementing and evaluating programs. CAPTURE guides users through engaging stakeholders, focusing their evaluations, developing logic models, and selecting data collection tools. On-line data collection, data management, and data analysis features offer users a one-stop shop for all evaluation needs. While such shared measurement platforms are becoming increasingly common, their ability to automate this inherently human process is unclear. A series of questions will be posed in order to generate discussion on the value of a platform for supplementing evaluation practice.

Session Title: Improving Peer Review for High-Risk and Center-based Research
Multipaper Session 681 to be held in Malibu on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Israel Lederhendler, National Institutes of Health, lederhei@od.nih.gov
Strategies and Lessons Learned from Implementing External Peer Review Panels Online: A Case Example from a National Research Center
Presenter(s):
Daniela Schroeter, Western Michigan University, daniela.schroeter@wmich.edu
Kelly Robertson, Western Michigan University, kelly.robertson@wmich.edu
Chris Coryn, Western Michigan University, chris.coryn@wmich.edu
Richard Zinser, Western Michigan University, richard.zinser@wmich.edu
Abstract: As part of evaluating a national research center's effectiveness and performance on Government Performance and Results Act (GPRA) measures, peer review panels are conducted annually. The purpose of the panel studies is to assess (a) the relevance of the research to practice and (b) the quality of disseminated products. Traditionally, peer review panels are implemented in face-to-face environments; however, to increase the feasibility of the annual study for the sponsors, panelists, and evaluators, the panels are implemented using synchronous and asynchronous means of communicating online. Strategies used and lessons learned over three iterations of the panel studies are the focus of this presentation with specific attention to (a) training, calibrating, and preparing panelists for each study; (b) asynchronous independent rating procedures; and (c) effective synchronous deliberation procedures.
Can Traditional Research and Development Evaluation Methods Be Used for Evaluating High-Risk, High-Reward Research Programs?
Presenter(s):
Mary Beth Hughes, Science and Technology Policy Institute, m.hughes@gmail.com
Elizabeth Lee, Science and Technology Policy Institute, elee@ida.org
Abstract: Over the last several years, the scientific community has seen a growth of non-traditional research programs that aim to fund scientists and projects of a 'high-risk, high-reward' nature. To date, evaluations of these programs have continued to use standard evaluation methods such as expert review and bibliometrics. Use of these standard methods, however, is predicated on a set of assumptions that may not be valid in the context of high-risk, high-reward programs. This paper presents the logic underlying several standard evaluation methods, describes a typology of high-risk, high-reward research programs, and critically assesses where standard methods are applicable to high-risk, high-reward research programs and where they fail. Examples where applicable are drawn from recent evaluations of high-risk, high-reward research programs at the National Institutes of Health and the National Science Foundation.

Session Title: A Two-Year External Evaluation of a Federal Emergency Management Agency-Funded and State-Led Disaster Case Management Pilot Project in Response to Hurricane Ike
Multipaper Session 682 to be held in Manhattan on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Scott Cummings, Texas AgriLife Extension Service, s-cummings@tamu.edu
Abstract: This session will provide evaluators with information and best practices for evaluating recovery from a large-scale natural disaster. Disasters such as hurricanes, wildfires, tornadoes, flooding, drought, earthquakes, and tsunamis are a reality and are seemingly encountered more frequently. Although the media covers these disasters as they occur, and occasionally in the aftermath, relatively little is known about the long-term impact of disasters on people beyond the immediate impact reported in the media. Questions such as how long it really takes to recover, how people are impacted after the event, and what services are needed to help people recover are rarely publicized. It is important to evaluate the human impact and learn from these events. Thus, this session will identify best practices applicable to evaluating mitigation, response, and recovery efforts related to the previously noted disasters. Additionally, the complexities of developing, contracting, and implementing such efforts will be discussed.
The Storm After the Hurricane: A Multifaceted Approach to Evaluating the Recovery of Those Impacted by Hurricane Ike - The Federal Emergency Management Agency's Disaster Case Management Pilot Project
Billy McKim, Texas AgriLife Extension Service, bmckim@aged.tamu.edu
Scott Cummings, Texas AgriLife Extension Service, s-cummings@tamu.edu
Paul Pope, Texas AgriLife Extension Service, ppope@aged.tamu.edu
Shannon Degenhart, Texas AgriLife Extension Service, sdegenhart@aged.tamu.edu
In May 2009, a FEMA-funded and state-led disaster case management pilot (DCMP) project was launched and began providing long-term disaster case management (DCM) services to more than 30,000 individuals impacted by Hurricane Ike. An external evaluation team was contracted to determine the effectiveness of the DCMP project over a two-year period. The evaluation was a three-tiered, process- and outcome-based design: 1) quantitative analyses of program data from the Coordinated Assistance Network (CAN) and Tracking-at-a-Glance (TAAG) databases, 2) multiple focus groups and more than 800 face-to-face interviews with disaster case managers, and 3) a mail survey sent to 15,000 DCM clients to determine client perceptions of the usefulness, practicality, and success of the DCMP project. Because FEMA often collaborates with multiple government and non-government agencies to facilitate successful recovery efforts, sharing the design and implementation of this evaluation may contribute to future evaluations of disasters or emergency incidents.
Challenges and Successes of a Multifaceted Evaluation of the Federal Emergency Management Agency's Disaster Case Management Pilot Project in Response to Hurricane Ike
Scott Cummings, Texas AgriLife Extension Service, s-cummings@tamu.edu
Billy McKim, Texas AgriLife Extension Service, bmckim@aged.tamu.edu
Paul Pope, Texas AgriLife Extension Service, ppope@aged.tamu.edu
Shannon Degenhart, Texas AgriLife Extension Service, sdegenhart@aged.tamu.edu
The purpose of the disaster case management pilot (DCMP) project was to provide disaster case management to those impacted by Hurricane Ike. A multi-tiered evaluation of the DCMP project was implemented to determine the effectiveness of the project. Many challenges were related to decisions made by DCMP administrators prior to the onset of the project, including developing a project strategy without an evaluation strategy or evaluation team, not establishing a criterion measure of project success at the onset of the project, and failing to identify standardized variables and measures prior to project implementation. Successes included implementing a multi-tiered evaluation, establishing and maintaining a communication network across more than 15 autonomous organizations, overcoming federal privacy restriction barriers, and aggregating data from multiple databases with relational files. The presenters will share successes and challenges encountered throughout the project along with recommendations to improve future evaluations of disasters or emergency events.

Session Title: Ongoing Short-term Performance Measurement as a Predictor of Long-term Objectives Achievement by Government Programs: A Case Study
Expert Lecture Session 683 to be held in Monterey on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Nathalie Roy, Public Service Commission of Canada, nathalie.roy@psc-cfp.gc.ca
Presenter(s):
Maria Barrados, Public Service Commission of Canada, maria.barrados@psc-cfp.gc.ca
Abstract: There are great benefits to be derived from considering ongoing operational performance measurement as an important complement to the periodic evaluation function, designed to include benchmark indicators and short-term feedback linked to long-term program success. Such predictors can be regarded as lead indicators: if the problems they identify are left untreated, there will be a longer-term cumulative effect by the time the program is evaluated. Such lead indicators can be updated annually and provide cumulative data inputs to a program's evaluation. Under this approach, neither evaluation nor performance measurement would be considered separately; rather, they would function as an integrated whole. The benefits would include: annual feedback from lead indicators serving as an early warning system; more robust management accountability systems, based on timely assessment of performance; and findings from the in-depth evaluation providing a feedback loop to fine-tune the ongoing operational performance lead indicators.

Session Title: A Qualitative Study of the Effects of Performance Indicators on the Psychological Well-being of Public Employees: An Interview and Discussion With the Researcher
Panel Session 684 to be held in Oceanside on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
Abstract: Performance measurement systems have proliferated in recent years. Organizations of all kinds have implemented performance indicator systems to document worker performance, program outcomes and progress toward goals. The study presented in this alternative format session - a study of how teachers of Danish as a second language feel about performance measurement - endeavors to contribute to an understanding of the constitutive effects of performance management systems upon public employees, their practices, their definition of good work, and their well-being. In addition, this study takes a creative approach to understanding these quantitative systems by using qualitative inquiry to study their effects. In an interview format that will engage the audience, the findings of the study and the methodological implications will be addressed along with a discussion of the ways in which values are surfaced in this study context.
Public Employees’ Perceptions of Performance Measurement: What Values Surface When We Investigate Quantitative Measurement Systems Using Qualitative Methods?
Peter Dahler-Larsen, Syddansk University, pdl@sam.sdu.dk
Peter Dahler-Larsen will make a brief presentation about his work researching how teachers of Danish as a second language feel about performance measurement. Then, Leslie Goodyear will interview Peter about his work, the values inherent in it, the interesting methodological issues surfaced by using qualitative inquiry to investigate quantitative data collection systems, and what we, as evaluators, can learn from the findings of this study.
What Can We Learn About Performance Systems, Values, and Methods From This Study of Performance Measurement Systems?
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
After Peter Dahler-Larsen's brief presentation of the methods and findings of his study of how teachers of Danish as a second language feel about performance measurement, Leslie Goodyear will interview Peter about his work, the values inherent in it, the interesting methodological issues surfaced by investigating quantitative data collection systems using qualitative methods, and what we, as evaluators, can learn from the findings of this study. In the interview, she will bring her experience in evaluating Federal government programs, developing and supervising monitoring systems and her expertise in qualitative methods to bear on questions about values, methods and the effects of these performance measurement systems on people who participate in them.

Session Title: A Discussion on Measuring the Black Box of Services in Human Services Evaluation
Think Tank Session 685 to be held in Palisades on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Julie Murphy, Human Services Research Institute, jmurphy@hsri.org
Discussant(s):
Linda Newton-Curtis, Human Services Research Institute, lnewton@hsri.org
Abstract: Human service evaluators have successfully conducted many studies examining the development, implementation, and impact of programs on families. However, gaining a comprehensive picture of the services and supports that clients receive can be difficult when parent agencies refer clients to outside resources. This think tank provides a forum to discuss how to define and measure the 'black box' of external services considered a key contributor to the success of family outcomes. First, efforts to collect data on externally referred services are briefly described; this is followed by breakout discussions that allow attendees to share their own experiences in identifying key services, developing measures, and collecting services data. The goal of the session is to facilitate a discussion of the values inherent in deciding what services data should be collected and how, and to leave attendees with strategies that could enhance future evaluation efforts in this area.

Session Title: Innovative Approaches to Environmental Education Evaluation
Multipaper Session 686 to be held in Palos Verdes A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Cynthia Parry,  Cynthia F Parry Associates, cfparry@msn.com
Teachers' Values and Perspectives From Empowerment Evaluation of Environmental Education in Primary Schools in Southern Mexico
Presenter(s):
Karla E Atoche-Rodriguez, Deakin University, keat@deakin.edu.au
Edith Cisneros-Cohernour, University of Yucatan, Merida, cchacon@uady.mx
Maria Dolores Viga De Alva, National Polytechnic Institute, dviga@mda.cinvestav.mx
Abstract: Environmental education (EE) in Mexico aims to strengthen the relationship between school, home, and community in order to address the environmental problems they face. An important step toward this purpose is the development of a culture of evaluation within EE practice. Not just any evaluation is sought, however, but empowerment evaluation, which helps participants critically understand the values represented in the evaluation and whether those values are congruent with the values promoted by environmental education. This paper addresses the values and perspectives that emerged from an empowerment evaluation of an environmental education model in Yucatan, Mexico, conducted by primary school teachers. I argue that this presentation is both collaborative and innovative because it allows the sharing of knowledge about the environmental education field and its evaluation practice, particularly from the Mexican context.
Meeting the Challenge of Environmental Education Evaluation
Presenter(s):
Valerie Williams, University Corporation for Atmospheric Research, vwilliam@ucar.edu
Abstract: The May 2010 issue of Evaluation and Program Planning reflected on the state of evaluation in environmental education and identified some significant challenges. Chief among them are the lack of clear program objectives and an inability to conceptualize how the program works. Both can lead to evaluations that make claims that are difficult to substantiate, such as significant changes in student achievement levels or behavioral changes based on acquisition of knowledge. Though many of these challenges can be addressed by establishing the program theory and developing a logic model, claims of impact on larger societal outcomes are difficult to attribute solely to program activities. Consequently, contribution analysis may be a promising method for conceptualizing and demonstrating the impact environmental education programs have on societal outcomes. In this paper, the author reviews the feasibility of using contribution analysis as a way of evaluating the impact of an environmental science and education program.

Session Title: Moving Foundations Forward on Movement Building: The Role of Philanthropy
Think Tank Session 687 to be held in Palos Verdes B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Hanh Cao Yu, Social Policy Research Associates, hanh_cao_yu@spra.com
Discussant(s):
Barbara Masters, MastersPolicyConsulting, barbara@masterspolicy.com
Vivian Eng, Twenty-First Century Foundation, veng@21cf.org
Abstract: With increased interest in the role of philanthropy in building movements, a new surge of foundation investments has spawned growth in movement building work. In this think tank session, we will lay out a framework for discussion based on evaluations of several foundation initiatives focusing on infrastructure building, community organizing, and policy advocacy. These initiatives include the work of The California Endowment, Ford Foundation, 21st Century Foundation, and NoVo Foundation. After the initial theories and frameworks are laid out, we will share examples, raise issues, and invite participants to share their ideas and evaluation work to address the following questions: What are the key components of foundations' involvement in movement building? How should foundations be involved in movement building? What are promising practices by foundations that transcend traditional program silos, equalize power relationships, and maintain a steady focus over many years? What are key indicators of effective philanthropic practices in movement building?

Session Title: Online Learning and Technology-supported Instruction: What Works?
Multipaper Session 688 to be held in Redondo on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Talbot Bielefeldt,  International Society for Technology in Education, talbot@iste.org
Discussant(s):
Christine Paulsen,  Concord Evaluation Group, cpaulsen@concordevaluation.com
Technology Supported Instruction: Best Practices for At-Risk and Underserved Learners
Presenter(s):
Kevin Murphy, University at Albany, SUNY, km989754@albany.edu
Meghan Deyoe, University at Albany, SUNY, morris.mlm@gmail.com
Kathy Gullie, University at Albany, SUNY, kp9854@albany.edu
Dianna Newman, University at Albany, SUNY, dnewman@uamail.albany.edu
Dean Spaulding, The College of Saint Rose, spauldid@strose.edu
Victoria Coyle, University at Albany, SUNY
Holly Meredith, University at Albany, SUNY
Tim Julio McLaughlin, University at Albany, SUNY
Abstract: This paper summarizes best practices in evaluation and programmatic findings identified via a cross-site analysis of technology-supported programs for multiple levels of learners. Programs reviewed included adult ESL learners, elementary minority youth, academically failing urban middle school students, and high-need STEM learners. The paper will address the value of common and unique performance indicators that drill down from state indicators to local and programmatic outcomes. Fidelity of assessment as well as cross-site analysis techniques will be addressed. A summary of cross-site results, as well as best practices identified across the projects, will also be presented.
Face-to-face Training Versus Online Learning: A Comparative Case Study With California Community Organizations
Presenter(s):
Jeanette Treiber, University of California, Davis, jtreiber@ucdavis.edu
Travis Satterlund, University of California, Davis, tdsatter@ucdavis.edu
Abstract: According to conventional wisdom, face-to-face training events are more effective for community organizations than training delivered via e-learning modes such as webinars. Conventional wisdom also suggests that webinars are less expensive than face-to-face training. However, research gives evidence that both assumptions may need to be re-evaluated. Like many statewide service providers, the Center for Evaluation and Research (CER) at UC Davis is concerned about effective training delivery to the more than 100 local organizations it serves. In May 2011, the Center will conduct a statewide training on survey design for project directors and evaluators of health departments and community-based organizations. Three face-to-face training events will be compared to an e-learning event with the same content. A pre- and post-training knowledge test, a training satisfaction survey, key informant interviews with a sample of participants, and a cost comparison will be performed. The paper will make recommendations based on the results.

Session Title: Best Practices Analysis of Learning Environments
Demonstration Session 689 to be held in Salinas on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Rhoda Risner, United States Army, rhoda.risner@us.army.mil
Thomas Ward II, United States Army, thomas.wardii@cgsc.edu
Abstract: "How well are we doing?" This is the question instructors, accrediting bodies and institutional leaders (stakeholders) want answered. Having an empirical method to answer that question is of indescribable value to each of these stakeholders. The Quality Assurance Office along with the Faculty Development Division of the US Army Command and General Staff College researched, developed, and codified eight adult learning environment best practice areas that encompass the College's philosophies of adult learning environments, critical thinking, and collaboration. This presentation demonstrates using adult learning environment best practices to triangulate data collected from all learning environments (classrooms) across the college. This effort allows the College to do Meta-analysis of its learning environments.

Session Title: Implementing the Bellwether Methodology: Lessons From Two Fast-Paced, Real-Time Advocacy Evaluations
Demonstration Session 690 to be held in San Clemente on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Steve Mumford, Organizational Research Services, smumford@organizationalresearch.com
Joelle Cook, Organizational Research Services, jcook@organizationalresearch.com
Abstract: Through the Bellwether Methodology, developed by the Harvard Family Research Project, evaluators conduct structured interviews with individuals in the public and private sectors whose positions require that they track a broad range of policy issues. To help elicit authentic and unprompted responses regarding policy priorities, bellwethers are not informed in advance of the interview's specific policy focus. This approach helps determine where an issue stands on the policy agenda, providing actionable data to advocates and funders to inform messaging and advocacy strategies as well as data on changes in political will over time. ORS applied the methodology to real-time evaluations of two advocacy campaigns examining political will for public library funding and education reform among local and state policymakers, respectively. This session will share lessons related to sampling, protocol design, interview scheduling and set-up, reporting, use of the method at multiple time points, and important considerations for the overall approach.

Session Title: Wrestling With Clients to Strengthen Our Evaluation Practice
Multipaper Session 691 to be held in San Simeon A on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Loretta Kelley,  Kelley, Petterson, and Associates, lkelley@kpacm.org
Using Client-Centered Feedback to Assist in the Process of Evaluator-Centered Meta-Evaluation
Presenter(s):
Rebecca Eddy, Cobblestone Applied Research & Evaluation Inc, rebecca.eddy@cobblestoneeval.com
Namrata Mahajan, Cobblestone Applied Research & Evaluation Inc, namrata.mahajan@cobblestoneeval.com
Todd Ruitman, Cobblestone Applied Research & Evaluation Inc, todd.ruitman@cobblestoneeval.com
Abstract: Given that the evaluation profession in the United States continues to be largely unregulated, it is important for evaluators to produce evidence of conducting high-quality work in the field. It is of primary importance that we have our own work evaluated, not only to satisfy clients' needs but also to maintain integrity in the profession. Reineke and Welch (1986) emphasize that meta-evaluation has two functions: 1) to help evaluators improve their practice (evaluator-centered); and 2) to help clients make better use of evaluation information (client-centered). Using criteria established in Scriven's revised Meta-Evaluation Checklist (2011), we embarked on our own study of an evaluator-centered meta-evaluation process for the primary purpose of improving our own practice. Clients served as participants and were asked to provide feedback on their evaluation/research experience, including communication, study process, and reporting. We will discuss the importance of systematically evaluating one's own work using the client perspective as one source of data.
Overcoming Objections to Evaluation: It is No Longer a Luxury
Presenter(s):
Sheri Jones, Measurement Resources, scjones@measurementresourcesco.com
Abstract: Increasing the evaluation capacity of mission-driven organizations has become a popular topic among academics and practitioners. Yet many organizations are hesitant to invest in evaluation because they lack the time, money, or staff expertise. This presentation provides background information on using evaluation results to create measurement cultures, along with examples of how these efforts help overcome barriers to evaluation in mission-driven organizations. Next, it presents the results of a preliminary study examining the extent to which government and nonprofit organizations are implementing these measurement cultures. The significant relationships between measurement culture strength and organizational outcomes such as revenues, organizational efficiencies, internal and external relations, and successful organizational change will be presented. Specifically, results indicate that organizations with stronger measurement cultures are 60 percent more likely to report experiencing positive organizational outcomes. Discussion will focus on the implications of these results for internal and external evaluators as well as the need for more in-depth research.

Session Title: Bang for the Buck: Cost-Effectiveness Evaluations in K-12 Education
Demonstration Session 692 to be held in San Simeon B on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Agata Jose-Ivanina, ICF Macro, ajose-ivanina@icfi.com
Michael Long, ICF Macro, mlong3@icfi.com
Abstract: In this session, the presenters will offer a set of principles to guide successful cost-effectiveness analyses. An emphasis will be placed on making this content practical and concrete, and all principles will be illustrated using real-life examples of educational studies the presenters have completed. Specific topics to be covered include: the difference between cost-benefit and cost-effectiveness analysis, and when each is appropriate; the situations in which cost-effectiveness studies are most likely to be successful; which costs and benefits should be considered in an analysis; how to calculate costs and benefits; and strategies for conducting cost-effectiveness studies on a limited budget. Participants will leave with an understanding of the theoretical framework of cost-effectiveness analysis and the knowledge of practical issues they need to consider before conducting such a study.

Roundtable: Analyses That (Almost) Write Themselves: Using Crosswalks, Matrices, and Schematics to Guide Multi-Method Evaluations and Present Study Findings
Roundtable Presentation 693 to be held in Santa Barbara on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Eden Segal, Westat, edensegal@westat.com
Abstract: This session will address the use of crosswalks, matrices, and other schematic data displays to organize and exhibit evaluation components. Experienced evaluators routinely use schematic approaches to compare, portray, and display complex relationships, particularly when conclusions rely on qualitative or mixed-method, multi-source data. However, novices can find the methodological importance of such visuals hard to grasp because examples are difficult to locate (e.g., Miles & Huberman, 1994; Tufte, 2001). This session will demonstrate how these visual devices can be used to organize evaluation questions, data elements, instrument questions, and variables or indicators, and, ultimately, to ease the writing and presentation of study results. Participants will review examples, share other sources, and discuss the pros and cons of using schematics for a range of evaluation purposes and with various audiences.

Session Title: Meta-analysis as a Valuable Tool in Driving Quality Evaluations
Multipaper Session 694 to be held in Santa Monica on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Karen Larwin,  Youngstown State University, khlarwin@ysu.com
The Value of Meta-Analytic Research When Formulating an Evaluation Plan: Implications for Return on Investment in a Federally Funded Wellness Initiative
Presenter(s):
Karen Larwin, Youngstown State University, drklarwin@yahoo.com
David Larwin, Kent State University at Salem, dlarwin@yahoo.com
Abstract: This paper demonstrates the value of conducting a thorough evaluation of existing research when developing a plan to establish the return on investment (ROI) of program activities. For this project, evaluators used meta-analysis to synthesize the available research examining ROI for corporate-based wellness initiatives. The results from this meta-analytic investigation were used to develop a comprehensive ROI strategy. The resulting multi-level approach to ROI demonstrated to stakeholders that the ROI for the original initiative had systemic implications and was sustainable. The presentation of the results from the meta-analysis encouraged the stakeholders to agree to incorporate more indicators of ROI than were originally proposed, and improved the credibility and value of the evaluation plan.
A Powerful Method for the Evaluator's Toolbox: Using Meta-Analysis with Small Sample Sizes
Presenter(s):
Stephanie Beane, Southeast AIDS Training and Education Center, sbeane@emory.edu
Shenee Reid, Southeast AIDS Training and Education Center, sreid4@emory.edu
Rebecca Culyba, Southeast AIDS Training and Education Center, rculyba@emory.edu
Abstract: The purpose of this presentation is to illuminate novel ways to apply effect size and meta-analytic techniques when facing limitations in evaluation data analysis. The presenters will address why significance tests are often inappropriate for small samples and discuss suitable applications of meta-analysis for primary data. This presentation is appropriate for novice learners and will detail uses of meta-analysis with nested data sets of small sample sizes. While the focus of this presentation is on the analysis method used in a longitudinal evaluation, presenters will highlight training intervention effects for rural Ryan White clinic staff serving patients with HIV/AIDS. Results of a pre- and post-training knowledge and skills assessment will be shared to demonstrate the utility of effect sizes, meta-analysis applications with primary data, random versus fixed effects models, forest plots, and Comprehensive Meta-Analysis software.

Session Title: Challenges in Evaluating Development Cooperation Programs and Community Development Initiatives
Multipaper Session 695 to be held in Sunset on Friday, Nov 4, 1:35 PM to 2:20 PM
Sponsored by the Business and Industry TIG
Chair(s):
Kate Rohrbaugh,  Independent Project Analysis, krohrbaugh@ipaglobal.com
Ex-ante Evaluation of the Program "Skills Development for Climate and Environment Business: Green Jobs" in South Africa
Presenter(s):
Stefanie Krapp, German International Cooperation, stefanie.krapp@giz.de
Abstract: The evaluation unit of the German development cooperation has conducted an ex-ante evaluation of the new program 'Skills Development for Climate and Environmental Business - Green Jobs' in South Africa as a new instrument within its evaluation system. The objectives of this evaluation, which was carried out by an independent institute in close cooperation with the appraisal mission, were to: provide information regarding the needs structure of the stakeholders involved in the program (needs analysis); conceptualize the design of the program by establishing a results framework and developing the results chain (conceptual design); assess the effectiveness of the program and the sustainability of its impacts (impact assessment); document the current situation in the impact fields of the program (baseline); and develop a framework for continuous results monitoring (M&E concept). The presentation will discuss the experiences in carrying out these tasks and the added value of the instrument for better program planning.
The Social License to Operate: Evidence From Extractive Industries Evaluations
Presenter(s):
Cherian Samuel, International Finance Corporation, csamuel@ifc.org
Abstract: In the case of Extractive Industries (EI) projects, Corporate Social Responsibility (CSR) initiatives focused on community development are fundamental for getting and keeping the social license to operate. This paper presents evidence regarding the social license aspects of EI projects, based on recent evaluations by the Independent Evaluation Group of the International Finance Corporation (IEG-IFC) of four private sector projects that are broadly representative of the portfolio. The four projects are Kupol Gold (Russia, 2010), Marlin Gold (Guatemala, 2009), the Chad-Cameroon Pipeline (Chad and Cameroon, 2009), and Cairn Energy (India, 2009), with the first two being gold mining projects and the last two oil development projects. In all four projects, IFC provided technical assistance (Advisory Services) grants to project communities that ultimately enhanced IFC's reach and development impact.
