2011

Session Title: Identifying, Articulating and Incorporating Values in a Program Theory
Demonstration Session 301 to be held in Avalon A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
Sue Funnell, Performance Improvement, funn@bigpond.com
Abstract: Program evaluations are increasingly driven by program theories, yet a program theory can be constructive, unhelpful, or even destructive for an evaluation. The processes used to conceptualize and portray a program theory can influence whether it gives rise to useful evaluation questions, and can affect whose voices are heard when identifying evaluation criteria and making judgments about a program's success and its wider effects. This session will show how workshops can be used alongside other techniques to develop a program theory that incorporates a range of value perspectives and poses useful evaluation questions. It will demonstrate questions that can be used, how to arrange the answers into an outcomes chain, the Ideas Writing technique for identifying different perspectives on what constitutes success, how to deal with divergent views, and how to incorporate unintended outcomes.

Session Title: Extreme Genuine Evaluation Makeovers (XGEMs)
Demonstration Session 303 to be held in California A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Jane Davidson, Real Evaluation Ltd, jane@realevaluation.com
Abstract: "Value for money" applies not just to programs and policies, but to evaluations themselves. This demonstration will show how evaluations can be designed and implemented in ways that deliver real value for money for their clients. The demonstration uses humor and metaphor to guide the audience through the main species of waste-of-money evaluation, their natural habitats and distinguishing features. These evaluations frequently lack incisive evaluation questions; get lost in the details; skip over the crucial 'values' step; uncritically accept stated objectives as the only evaluative criteria; fail to adequately triangulate and weave sources of evidence; and toss causation into the 'too hard basket'. The session will demonstrate some practical guidelines for doing Extreme Genuine Evaluation Makeovers (XGEMs). The emphasis is on being realistic and humble about what is feasible, but resisting the urge to do non-genuine evaluation when the needs and the constraints are challenging.

Session Title: Modern Western Evaluation Imaginary Meets the Pascua Yaqui: An Interview With Fileberto Reynaldo Lopez - By Peter Dahler-Larsen
Expert Lecture Session 304 to be held in California B on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Peter Dahler-Larsen, University of Southern Denmark, pdl@sam.sdu.dk
Presenter(s):
Fileberto Reynaldo Lopez, University of Arizona, lopezf1@email.arizona.edu
Abstract: In the modern Western world, most evaluation is carried out based on a fundamental world view or "evaluation imaginary" (Schwandt) that is difficult for most evaluators to see, discuss, or thematize, simply because this world view to an increasing extent defines what evaluation is and can be and how it should be carried out. Interesting views or reflections that allow us to thematize the modern evaluation imaginary may be found if we step off the beaten path defined by the modern evaluation imaginary itself. I have had the privilege to meet one individual from the Pascua Yaqui of the Sonoran Desert. He is with us today. His name is Dr. Fileberto Lopez. Fileberto has agreed to be interviewed, and we hope that through the interview we will all learn a bit about the Modern Western Evaluation Imaginary, especially as seen from the outside.

Session Title: Exploring the Value of Careers in Evaluation
Expert Lecture Session 305 to be held in California C on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
John LaVelle, Claremont Graduate University, john.lavelle@cgu.edu
Presenter(s):
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
Discussant(s):
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Abstract: The demand for evaluation and evaluation services has increased dramatically over the past decade. As evaluation practice has blossomed worldwide, universities, evaluation professional associations, government agencies, foundations, and non-profit and for-profit organizations have become actively involved in providing professional development workshops, certificates and professional designations, and master's and doctoral degrees in evaluation. What is the value of the careers that can result from this advanced training and education in evaluation? Professor Stewart Donaldson will provide answers to this question as well as explore a range of career development issues that are directly relevant for graduate students and working professionals interested in working in the diverse transdisciplinary field of evaluation.

Session Title: Evaluation Use and Knowledge Translation: An Exchange for the Future
Expert Lecture Session 306 to be held in Pacific A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Presidential Strand
Chair(s):
Gail Barrington, Barrington Research Group Inc, gbarrington@barringtonresearchgrp.com
Presenter(s):
Melanie Barwick, The Hospital for Sick Children, melanie.barwick@sickkids.ca
Discussant(s):
Daniel Stufflebeam, Western Michigan University, dlstfbm@aol.com
Abstract: Evaluators have long valued evaluation use, both by decision makers and by practitioners in the contexts at hand. But what about the broader community? The concept of knowledge translation addresses the steps between the creation of new knowledge and its application to outcomes, including benefits for citizens, effective services and products, and strengthened social systems. It explores the exchange of knowledge between researchers and users to accelerate the knowledge-to-action process. Dr. Melanie Barwick is a Registered Psychologist and Health Systems Research Scientist at The Hospital for Sick Children in Toronto, Ontario. Since 2001 she has implemented an outcome measure in 117 children's mental health service provider organizations and provided training to over 5,000 practitioners. In this practice context she studies innovative health knowledge translation strategies and has developed the Scientist Knowledge Translation Training Program. She currently leads a 5-year innovative project in Knowledge Translation for Child and Youth Mental Health.

Session Title: Using PhotoVoice for Participatory Community Evaluation
Demonstration Session 307 to be held in Pacific B on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Qualitative Methods TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Amanda Purington, Cornell University, ald17@cornell.edu
Jacqueline Davis-Manigaulte, Cornell University, jad23@cornell.edu
Jane Powers, Cornell University, jlp5@cornell.edu
Abstract: This demonstration will provide an overview of how the New York State ACT for Youth Center of Excellence utilizes PhotoVoice methodology for evaluation with youth and adult staff from programs designed to reduce sexually transmitted infections, HIV infection, and unintended pregnancy among youth while promoting their optimal sexual health. Youth and adult staff are using PhotoVoice to evaluate issues in their communities that impact youth sexual health. Youth leaders, in collaboration with adult program staff, are charged with conducting this community evaluation, interpreting findings, and developing action plans to address the issues highlighted by the evaluation. Engagement in the evaluation process ensures that youth perspectives are included and valued in these assessments and helps youth see their potential roles as catalysts for change. Participants in this demonstration will have the opportunity to learn about the PhotoVoice process and explore its potential use for evaluation within their own work settings.

Session Title: Structural Equation Modeling as a Valuable Tool in Evaluation
Multipaper Session 308 to be held in Pacific C on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Frederick L Newman, Florida International University, newmanf@fiu.edu
Structural Equation Modeling with Cross-lagged Paths to Evaluate Alcoholics Anonymous' Effect on Drinking
Presenter(s):
Stephen Magura, Western Michigan University, stephen.magura@wmich.edu
Charles M Cleland, New York University School of Nursing, chuck.cleland@nyu.edu
Abstract: Evaluation studies consistently report correlations between Alcoholics Anonymous (AA) participation and less drinking or abstinence. Randomizing alcoholics to AA or non-AA conditions is impractical and difficult. Unfortunately, non-randomized studies are susceptible to artifacts due to endogeneity bias, where variables assumed to be exogenous ('independent variables') may actually be endogenous ('dependent variables'). A common artifact is reverse causation, where reduced drinking leads to increased AA participation, the opposite of what is typically assumed. The paper will present a secondary analysis of a national alcoholism treatment data set, Project MATCH, which consists of multi-wave data on AA participation and severity of drinking over a 15-month period (3-month intervals). An autoregressive cross-lagged model was formulated and indicated the predominance of AA effects on reduction of drinking, not the reverse. The presentation will be accessible to evaluators without advanced statistical training. Supported by R21 AA017906.
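For orientation, a minimal two-variable autoregressive cross-lagged specification of the kind this abstract describes (illustrative notation; the fitted Project MATCH model may differ) is:

$$AA_{t+1} = \alpha_1 AA_t + \beta_1 D_t + \varepsilon_{t+1}^{AA}, \qquad D_{t+1} = \alpha_2 D_t + \beta_2 AA_t + \varepsilon_{t+1}^{D}$$

where $AA_t$ and $D_t$ are AA participation and drinking severity at wave $t$, $\alpha_1$ and $\alpha_2$ are autoregressive (stability) paths, and the two cross-lagged paths are estimated simultaneously: a dominant $\beta_2$ (earlier AA participation predicting later drinking) relative to $\beta_1$ (earlier drinking predicting later AA participation) is evidence against the reverse-causation artifact.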
A Tool for Model Selection: Assessing the relative goodness of fit of an evaluation model using the Akaike Information Criterion (AIC)
Presenter(s):
Shelly Engelman, The Findings Group LLC, shelly@thefindingsgroup.com
Tom McKlin, The Findings Group LLC, tom@thefindingsgroup.com
Abstract: As the use of statistical models to assess how well program activities predict outcomes increases among evaluators, the need to detect best-fitting models grows in importance. A 'good' statistical model not only strengthens an evaluation by identifying the variables that have the strongest impact on outcomes, but also has the potential to add theoretical value to other programs and evaluation contexts. R-square is a commonly used statistic for evaluating model fit in multiple regression analysis. Adding variables to a regression model often increases the R-square; however, a model chosen simply because it has an incrementally higher R-square often lacks parsimony and is difficult to replicate. Using a biology-based learning intervention as an example, we highlight the Akaike Information Criterion (AIC) as a tool for comparing models and selecting the most robust, parsimonious model. The advantage of using AIC is that it penalizes attempts at over-fitting a model and allows evaluators to compare multiple models before selecting the best one.
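For reference, the criterion itself is standard (this is the textbook definition, not anything specific to this paper):

$$\mathrm{AIC} = 2k - 2\ln \hat{L}$$

where $k$ is the number of estimated parameters and $\hat{L}$ is the model's maximized likelihood. The candidate model with the lowest AIC is preferred; the $2k$ term is the penalty that keeps added predictors from automatically "improving" the fit the way an ever-rising R-square does.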

Session Title: Evaluation Coaches: Designing and Evaluating for the Future
Panel Session 309 to be held in Pacific D on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
Discussant(s):
Katrina Bledsoe, Education Development Center Inc, katrina.bledsoe@gmail.com
Abstract: Evaluation Coaching is an evolved approach to evaluation derived from the spirit and lessons of Empowerment, Participatory, and Collaborative Evaluation. The approach directly supports the needs of nonprofits and funders to be educated about and engaged in evaluation. With pressure on organizations to provide results from rigorous evaluation, funders have resorted to engaging external evaluators to assess the performance and process of their initiatives and portfolios. Internal evaluators, meanwhile, have been tarnished with the view that their work could be tainted by their relationship with their organization. Evaluation Coaching differs from traditional evaluation approaches in that it focuses on organizational evaluation capacity building. The critical friendship between the Evaluation Coach and the organization spreads beyond one or two projects, and the educational methods are highly participatory. The presenters explain this evolutionary approach with a focus on development of internal evaluation capacity, rigor of evaluation design and implementation, and use by organizations; a community-based non-profit evaluation coach will discuss the implications for the work.
Desperately Seeking an Effective Capacity Building Model for Useful Evaluation
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
Evaluation Coaching is an evolutionary extension of Empowerment Evaluation. It is the result of over four years of experimentation with improving evaluation capacity at the Foundation as well as with nonprofits, and it uses the framework of program theory-based evaluation to support the process. This presentation covers the development of Evaluation Coaching and the rationale for it, along with discussion of the process ranging from program design through evaluation. We share evidence of change in the quality of evaluation-related information provided to the Foundation by its grantees, as well as early evidence of expansion of evaluation use beyond the individual funded program within these grantees. Future directions and implementation of this approach to evaluation will be discussed.
Desperately Seeking an Outcome-based Evaluation Design That has Required Rigor
Jack Galmiche, Nine Network of Public Media, jgalmiche@ketc.org
To meet internal needs to understand whether a nonprofit effects change, organizations need to focus on outcomes. This presentation covers the history of the Nine Network's engagement with evaluation and its quest to develop a process that allows for organization-wide outcome assessment. We share the process of developing the "Outcomes Coach" position and its relationship with the organization, as well as the work done to develop internal evaluation capacity. We also discuss the impact of implementing an Outcomes Coaching framework on organizational program design, decision making, and assessment of outcomes tied to program and organizational improvement.

Roundtable: The Use of Coaching, Co-authorship, and Mixed Media to Structure and Support Data Use Within a Teacher Induction Program
Roundtable Presentation 310 to be held in Conference Room 1 on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Paul St Roseman, DataUse Consulting Group, paul@mydatause.com
Rachelle Rogers-Ard, Teach Tomorrow in Oakland, rachelle.rogers-ard@ousd.k12.ca.us
Abstract: This roundtable presents approaches that support 'data use' within the values framework, operational apparatus, and decision-making processes of a teacher induction program located in Oakland, California. The case example will demonstrate how: (1) administrative coaching supports efforts to develop, interpret, and utilize evaluation products; (2) co-authorship and presentations are utilized as a process for data analysis; and (3) mixed media and web-based resources are utilized to facilitate collaboration. This presentation is most appropriate for evaluation practitioners who collaborate with administrators and their staff to design, implement, sustain, and utilize evaluation products.

Roundtable: Challenges and Solutions Associated With Participant Recalibration or Reprioritization of Self-Report Data
Roundtable Presentation 311 to be held in Conference Room 12 on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Cady Berkel, Arizona State University, cady.berkel@asu.edu
Angelica Tovar-Huffman, Phoenix Children's Hospital, atovarhuffman@phoenixchildrens.com
Abstract: Programs aimed at improving health-related behaviors often not only change the targeted program outcomes but also create response shifts in participants' perspectives on those outcomes. In such cases, self-report data may appear to indicate null or iatrogenic program effects because participants' gains in understanding of the construct equal or outweigh their gains in the construct itself. For example, in the CareConnect AZ program at Phoenix Children's Hospital, participants initially reported high levels of communication with health providers, which declined once they were exposed to new skills for communicating with providers. This is a major problem for evaluators, who must untangle 'true' change from apparent change due to recalibration or reconceptualization. In the proposed roundtable session, we will discuss the challenges we faced with recalibration of self-report responses. Attendees will be invited to share similar challenges and discuss different strategies for dealing with this problem.
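One compact way to state the problem (illustrative notation, not necessarily the presenters' formulation): if a program produces true change $\Delta_{true}$ in a construct but also shifts the internal standard participants use to rate themselves by $\Delta_{recal}$, the observed self-report change is approximately

$$\hat{\Delta}_{observed} \approx \Delta_{true} - \Delta_{recal}$$

so whenever recalibration equals or exceeds true change, the data appear null or iatrogenic even when the program worked.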

Session Title: Evaluating International Trafficking Programs: The Role of Evaluability Assessments in Determining Program Readiness and Documenting Program Strategy Evolution
Panel Session 312 to be held in Conference Room 13 on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Beth Rabinovich, Westat, bethrabinovich@westat.com
Discussant(s):
Casey Branchini, United States Department of State, BranchiniCA@State.Gov
Abstract: Evaluability assessment (EA) has traditionally been used to determine the logical basis of a program; its readiness for implementation, outcome, or impact evaluation; the changes needed to increase its readiness; and the evaluation approaches most suitable for measuring program performance and outcomes. In this session we discuss how EAs can help track the initiation of new programs in international trafficking (often using models borrowed from other disciplines) as well as document the development and evolution of program strategies. The presentations in this session will focus on recent EAs of international trafficking programs sponsored by the Department of State, Office to Monitor and Combat Trafficking in Persons (G/TIP), and on how the three-pronged program strategy (prosecuting trafficking offenders, protecting victims, and preventing trafficking) is being implemented or the changes required for implementation. Case studies (countries identified by region only) discussing program challenges and successes will be presented.
Introduction and Description of the Methodological Approach Used for the Evaluability Assessments
Beth Rabinovich, Westat, bethrabinovich@westat.com
Beth A. Rabinovich, Ph.D. is a Westat Senior Study Director with more than 25 years of experience conducting evaluations of programs for children, adolescents, and older adults. She currently directs a project for the U.S. State Department's Office to Monitor and Combat Trafficking in Persons (G/TIP) that includes one impact evaluation and two evaluability assessments of anti-trafficking projects. She also directed a project that included three evaluability assessments of G/TIP anti-trafficking projects and provided technical assistance. Dr. Rabinovich directed an evaluation of the Department of Labor's child labor technical cooperation program. She has provided technical assistance to local agencies in the U.S. on performance measurement approaches to determine the effectiveness of program operations, appropriate targeting, and self-reported outcomes. Dr. Rabinovich holds a doctorate in Human Development from the University of Maryland and, in addition to her research, has taught as an adjunct at the University of Maryland, University College for 20 years.
Assessing Trafficking Issues Associated with Foreign Domestic Workers in the Middle East and Northern Africa Region
Frances Gragg, Westat Consultant, francesgragg24@yahoo.com
Frances Gragg, M.A., is a consultant with more than 25 years' experience in research and evaluation design and methodology. She conducted an evaluability assessment of one of the State Department programs in the Middle East/Northern Africa (MENA) region. This program focused on trafficking involving domestic migrant workers. She examined data availability (existence of baseline data; accessibility of court records, determinations of trafficking, visas, and employment contracts; ongoing employment conditions; outcome data following repatriation) and available local resources to conduct process and/or outcome evaluations; developed the site assessment; provided technical assistance on data collection and the use of current data; and prepared cross-site protocols and technical assistance documents. She has recently provided technical assistance to other Federal program grantees on developing sound evaluations and presenting evaluation findings that can be used both for accountability and marketing.
Assessing Asian Programs Addressing Forced Labor and Child Sex Trafficking
Tamara Daley, Westat, tamaradaly@westat.com
Tamara Daley, Ph.D. is a Senior Study Director at Westat, with a background in special education and cross-cultural health and mental health issues. Her research and evaluation background includes a range of methodologies, from ethnographic fieldwork to large scale surveys with nationally representative samples of children. In 2010, Dr. Daley conducted evaluability assessments of two grantees of the State Department G/TIP program, both located in Asia. One project focused on bonded laborers; the other focused on child sex trafficking. Each evaluability assessment involved a site visit, extensive document review and a comprehensive report, including recommendations for possible evaluation and ways to improve performance indicators already in place. Dr. Daley is also co-project investigator of a project in India, under which she is currently evaluating a parent training program for parents of children with autism.
Assessing Prevention Programs in Central America by Changing Norms Through Peer-to-Peer Training
Jessica Harrell, Westat, jessicaharrell@westat.com
Jessica Harrell has 6 years of experience conducting and supporting human services research in the areas of human trafficking, child labor, and child welfare. She conducted an evaluability assessment for one of G/TIP's international anti-trafficking programs in Central America. This program focused on prevention by way of changing cultural attitudes to reduce the demand for prostitution among young men. For the evaluability assessment, she created a logic model of the program, reviewed program documents and reports, interviewed program staff, observed the program in action, and developed the site assessment report. She also created a technical assistance document on how to create and use logic models for G/TIP.

Session Title: Evaluators as Partners in Technology Program Design
Multipaper Session 313 to be held in Conference Room 14 on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Mary Beth Hughes, Science and Technology Policy Institute, m.hughes@gmail.com
The Logical Framework Approach to Drafting Proposals for Government Technology Programs: The Case of Taiwan
Presenter(s):
Ling-Chu Lee, Science & Technology Policy Research and Information Center, lingchulee@yahoo.com.tw
Shan Shan Li, Science & Technology Policy Research and Information Center, ssli@stpi.narl.org.tw
Abstract: In the knowledge economy, new technology development requires a demand-driven, innovation-oriented model. The Taiwanese government uses integrated technology development programs as a policy tool in response to major social and economic issues. However, the government is increasingly demanding budget accountability, so the assessment of program benefits is a vital issue. The 2010 Survey of Government Strategy for Technology Development found that about 53.7% of respondents believed there were problems with the objectives, indicators, or planning of technology development programs. In order to improve the practice of program design in Taiwan, this study introduces a logical framework approach which incorporates planning tools commonly used in Taiwan. The planning process has three advantages: (1) it is simple and uses existing tools, making adoption easy; (2) it includes both strategy design and performance indicator design; and (3) it replaces a technology-oriented approach with a problem-oriented approach.
The Role of Evaluation within Nanoscale Science and Engineering Education Research: Differential Use, Application and Benefits of Evaluation
Presenter(s):
Jennifer Nielsen, Manhattan Strategy Group, jnielsen@manhattanstrategy.com
Andrew Herrmann, Manhattan Strategy Group, aherrmann@manhattanstrategy.com
Amara Okoroafor, Manhattan Strategy Group, aokoroafor@manhattanstrategy.com
Taimur Amjad, Manhattan Strategy Group, tamjad@manhattanstrategy.com
Shezad Habib, Manhattan Strategy Group, shabib@manhattanstrategy.com
Abstract: Directorates and Divisions within the National Science Foundation (NSF) co-funded several research projects to enhance Nanoscale Science and Engineering (NSE) Education through the development of educational resources for grades 7-12, and the general public. As the field of NSE is rapidly advancing and difficult to understand, these projects required collaboration between researchers and educators. The role of evaluation within one Division's research projects varied tremendously. Within the projects, four of ten included an evaluator as project senior personnel working 160+ hours on the project. These four projects, plus two others, listed an evaluator as an organizational partner. Analysis of proposals, reviews, and reports revealed both differential use/inclusion of the evaluation perspective, and differential application of the evaluation perspective within the projects. This paper will explore the differential benefits, both tangible and perceived, that were associated with the varying role of the evaluation perspective in these efforts.

Session Title: Even Teenagers Value Evaluation!: How Service Recipients Use Outcome and Evaluation Data at the Latin American Youth Center
Expert Lecture Session 314 to be held in Avila A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Evaluation Use TIG and the Human Services Evaluation TIG
Chair(s):
Isaac Castillo, Latin American Youth Center, isaac@layc-dc.org
Presenter(s):
Isaac Castillo, Latin American Youth Center, isaac@layc-dc.org
Abstract: The Latin American Youth Center (LAYC) is a multi-service nonprofit in Washington, DC that evaluates each of its 71 programs internally. Results from these evaluations have been used to improve programming for years. More recently, LAYC has worked with high-risk and high-need service recipients to empower them in the use of the evaluation data in their daily lives. These youth and young adults share personal level outcomes (and in some instances program level outcomes) with judges, probation officers, and teachers to demonstrate how they have turned their lives around. This session will share LAYC's internal evaluation work, how the outcomes are shared with service recipients, and how youth and young adults use this information to prove to others (and themselves) that they are improving their lives.

Session Title: Using Data Visualization Software to Engage Stakeholders in Evaluation
Demonstration Session 315 to be held in Avila B on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Data Visualization and Reporting TIG
Presenter(s):
Bryn Sadownik, Vancity Community Foundation, bryn_sadownik@vancity.com
Abstract: The Demonstrating Value Initiative has been exploring whether data visualization software can promote the use of evaluation results and performance monitoring data in organizations and programs with limited technical capacity. We were particularly interested in whether (1) this format could improve decision-making and more clearly communicate the learning from evaluation and performance monitoring activities, and (2) low-cost software programs are available that can be used with minimal technical skill. We found that this software is a powerful tool and is within the skill set of most evaluators and small organizations. In this session, I will demonstrate how to apply one software package, SAP Crystal Presentation Design, to develop a simple interactive report, including how to model relationships and set up 'what-if' scenarios. SAP Crystal Presentation Design builds directly on Excel and produces Flash files that can be incorporated into websites, PowerPoint presentations, and PDF files.

Roundtable: Introducing a Practical Guidebook for Values-Engaged, Educative Evaluation
Roundtable Presentation 316 to be held in Balboa A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Theories of Evaluation TIG
Presenter(s):
Jeehae Ahn, University of Illinois at Urbana-Champaign, jahn1@illinois.edu
Ayesha Boyce, University of Illinois at Urbana-Champaign, boyce3@illinois.edu
Jennifer C Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Abstract: This presentation showcases a newly developed practical guidebook for a values-engaged, educative approach to evaluating science, technology, engineering, and mathematics (STEM) and other education programs. This evaluation approach is anchored in our dual commitments to (a) active engagement with values of diversity and equity, and (b) being educative in our work, that is, conducting evaluations that advance meaningful learning about the program being evaluated. Our guidebook foregrounds the distinctive role of the values-engaged, educative evaluator, and features step-by-step guidelines for the practice of values-engaged, educative evaluation, along with multiple illustrations of these guidelines from our varied field tests of this approach. In presenting this guidebook, we invite other education program evaluators to our roundtable to share their thoughts in an informal, interactive format that can further inform and enlighten our thinking about values engagement in evaluation practice.

Session Title: Evaluation Model for a Capacity Building Website to Assess Use and Information Transfer
Demonstration Session 317 to be held in Balboa C on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Andria Zaverl, AIDS Project Los Angeles, azaverl@apla.org
Oscar Marquez, AIDS Project Los Angeles, omarquez@apla.org
Abstract: Shared Action (SA), a non-profit capacity building assistance (CBA) program, developed a system to evaluate its online (website) capacity building services, including information transfer. The SA evaluation is a mixed-methods model that assesses "use" of acquired knowledge and skills as well as the effectiveness of knowledge transfer. Data collected online include items downloaded (e.g., recorded webinars and educational materials), type and frequency of users, and interactions with blogs and forums. Preliminary findings showed which pages clients use frequently, where they access the information, and the content of feedback provided by clients. The service was set up for organizations in the USA; however, the evaluation showed that SA impacted other countries as well: 5% of users were international, with Germany most frequent. Lesson learned: online evaluations are possible, but they require creativity, mixed methods, and readiness for unexpected results. The demonstration will present the model, the findings, the analysis of data, and the application.

Session Title: Toward Integration: Organizational Learning From Within and Among the Network of Funded Partners
Demonstration Session 318 to be held in Capistrano A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Aimee White, Trident United Way, awhite@tuw.org
Eileen Rossler, Trident United Way, erossler@tuw.org
Abstract: This demonstration will offer attendees an opportunity to learn about an organizational learning and capacity building process, and a subsequent set of tools and resources, for the integration of program areas within service organizations and funders. The system includes a series of trainings, technical assistance processes, and continuous quality improvement processes that assist agencies operating under multiple programming areas, or funders funding under multiple program areas, in the daunting task of integrating. The strength of this approach is that it was designed using systems theory and therefore has a flexibility and adaptability that can translate across a variety of program areas and funding directives. Attendees will learn best practices associated with implementing the system of organizational learning and capacity building. The perspective is uniquely United Way in nature, but translates widely.

Session Title: Are We There Yet? How Internal & External Evaluation Work Together to Assess Progress
Panel Session 319 to be held in Capistrano B on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Internal Evaluation TIG
Chair(s):
Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
Discussant(s):
Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
Abstract: The internal and external evaluator of a geographically dispersed national organization will discuss the ways they ensure equal valuing of internal and external findings. Panelists will discuss tools they use to measure the organization's progress in achieving its goal of gender equity in STEM education and workforce. They will also explain the strategic planning and negotiation necessary to deliver a unified message to the organization's leadership and staff to help stakeholders gauge progress toward achievement of the organization's social-justice mission. This intermingling and merging of perspectives is critical for the organization's program improvement and reporting needs. In addition, sharing data and perspectives enables the organization to better embrace the multiplicity of factors that contribute to the success of a social movement. Panelists will encourage discussion with audience members about internal/external evaluation cooperation as well as the complexity of evaluating a social-justice movement.
The External Side of Evaluating Progress
Elizabeth Litzler, University of Washington, elitzler@u.washington.edu
Dr. Litzler has been working as the external evaluator for the National Center for Women & Information Technology (NCWIT) since a year after the organization's founding. The organization's strategies are ever-changing, and the evaluation must be nimble in response. NCWIT has multiple funders, target audiences, and objectives; thus the external evaluation uses short-, medium-, and long-term measures, taking advantage of publicly available education data, participant observation, Web scans, key informant interviews, and an annual member survey. The potential for higher impact of the external evaluation findings grew when the internal evaluator was hired four years after the external evaluation had begun. This panelist will briefly discuss the external evaluation's overall strategy and methodologies, and how NCWIT's internal evaluation efforts are incorporated into her approach.
The Internal Side of Evaluating Progress
Wendy DuBow, National Center for Women & Information Technology, wendy.dubow@colorado.edu
The National Center for Women & Information Technology (NCWIT) internal evaluator will briefly describe methods used to measure program and outreach impact. Her tactics involve multiple methods - random sample, self-selected sample, snowball and convenience sample surveys, telephone and in-person individual and group interviews, focus groups, and Website analysis. She also will describe the types of negotiations she and the external evaluator have engaged in, and together, they will elaborate on the larger considerations of how to evaluate a complex social justice movement. Dr. DuBow was hired as the first internal evaluator when the organization was five years old and the external evaluator had already been involved for four years. Realizing the potential impact of two voices pushing evaluation data, they have worked together to create higher-impact evaluations.

Session Title: National Evaluation of the Stay on Track Program: Examining the Unique Outcomes of Adolescents From Military Families
Skill-Building Workshop 320 to be held in Carmel on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Melissa Rivera, National Center for Prevention and Research Solutions, mrivera@ncprs.org
Abstract: This session will outline the comprehensive approach to drug prevention education utilized in the Stay on Track curriculum, the measurement strategy employed to assess its effectiveness, and the outcomes of the latest large-scale national implementation. The results are further analyzed to differentiate students reporting having family members actively serving in the military as well as those reporting a current family member's deployment. The comprehensive approach encompasses the development and utilization of evaluation quality tools, fidelity measures, and engagement with certified implementers throughout the evaluation cycle. Additionally, a summary of the attitude and intention outcomes associated with illicit substance use, and the complex findings associated with students reporting family member deployment will be provided. Overall results for 36,664 sixth, seventh, and eighth graders who participated in the program will also be presented.

Session Title: Understanding Community Capacity and Readiness for Evaluation
Multipaper Session 322 to be held in El Capitan A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Jeanette Treiber, University of California, Davis, jtreiber@ucdavis.edu
Understanding of the Community Change Process: Using the Community Capitals Framework to Evaluate the Impact of 22 Community Wellness Grants
Presenter(s):
Mary Emery, South Dakota State University, memery@iastate.edu
Jane Schadle, Iowa Dept of Public Health, jschadle@idph.state.ia.us
Kala Shipley, Iowa Dept of Public Health, shipley@idph.state.ia.us
Cathy Lillehoj, Iowa Department of Public Health, clilleho@idph.state.ia.us
Abstract: Recent research focuses on the important role of the community ecosystem in determining an individual's overall health. Yet we know little about what approaches are likely to lead to changes in people's participation within their communities with respect to healthy behaviors. The Iowa Department of Health Promotion and Chronic Disease Prevention funded 24 community wellness projects to develop new learning about community wellness change processes. Grantees were required to use the community capitals framework (CCF) in reporting results. Using the CCF helped develop a better understanding of the importance of community capacity in strengthening wellness-related work. The results indicate that grants that leveraged multiple assets to foster healthier communities, and thus increased assets in the intangible capitals (human, social, cultural, and political), were more likely to have a long-term impact on the community than efforts that focused on a single capital (for example, human or built).
Using Evaluation Readiness in Health Promotion Programs to Determine True Value
Presenter(s):
Janet Clinton, University of Melbourne, janetclinton@xtra.co.nz
Tinoci O'Connor, University of Auckland, t.oconnor@auckland.ac.nz
Amanda Dunlop, University of Auckland, aj.dunlop@auckland.ac.nz
Faith Mahony, University of Auckland, f.mahony@auckland.ac.nz
Abstract: Communities are often at different levels of readiness for implementing and evaluating programs, and evaluation is often seen as a very low priority. Ultimately these levels of readiness, and the degree to which evaluation is prioritized or valued, will determine the overall success of a program. Understanding and accounting for these different levels of readiness has always been a challenge for evaluators. This paper describes a model, derived from working in Pacific Island communities, for ensuring that these levels of readiness are taken into account when deriving an evaluative judgement. A number of health promotion programs with the goal of enhancing health status are used as case examples. Evaluation evidence from the sites is analysed to illustrate the impact of the different levels of readiness on the success of a program. A major argument is that working with communities at their differing levels of readiness will increase the value of evaluation.

Session Title: Is Outcome Measurement Possible in the Peacebuilding Field?
Panel Session 323 to be held in El Capitan B on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Gretchen Shanks, Mercy Corps, gshanks@mercycorps.org
Abstract: Monitoring and evaluation of peacebuilding programs presents unique challenges, which often inspire resistance from practitioners. While some challenges are more perceived than real, there are numerous barriers, including dynamic conflict contexts, a lack of impact indicators, the challenges associated with measuring prevention, and ethical constraints. Despite these challenges, measuring the results of peacebuilding programs remains more important than ever. This panel session will offer two examples in which peacebuilding practitioners are pushing themselves and their partners to move beyond the comfortable, to develop and test indicators, tools, and theories of change. These teams conducted research on key causal mechanisms in peacebuilding programming and developed indicators, survey tools, and practical data collection forms to track some of these outcomes. While the research and the tools are imperfect, things are trending in the right direction. This panel will discuss what worked, what didn't, and where we might go from here.
Measuring Increases in Disputes Resolved in Iraq
Sharon Morris, Mercy Corps, smorris@dc.mercycorps.org
Since the removal of the Baathist regime in 2003, Iraq has struggled to redefine itself as a peaceful, democratic nation. After years of authoritarian rule and conflict, too many leaders still see violence as an effective strategy for meeting their goals. However, a new generation of leaders is emerging that is committed to consensus and compromise. Since early 2009, Mercy Corps has implemented a program that provides negotiation training, mentoring, and support to a nationwide network of Iraqi leaders. From the beginning we aimed to measure outcomes rather than just outputs: numbers of disputes resolved, changes in negotiation expertise, and reductions in levels of violence. Initially, negotiation trainers were extremely reluctant to attempt these measurements. Sharon Morris, Director of Mercy Corps' Conflict Management Group and lead designer of this program, will discuss why we pushed beyond outputs, how we structured indicators and measurement tools, and challenges encountered throughout the effort.
Exploring the Causal Mechanisms That Link Poverty and Conflict
Jenny Vaughn, Mercy Corps, jvaughan@bos.mercycorps.org
As a relatively young discipline, the field of peacebuilding is struggling with the best ways to measure impact and identify success. A number of challenges have consistently stymied monitoring and evaluation of peacebuilding programs. Chief among these are the lack of indicators for measuring impact across programs and contexts and the lack of tools for collecting data systematically and rigorously. In order to evaluate the impact of our peacebuilding and poverty alleviation programs in complex conflict-affected environments, and ultimately to improve their effectiveness, we need better tools: meaningful indicators and practical data collection methods. During this presentation, Jenny Vaughn, Program Officer for Mercy Corps' Conflict Management Group and research lead, will explore Mercy Corps' effort to improve our knowledge of the causal relationships between poverty and conflict in order to develop a set of meaningful indicators and tools: the Evaluation and Assessment of Poverty and Conflict (EAPC) research project.

Roundtable: A Model of Total Survey Error: Examining the Inferential Value in Survey and Questionnaire Data and its Implications on Evaluation Findings
Roundtable Presentation 324 to be held in Exec. Board Room on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Michelle Bakerson, Indiana University South Bend, mmbakerson@yahoo.com
Abstract: Charged with making judgments about the quality, merit, or worth of a program, many evaluators use surveys or questionnaires to gather data; the survey is one of the most important and uniquely informative data collection tools available. The significance placed on this data collection tool is examined, as limitations and biases inherently exist. Virtually all surveys contain some form of error, which harms the inferential value of the data. A model of total survey error is examined, including sampling error and non-sampling error, with a major focus on non-response error. Practical attention to survey methodology, in particular techniques for reducing and correcting for non-response, will strengthen evaluators' knowledge and ability to make informed decisions regarding their data and, in turn, their ability to inform stakeholders.
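A common textbook formalization of total survey error (offered for orientation; the presenter's model may differ) expresses the total error of a survey estimate $\hat{\theta}$ as its mean squared error:

$$\mathrm{MSE}(\hat{\theta}) = \mathrm{Bias}(\hat{\theta})^2 + \mathrm{Var}(\hat{\theta})$$

where the bias term accumulates non-sampling errors (coverage, non-response, measurement) and the variance term reflects sampling error. Non-response bias in particular grows with both the non-response rate and the difference between respondents and non-respondents on the measure of interest, which is why techniques for reducing and correcting for non-response receive the major focus here.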

Session Title: Exploring How Values, Identity and Gender Influence Evaluator Approach and Role
Think Tank Session 325 to be held in Huntington A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Feminist Issues in Evaluation TIG
Presenter(s):
Jara Dean-Coffey, jdcPartnerships, jara@jdcpartnerships.com
Discussant(s):
Jill Casey, jdcPartnerships, jill@jdcpartnerships.com
Summer Jackson, Independent Consultant, snjackson22@gmail.com
Nicole Farkouh, jdcPartnerships, nicole@jdcpartnerships.com
Abstract: This session explores the role of identity and values in our work as female evaluators. Participants will identify their own values and identities, naming the primary ones and looking at the ways in which these values and identities interact with their work as evaluators. Session leaders represent a diverse array of backgrounds, and particularly the different life choices common among professional women. They will share how their values and identities simultaneously enhance and challenge the process and product of their work, attending to their internal processes as well as how they are perceived by clients. Participants will then break into groups to examine their values and the dimensions of their own identities. Participants will have an opportunity for personal reflection and processing, and will leave with a personal values statement to use as they wish.

Session Title: Stakeholder Engagement in Government Evaluations
Multipaper Session 326 to be held in Huntington B on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Government Evaluation TIG
Chair(s):
James Newman, Idaho State University, newmjame@isu.edu
Stakeholder Interaction and Measures for Reduced Sick Leave in Norway
Presenter(s):
Morten Nordberg, Office of the Auditor General of Norway, morten.nordberg@riksrevisjonen.no
Knut Arne Vågeskar, Office of the Auditor General of Norway, knut-arne.vageskar@riksrevisjonen.no
Abstract: The Office of the Auditor General has evaluated how the public authorities follow up persons on sick leave. Norway has one of the highest sick leave rates in the OECD. One of the Government's main goals is to prevent sick leave and increase the number of employees who return to work. To reduce permanent exclusion from the labour force, early implementation of medical or vocational rehabilitation, or graduated benefits, might be required. This demands that the relevant stakeholders, such as the Norwegian Parliament, the Government, the Labour and Welfare Service, employees, employers, and medical doctors, work together. It is also important that the measures and actions implemented have the desired effects. This requires that the investigation take into account the plurality of stakeholders' values and issues. This paper will elaborate on the methodological challenges faced when evaluating this national program and will also share the main findings of the audit.
Different Stakeholders Interest in Public Performance Audit: The Norwegian Case
Presenter(s):
Dag Henning Larsen, Office of the Auditor General of Norway, dag-henning.larsen@riksrevisjonen.no
Helge Strand Østtveiten, Office of the Auditor General of Norway, helge-strand.osttveiten@riksrevisjonen.no
Abstract: Each year, the Office of the Auditor General of Norway sends about 15 performance audit reports to the Norwegian parliament. There are several stakeholders in these performance audits: the Norwegian parliament, the central government of Norway, the media, and the general public. These stakeholders have different positions and expectations regarding performance audit reports. The Office of the Auditor General of Norway collects audit evidence from the ministries and their underlying departments and reports the findings to the parliament. Independence from the ministries is an important condition for doing this work. At the same time, the Auditor General of Norway has a vision to make a better public sector. This is not necessarily a dilemma, but it must be considered when collecting and reporting audit findings. How far should the auditor go in interacting with the ministries, and how can a fair and balanced audit be communicated to the public?

Session Title: Measurement Challenges
Multipaper Session 327 to be held in Huntington C on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Allan Porowski, ICF International, aporowski@icfi.com
A Process for Determining the Acceptability of Measurement Tools: How to Decide?
Presenter(s):
Lydia Marek, Virginia Tech, lydiam8992@yahoo.com
Donna-Jean Brock, Evaluation Consulting Services Inc, djbrock.ecs@cox.net
Abstract: Many organizations fund multiple programs which are diverse in audience and focus. This poses certain challenges, including inconsistent quality of evaluation across sites and an inability to determine the overall impact of the initiative. To address these challenges, the CYFAR Initiative and the 4-H National Council, through Kraft Foods, funded research to develop a methodology for determining the adoption of measures for program evaluation. A brief rating form was developed based upon criteria identified in the literature as critical in selecting quality evaluation instruments. This rating form was reviewed by four experts in the field. It was then implemented with 44 reviewers across the country to review over 400 potential common measures for acceptability and applicability to these two projects. This methodology for choosing acceptable measures streamlined a massive review process, increased buy-in by key stakeholders for the use of the measures, and ensured the relevancy of measures to these initiatives.
Creating and Sustaining Systemic Change: A Rubric for Measuring Organizational Capacity in Higher Education Alliances
Presenter(s):
Sarah Hug, University of Colorado, Boulder, hug@colorado.edu
Heather Thiry, University of Colorado, Boulder, heather.thiry@colorado.edu
Abstract: Evaluating organizational capacity and sustainability in a collaborative alliance is a challenge. Through a National Science Foundation-funded project, we developed an evaluation rubric that measures four constructs vital to understanding capacity in multi-site initiatives in higher education settings: healthy educational pipeline development, academic resource development and training, faculty/staff engagement, and alliance-wide collaborative engagement. The Computing Alliance of Hispanic-Serving Institutions (CAHSI) is a ten-institution consortium funded by the National Science Foundation to recruit, retain, and advance Hispanics in computing fields. We propose the CAHSI evaluation rubric as a viable model for evaluating organizational capacity and sustainability. In this talk, we highlight the components that led to the development of this evaluative measurement tool. In addition, we show how it can be used in collaborative higher education alliances focused on educational reform and innovation.

Session Title: Building Evaluation Capacity in College Access and Success Programs
Panel Session 328 to be held in La Jolla on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the College Access Programs TIG
Chair(s):
Wendy Erisman, Strix Research LLC, werisman@strixresearch.com
Abstract: This session examines lessons learned from two strategies designed to build evaluation capacity in college access and success programs. The first strategy is an online toolkit that helps program staff design and implement effective evaluations with or without the help of external evaluators. The second strategy involves offering funding to college access and success grantees for evaluation capacity-building through technical assistance provided by external consultants. The session will examine the strengths and weaknesses of each strategy, provide recommendations for using the strategies successfully, and discuss future plans for evaluation capacity-building work in the college access and success field.
The Evaluation Toolkit: An Online Resource for Developing an Evaluation Mindset and Evaluation Culture Among College Access and Success Programs
Chandra Taylor Smith, The Pell Institute, chandra.taylorsmith@pellinstitute.org
In 2010, The Pell Institute for the Study of Opportunity in Education, in partnership with The Pathways to College Network, launched an online Evaluation Toolkit. The toolkit is designed to help college access and success program staff, including staff for federal TRIO programs, undertake small-scale, high-quality evaluations of their programs. The toolkit includes advice to help programs design and implement evaluation plans, collect and analyze data, and use evaluation findings for program improvement. This presentation will demonstrate the toolkit's features, discuss the challenges of designing an evaluation toolkit intended for use by program staff with limited evaluation experience, illustrate the approach to engaging college access professionals in utilizing the toolkit as an exercise in developing an evaluation mindset and evaluation program culture, and describe future plans for the project, including efforts to incorporate empowerment and culturally responsive evaluation approaches into the toolkit.
Including Evaluation Capacity-Building in College Access and Success Program Grants: Lessons Learned from Technical Assistance Projects
Wendy Erisman, Strix Research LLC, werisman@strixresearch.com
College access and success funders recognize the importance of evaluation for understanding and improving program outcomes and are more often providing financial support for evaluation in their grant-making. For many programs, however, hiring an external evaluator to conduct an evaluation is not sufficient. Program staff also need to build their internal capacity to collect, analyze, communicate, and use evaluation findings. This presentation describes lessons learned from a series of funder-sponsored evaluation technical assistance projects. Topics addressed include project scope and cost, timing, content, and approach as well as thoughts about how programs can pitch such work to prospective funders.

Session Title: The Value of Knowledge Management in Evaluation: A Research Perspective
Multipaper Session 329 to be held in Laguna A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Research on Evaluation
Chair(s):
Karen Widmer, Claremont Graduate University, karen.widmer@cgu.edu
Evaluation as a Mechanism for Knowledge Translation
Presenter(s):
Catherine Donnelly, Queen's University, Kingston, donnelyc@queensu.ca
Abstract: Despite the vast amount of health research, there is a large gap between the knowledge produced by researchers and the knowledge used by practitioners. While knowledge translation activities are used to bridge this gap, there is little evidence to suggest these activities result in changes to practice. The evaluation literature has focused on the evaluation of knowledge translation programs, with no specific attention to how the process of evaluation itself may serve as a mechanism for knowledge translation. The objective of the paper is to explore how evaluation can facilitate knowledge translation. Methods: A review of the evaluation literature identified concepts and approaches that support the role of evaluation in knowledge translation. Participatory evaluation, evaluation influence, and organizational development offered theoretical and empirical evidence for understanding the dimensions of evaluation that may facilitate the translation of knowledge. The paper offers a foundation to support further research.
Valuing of Knowledge in Health and Development: A Knowledge Management/Knowledge Exchange (KM/KE) Conceptual Framework
Presenter(s):
Saori Ohkubo, Johns Hopkins University, sohkubo@jhuccp.org
Tara Sullivan, Johns Hopkins University, tsulliva@jhuccp.org
Abstract: Knowledge management/knowledge exchange (KM/KE) practitioners in health and development work in a complex environment where knowledge has the potential to improve efficiency, effectiveness, and health outcomes. However, theoretical frameworks, indicators, and methods to guide this important and multifaceted area are still lacking. The current effort, led by a group of evaluators, addresses the challenge of measuring the impacts of KM/KE programs. Building upon earlier work (a monitoring and evaluation (M&E) guide for health information programs published in 2007), the new edition of the guide offers an updated conceptual framework covering a wide range of KM/KE practices, including knowledge sharing and learning at the individual, organizational, and programmatic levels. The framework further explores and integrates relevant social and behavior change communication theories. Audience values about knowledge influence the process of knowledge dissemination, sharing, and uptake; these values also play an important role in KM/KE programs and can affect returns on investments.

Session Title: Applying Universal Design for Learning Principles to Evaluation
Multipaper Session 330 to be held in Laguna B on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Disabilities and Other Vulnerable Populations TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Don Glass, Boston College, donglass@gmail.com
Applying Universal Design for Learning (UDL) to Educational Evaluations
Presenter(s):
Don Glass, Boston College, donglass@gmail.com
Tracey Hall, Center for Applied Special Technology, thall@cast.org
Abstract: In this presentation, we will use two online digital evaluation tools, one completed and one in development, to highlight the critical features of applying Universal Design for Learning (CAST, 2011) to the design of educational evaluations. We will also argue that an accessible, flexible, and interactive digital environment aligns well with Fetterman's (2005) Empowerment Evaluation principles because of its inclusive, participatory nature and its focus on self-determined learning and improvement.
The Value of Universal Design for Evaluation (UDE): Lessons Learned Piloting the UDE Checklist
Presenter(s):
June Gothberg, Western Michigan University, june.gothberg@wmich.edu
Jennifer Sullivan Sulewski, University of Massachusetts Boston, jennifer.sulewski@umb.edu
Abstract: The value of incorporating Universal Design principles in architecture is a well-documented movement. The seven principles of Universal Design target the design of products, services, and systems to be usable by as many people as possible without the need for adaptations. From Canada's endorsement of age-friendly communities to Europe's push for e-inclusion, Japan's barrier-free Human Centered Design policies, and Australia's and Brazil's promotion of accessible and inclusive tourism, Universal Design is headline news across the world (Institute for Human Centered Design, 2010). In 2010, the Universal Design for Evaluation Checklist was drafted and revised at the annual conference (Sullivan-Sulewski & Gothberg, 2010). Since that time, the checklist has been piloted on a variety of evaluation projects across the country, using an assortment of evaluation tools, in several different contexts. This session will focus on lessons learned and discussion of the revisions needed prior to the final version.

Roundtable: Evaluation of an Online Versus Classroom Based Undergraduate Social Psychology Course
Roundtable Presentation 331 to be held in Lido A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Jessica Carlson, Western New England College, jcarlson@wnec.edu
Craig Outhouse, University of Massachusetts, Amherst, craigouthouse@gmail.com
Abstract: Findings regarding the effectiveness of online versus classroom-based courses in higher education have been inconsistent, with some revealing higher exam performance in online courses (e.g., Maki, Maki, Patterson, & Whittaker, 2000; Poirier & Feldman, 2004), others discovering an advantage for students in 'live' classes (e.g., Edmonds, 2006; Wang & Newlin, 2000), and some indicating no performance difference between the two (e.g., Waschull, 2001). However, there is evidence overall of similar instructor evaluations by students in both delivery formats (Knight, Ridley, & Davies, 1998; Ridley, 1995). This roundtable will begin with the presentation of results from a study investigating online versus classroom-based instruction in a social psychology course at a private northeastern college. Results of this study will be discussed in the context of evaluation for both practitioners and researchers, with thoughts and feedback solicited from the audience.

Session Title: Is Not Killing Patients Cost-effective? The Economics of Quality Improvement in Health Care
Expert Lecture Session 332 to be held in Lido C on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Mary Gutmann, EnCompass LLC, mgutmann@encompassworld.com
Presenter(s):
Edward Broughton, University Research Co, ebroughton@urc-chs.com
Abstract: Many thousands of patients die unnecessarily each year from medical errors and other lapses in the quality of their health care. Quality improvement interventions can address health service dysfunction and improve patient outcomes, but many decisionmakers believe such interventions are too expensive and inefficient. Making the business case for improving health care quality with sound economic analyses is becoming more important as budgets tighten and administrators strive to improve efficiency. Using examples from US and international health care settings, this lecture discusses methods of cost-effectiveness analysis for such programs: why such analyses are done, how they are performed, and how to interpret their results. This information is crucial to anyone interested in understanding and performing economic evaluations of programs to make health care work better for everyone.

Session Title: Assessing Additionality of Public Support of Industrial Research and Development
Multipaper Session 333 to be held in Malibu on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Cheryl Oros, Oros Consulting LLC, cheryl.oros@gmail.com
Evaluating Effectiveness of Public Support to Industrial R&D in Turkey Through Input and Output Additionality
Presenter(s):
Sinan Tandogan, The Scientific and Technological Research Council of Turkey, sinantandogan@gmail.com
M Teoman Pamukcu, Middle East Technical University, Ankara, pamukcu@metu.edu.tr
Abstract: In this paper, two quantitative studies examining the causal relations between direct public support and the R&D activities of beneficiary firms are presented. The first study, which uses an econometric model, indicates that R&D subsidies are an important determinant of private R&D intensity. The second study, adopting semi-parametric propensity score matching and difference-in-differences methods on a panel dataset, examines the effectiveness of public grants for industrial R&D projects in Turkey. The results indicate program-induced input additionality in the R&D personnel, R&D intensity, and R&D expenditure per employee of the beneficiary firms. However, no statistically significant output additionality is observed over the same period, possibly because a longer time series is needed. Sufficient evidence was obtained to conclude that the government's industrial R&D project support program encouraged most private firms in Turkey to increase their R&D spending and R&D personnel in the period 2003-2006.
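For readers unfamiliar with the approach named above, the following Python sketch illustrates propensity score matching combined with a difference-in-differences comparison. It is a minimal illustration only, not the authors' model: the column names (supported, rd_intensity_pre, rd_intensity_post) and covariates are hypothetical and not taken from their dataset.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_did(df: pd.DataFrame, covariates: list) -> float:
    # Step 1: model the probability of receiving the R&D subsidy (propensity score).
    logit = LogisticRegression(max_iter=1000).fit(df[covariates], df["supported"])
    df = df.assign(pscore=logit.predict_proba(df[covariates])[:, 1])
    treated = df[df["supported"] == 1]
    control = df[df["supported"] == 0]
    # Step 2: match each supported firm to its nearest unsupported firm on the score.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = control.iloc[idx.ravel()]
    # Step 3: difference-in-differences on a hypothetical outcome (R&D intensity):
    # the pre/post change for supported firms minus the change for matched controls.
    delta_treated = (treated["rd_intensity_post"] - treated["rd_intensity_pre"]).mean()
    delta_matched = (matched["rd_intensity_post"] - matched["rd_intensity_pre"]).mean()
    return delta_treated - delta_matched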
Evaluating the Additionality and Certification Effects of Research and Innovation Policy on Small Business Start-Ups: An Inflow-Sampling and Counterfactual Approach
Presenter(s):
Reynold Galope, Georgia State University, reynold.galope@gatech.edu
Abstract: This paper proposes to examine the effectiveness of a U.S. federal technology program in inducing innovative effort among small business start-ups, using a new sample and methods motivated by the counterfactual approach to causation. Its focus is the additionality and certification effects of the Small Business Innovation Research (SBIR) program, a federal program that co-finances the development of pre-competitive products, processes, or technologies with small firms. Our preliminary empirical results show that recipient small business start-ups spent more than four times as much on research and development (R&D) as their observationally similar counterparts did, suggesting that the SBIR grants did not crowd out firm-financed R&D. We will also examine the effect of the SBIR grant on the small business start-ups' ability (1) to introduce product and process innovations, and (2) to attract the external capital necessary for the firm's survival, growth, and long-run innovative capacity.

Session Title: Connections and Hawaiian Culture: Evaluators as Boundary Spanners
Panel Session 334 to be held in Manhattan on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Martha Ann Carey, Kells Consulting, marthaanncarey@gmail.com
Abstract: Evaluators often serve as a bridge between the client and the resource organization, spanning two cultures with different needs, perceptions, expectations, and occasionally somewhat different goals. Boundary spanning is a type of social network connection that can be effective in program development and in documenting a program's success. Factors important to being effective include appreciation of different cultural values and perspectives arising from gender, power distance, individual traits of dominance, and group cohesiveness. In two applied settings in Hawai'i, the researchers/evaluators learned to share the boundary spanner role with community members. The first presentation describes work with a nonprofit organization on planning and evaluation using a logic model, needs assessment, environmental scan, and plans for assessing outputs and outcomes. The second presentation involves boundary spanning across academic and community cultures in a study of Hawaiian elders' understanding of wellness.
Evaluation in Na Wai Iwi Ola: Tools and Lessons Learned
Kumu Keala Ching, Na Wai Iwi Ola Foundation, kumukeala@nawaiiwiola.com
Martha Ann Carey, Kells Consulting, marthaanncarey@gmail.com
Rolinda Bean, Na Wai Iwi Ola Foundation, rolindabean@outrigger.com
Na Wai Iwi Ola is a growing nonprofit organization on the island of Hawai'i. The Director invited an evaluator to help the organization understand how to coordinate its activities and obtain funding. The organization had a 10-year history of working to preserve Native Hawaiian culture through educational programs. It had many supporters and occasional funding, but it did not have an overall plan. The logic model process introduced at a Directors' meeting was enthusiastically received and led to better program goals and clarification of the relationships between resources, activities, and outcomes. The first presenter is an internationally recognized expert in Hawaiian culture and was the cofounder of the organization. The second presenter has experience working with a wide variety of organizations in development and evaluation. The role of a boundary spanner will be highlighted in the activities of the Hawaiian expert and in the evaluator's role.
Trust for Bridging Cultures
Anne Odell, Azusa Pacific University, apodell@apu.edu
In planning to do research with Native Hawaiian elders, I needed to gain entry, establish trust, and listen well. The quality of the research data was improved by my being a boundary spanner between my own culture, as a mainland White person with some experience of Hawaiian culture, and that of the Hawaiian elders. I also spanned the boundary between what the researcher community required and what the Hawaiian elders felt comfortable with. In addition to my experience as a nurse practitioner, having lived in the local community of Kona, Hawai'i, and having family still there, two community experts greatly assisted me in selecting a culturally relevant movie to start the focus group sessions, planning the logistics of meeting in a community center, and choosing incentives for participants. My research finding of "keeping balance" as the key to wellness resonated well with the elders.

Session Title: Strategies for Tackling Complexity in Environmental Programs and Evaluations
Multipaper Session 335 to be held in Monterey on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Johanna Morariu, Innovation Network, jmorariu@innonet.org
Strategies for Dealing with Scale Issues When Evaluating Water Quality Projects
Presenter(s):
Karlyn Eckman, University of Minnesota, eckma001@umn.edu
Valerie Were, University of Minnesota, were0005@umn.edu
Abstract: The movement of pollutants in soil and water often crosses jurisdictional boundaries and watersheds. This complicates the evaluation of water quality projects at the local, state and national levels. Tracking the origin of pollutants can be challenging, particularly when human behaviors upstream contribute to pollution carried downstream. Similarly, stakeholders at multiple levels may have very different needs for information, at different scales. For example, state and federal agencies require quantitative evaluation frameworks based upon data that is comparable across watersheds, states and regions. However, local governments and stakeholders often prefer localized data and simple evaluation methods. While the biophysical results of such projects are monitored and evaluated, the social dimensions of water pollution remain largely unevaluated. We offer some strategies for clarifying scale issues based upon our research in Minnesota with diverse audiences, government agencies and waterbodies. We also discuss methods for evaluating the social dimensions of water quality projects.

Session Title: Leading the Horse to Water, Part III: Embedding Evaluation in a Knowledge Management Project
Think Tank Session 336 to be held in Oceanside on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Business and Industry TIG
Presenter(s):
Thomas Ward, United States Army, tewardii@aol.com
Discussant(s):
Rhoda Risner, United States Army, rhoda.risner@us.army.mil
Abstract: This is the third in a topical series of Think Tanks and Roundtables. The first examined how to guide an organization in its initial consideration of a knowledge management project. The second focused on how to ensure such a project starts with an effective front-end needs assessment that both determines specific needs and identifies outcomes to observe and measure for subsequent project evaluation. This Think Tank will present lessons learned from a long-term implementation of a knowledge management initiative. The focus is on two issues: using the results of the needs assessment to prioritize effort, and the practical aspects of building in evaluation during early implementation phases. It will highlight both quantitative and qualitative measurement and ways of dealing with the messiness of human interaction, especially when institutional inertia is a major factor. The session will be half presenter time and half participant brainstorming in small groups.

Session Title: Using Advocacy Evaluation and Learning Processes in Countries With Limited Political Space to Understand Actors, Identify Openings, and Achieve Policy Advances
Panel Session 342 to be held in San Clemente on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Laura Roper, Brandeis University, l.roper@rcn.com
Discussant(s):
Sono Aibe, Pathfinder International, saibe@pathfinder.org
Abstract: This session takes several advocacy case examples - a family planning campaign in Tanzania, a nascent disability movement in Vietnam, and a gender-based violence prevention campaign in El Salvador - where civil society actors have to navigate complex political dynamics, not only with an array of formal and informal power structures, but also amongst both domestic and international non-governmental actors. In each case we discuss the role that evaluation and learning has played and, from there, address more broadly the ways in which advocacy planning, monitoring, and evaluation tools need to be refined to be more useful in political settings that range from limited democracy to more authoritarian systems.
Using Developmental Evaluation for Better Coalition Advocacy on Disability and Reproductive Health Issues
David Devlin-Foltz, Aspen Institute, david.devlin-foltz@aspen.org
The Aspen Institute's Advocacy Planning and Evaluation Project (APEP) works with foundation and NGO clients to identify effective advocacy strategies in a wide range of contexts. In East Africa, our partnership with the Hewlett Foundation's "Money Well Spent" grantees includes support for coalitional work in contexts where relationships between international and local NGOs pose particular challenges for assessing the contribution of various parties. In Vietnam, a nascent disability rights movement is trying to create greater space for movement while addressing the challenging legacy of Agent Orange use by the US military during the war. This purely local movement is also working through its relationship with international NGOs and funders. This presentation will draw on these examples, together with other APEP projects on reproductive health - all in their relatively early stages - to discuss how a developmental evaluation approach can contribute to coalitional advocacy.
Policy Success in the Campaign to Prevent Gender-based Violence in El Salvador - The Contribution of Formal and Informal Evaluation and Learning Practice
Laura Roper, Brandeis University, l.roper@rcn.com
In 2005, Oxfam-America and several counterpart organizations launched the Campaign to Prevent Gender-based Violence in an adverse political context characterized by conservative dominance, political polarization, and seemingly uncontrollable criminal violence, including violence against women. The campaign has employed a strategic mix of popular awareness-raising, targeted outreach to policy-makers in both major parties, engagement with government authorities in key municipalities, and results-focused capacity building targeted at key stakeholder groups (e.g., judicial authorities, education officials, parliamentarians). This presentation discusses how the campaign developed its own form of developmental evaluation and employed an array of formal and informal learning practices that led to notable political successes, including the incorporation of a GBV prevention curriculum by the Ministry of Education and the passage of the Law for a Life Free from Violence for Women.

Session Title: Growing Your Business in the Current Economic Climate
Panel Session 343 to be held in San Simeon A on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Patricia Mueller, Evergreen Evaluation & Consulting LLC, pat@evergreenevaluation.net
Discussant(s):
Patti Bourexis, The Study Group, studygroup@aol.com
Abstract: This presentation will describe strategies used to grow an independent consulting business into a viable corporation that continues to expand, despite the current economic climate. Critical business principles that have proven successful in the growth and development of an education-focused evaluation small business will be highlighted. Participants will gain an understanding of how the business started 15 years ago and grew into a corporate business structure with the associated labor, legal, and financial needs and requirements. Topics to be addressed include: ensuring ongoing guidance from a senior mentor; the importance of professional development for all employees; contract, project, and time management; diversifying the business portfolio; implications of technological innovations; and cash flow and sleepless nights! The presentation's value to the audience will be a combination of the real and the practical: the how-tos, with a focus on constraints and pitfalls, and suggestions and solutions for business growth in today's economic climate.
From a "One Woman Show" to a Full Service Firm
Patricia Mueller, Evergreen Evaluation & Consulting LLC, pat@evergreenevaluation.net
The President of this small business will outline how she started the business 15 years ago as an independent consultant. Topics addressed for this initial life cycle of the business's development will include: developing a business plan, strategic focus, marketing strategies that worked and failed, and the impact and importance of having a business mentor. The presenter will then address more current life cycle issues and concerns as the business expanded, including the delivery of high-quality services to clients. Topics will include: employees and managing the work of others; business philosophy; and capacity issues related to time, personnel, and technology. Dr. Mueller's company, Evergreen Evaluation & Consulting LLC, primarily evaluates education grants at the state and local levels and for institutions of higher education.
Handling Growth Spurt and Business Logistics
David Merves, Evergreen Evaluation & Consulting LLC, david@evergreenevaluation.net
The Manager and Evaluation Associate for the business will discuss the technological, communication, legal, and accounting/financial structures necessary to maintain a viable corporate entity; what type of business structure is best for the independent consultant; managing cash flow; and the use of technology for communications, reporting, and meetings. The presenter will then discuss the process of delegating and prioritizing work in a growing business. Mr. Merves has an MBA in Operations Research and spent 30 years in the corporate hospitality industry. He completed the Claremont Graduate University Certificate in Advanced Graduate Study in Evaluation. His experience in expanding and growing businesses in new markets under varied economic conditions will provide insight into pitfalls and positives for the emerging independent consultant.

Session Title: Use Technology to Monitor Programs as They are Implemented: A Moodle and SAS Approach
Demonstration Session 344 to be held in San Simeon B on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Bo Yan, Blue Valley School District, byan@bluevalleyk12.org
Mike Slagle, Blue Valley School District, mslagle@bluevalleyk12.org
Abstract: Many districts adopt and implement intervention programs to help at-risk students. However, educators usually do not know whether a program works, or what issues exist in the program, until an evaluation at the end of implementation. To address this problem, we developed a data system that collects program implementation data using the database module of Moodle and automatically and intelligently delivers alerts and reports to stakeholders using the SAS Business Intelligence framework. With the system, stakeholders receive alerts whenever issues occur in program implementation, along with regular reports on the program's effects. In this session, we first introduce the concept of program monitoring and then demonstrate how we use this approach to monitor the gifted program and a math intervention program in our district.
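The alerting idea is independent of the Moodle/SAS stack the presenters describe. As a rough illustration only, the rule-based core of such a monitor might look like the following Python sketch; the field names, threshold, and addresses are all invented for the example, and the actual system uses Moodle's database module and SAS Business Intelligence rather than this code.

import smtplib
from email.message import EmailMessage

DOSAGE_THRESHOLD = 2  # hypothetical minimum intervention sessions per week

def check_implementation(records):
    """Return alert messages for students whose logged dosage falls short."""
    alerts = []
    for rec in records:
        if rec["sessions_this_week"] < DOSAGE_THRESHOLD:
            alerts.append(
                "Student %s: only %d session(s) logged this week (expected >= %d)"
                % (rec["student_id"], rec["sessions_this_week"], DOSAGE_THRESHOLD)
            )
    return alerts

def send_alert_digest(alerts, to_addr):
    """Email a digest of alerts to a stakeholder; the SMTP host is a placeholder."""
    if not alerts:
        return
    msg = EmailMessage()
    msg["Subject"] = "Program implementation alerts"
    msg["From"] = "monitoring@example.org"  # placeholder sender
    msg["To"] = to_addr
    msg.set_content("\n".join(alerts))
    with smtplib.SMTP("localhost") as smtp:  # placeholder mail server
        smtp.send_message(msg)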

Roundtable: Action Learning: An Intervention to Enhance Cultural Competencies of Evaluators
Roundtable Presentation 345 to be held in Santa Barbara on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Kyehyeon Cho, University of Illinois at Urbana-Champaign, kcho20@illinois.edu
Abstract: Action learning is a learning intervention based on questioning and critical reflection. It aims to provide opportunities to learn ways of dealing with issues arising from actual work situations (O'Neil & Marsick, 2007). Marsick and Maltbia (2009) further argued that action learning can contribute to participants' transformative learning. In spite of a lack of agreement on its definition (Cho & Egan, 2010), action learning has been accepted as a method for examining one's own perceptions and generating discussion about discrepancies among different ways of thinking. In this sense, action learning appears to be a suitable intervention for developing the cultural agency of evaluators. This literature review explores the theoretical possibility of utilizing an action learning intervention as a tool to develop cross-cultural competencies for evaluators. Furthermore, this study aims to draw out the implications of cross-cultural learning interventions for evaluators.

Session Title: Statistical Methods for Interrupted Time-Series Analysis: Using the Auto-Regressive Integrated Moving Average (ARIMA) Technique in Program and Policy Evaluation
Demonstration Session 346 to be held in Santa Monica on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG and the Crime and Justice TIG
Presenter(s):
Derek Cohen, University of Cincinnati, cohendk@mail.uc.edu
Abstract: This demonstration is designed to present attendees with the "bare bones" essentials of using ARIMA, a sophisticated interrupted time-series statistical model. Attendees will first be briefly educated in the mechanics and assumptions of the ARIMA technique. Then, the presenter will demonstrate how to compile or acquire an appropriate dataset and how to prepare it for statistical analysis. Once the data are prepared, attendees will be shown the basics of model construction, as well as how to alter a model to minimize autocorrelation. The output will then be interpreted and explained so that attendees will be able to draw conclusions from their own data and models. The demonstration will be conducted using the SPSS/PASW statistical software package. The data used in the demonstration are from a work-in-progress evaluation of firearm policy in the state of Ohio.
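Although the demonstration itself uses SPSS/PASW, the same interrupted time-series model can be sketched in Python with statsmodels. The series below is simulated (the Ohio firearm data are not reproduced here), and the step regressor captures the post-policy level change.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
n, break_point = 120, 60  # 120 monthly observations; policy takes effect at month 60

# Simulate an autocorrelated baseline with a hypothetical post-policy level drop.
y = 50 + np.cumsum(0.3 * rng.normal(0, 1, n))
y[break_point:] -= 4

# Step function: 0 before the intervention, 1 after.
step = (np.arange(n) >= break_point).astype(float)

# ARIMA(1,0,0) with the step as an exogenous regressor; its coefficient
# estimates the level change attributable to the intervention.
result = ARIMA(y, exog=step, order=(1, 0, 0)).fit()
print(result.summary())
print("Estimated intervention effect:", result.params[1])  # coefficient on the step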

Session Title: Outcomes for Youth in Residential Treatment and Foster Youth Education Programs
Multipaper Session 347 to be held in Sunset on Thursday, Nov 3, 11:40 AM to 12:25 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
James Sass, Rio Hondo College, jimsass@earthlink.net
Discussant(s):
Michel Lahti, University of Southern Maine, mlahti@usm.maine.edu
Examining Implementation From Many Perspectives: How Different are the Views of Implementation Quality From Observers, Supervisors, Staff, and Clients?
Presenter(s):
Kristin Duppong Hurley, University of Nebraska, Lincoln, kdupponghurley2@unl.edu
Justin Sullivan, University of Nebraska, Lincoln, justin.sullivan@boystown.org
Chrystal Jansz, University of Nebraska, Lincoln, cerj7@hotmail.com
Abstract: One key, but often overlooked, issue in research studies is assessing the quality with which the treatment was implemented. Many issues surround the assessment of implementation, such as whether to collect data on dosage, adherence, competence, or engagement. Moreover, one needs to decide how such data will be collected (e.g., by whom and how often). Minimal research has been conducted to examine the relationships among a variety of implementation assessment perspectives. This presentation will focus on how different and similar the implementation ratings were for observers, supervisors, direct-care staff, and the youth clients of a 24/7 residential treatment program. We will share the results of this NIMH-funded project, examining whether there was agreement among the methods in assessing low, adequate, and high levels of implementation, how implementation levels varied over time and with experience, and whether the quality of implementation was related to youth mental health outcomes.
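As a point of orientation only (the paper does not specify its agreement statistic), pairwise agreement between rating perspectives could be computed with a weighted Cohen's kappa, as in this Python sketch with invented ordinal ratings (0 = low, 1 = adequate, 2 = high implementation quality).

from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Invented example: the same eight sessions rated by four perspectives.
ratings = {
    "observer":   [2, 1, 1, 0, 2, 1, 2, 0],
    "supervisor": [2, 1, 0, 0, 2, 1, 1, 0],
    "staff":      [2, 2, 1, 1, 2, 1, 2, 1],
    "client":     [1, 1, 1, 0, 2, 0, 2, 0],
}

# Linear weights respect the ordinal low/adequate/high scale, so near-misses
# count against agreement less than large discrepancies do.
for a, b in combinations(ratings, 2):
    kappa = cohen_kappa_score(ratings[a], ratings[b], weights="linear")
    print("%s vs %s: kappa = %.2f" % (a, b, kappa))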
Evaluation of the Gloria Molina Foster Youth Education Program: A New Model of Collaboration Between School and Child Welfare Systems
Presenter(s):
Maura Harrington, Center for Nonprofit Management, mharrington@cnmsocal.org
Erin Maher, Casey Family Programs, emaher@casey.org
Lyscha Marcynyszyn, Casey Family Programs, lmarcynyszyn@casey.org
Carrie Miller, Los Angeles County Office of the CEO, cmiller@ceo.lacounty.gov
Angel Rodriguez, Gloria Molina Foster Youth Education Program, rodang@dcfs.lacounty.gov
Jessica Vallejo, Center for Nonprofit Management, jvallejo@cnmsocal.org
Jennifer Thibault, Center for Nonprofit Management, jthibault@cnmsocal.org
Abstract: The Gloria Molina Foster Youth Education Program strives to increase graduation rates by identifying an educational advocate for each student, improving academic performance, and encouraging student retention. The program was implemented in two school districts as a pilot, with a third district added in the second year; the staffing models varied (either the social worker took on additional duties or a second social worker served as an educational advocate). One of the unique and promising, yet challenging, aspects of this project is the interface between the school and child welfare systems. The evaluation examined the impact of the program and documented challenges encountered in implementation. Discussion of the results will address the transparency of evaluators in their work with a vulnerable population, as well as the potential influence of evaluator values when working in a political context with a range of stakeholders and in the design and analysis of the study.
