Evaluation 2009


Roundtable: Challenges of Using an Integrated Methodology to Assess Programs at a Country Level
Roundtable Presentation 577 to be held in the Boardroom on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Government Evaluation TIG
Presenter(s):
Amr Elleithy, Department of Foreign Affairs and International Trade Canada, amr.elleithy@international.gc.ca
Stephen Kester, Department of Foreign Affairs and International Trade Canada, stephen.kester@international.gc.ca
Abstract: This presentation discusses methodological and conceptual challenges in empirically applying an integrated methodology to evaluation studies. The methodology was designed at the Department of Foreign Affairs and International Trade (DFAIT), Canada, to evaluate the performance of all of its programs at a country level. The main challenges in applying the methodology lie in the need for in-depth coverage of a wide range of issues and in the jurisdictional limitations that complicate attempts to assess the contribution of other departments to the results achieved. Another challenge relates to the attribution of results, especially when dealing with foreign relations. The presentation highlights the impact of the country context on the selection of evaluation questions and the methods used, and draws lessons for future applications of the methodology.

Session Title: Turning the Tables: Assessing Grantmakers' Advocacy Capacity
Panel Session 578 to be held in Panzacola Section F1 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Astrid Hendricks, The California Endowment, ahendricks@calendow.org
Discussant(s):
Barbara Masters, The California Endowment, bmasters@calendow.org
Abstract: Since its beginning, the advocacy evaluation field has recognized the critical role that advocacy capacity -- the knowledge, skills, and systems an organization needs to implement and sustain effective advocacy work -- plays in policy change efforts. Because it plays such an important role, evaluators have found it valuable to assess advocates' strengths and skills at the beginning of a campaign, use that information to suggest ways of strengthening specific capacity areas, and then track improvements over time. In this session, the presenters -- an advocate and an evaluator -- and the discussant -- a grantmaker -- will discuss another side of the capacity equation, showing that it is also important for grantmakers to think about and assess their own advocacy capacity, as their grantmaking practices can profoundly affect the success of their grantees' work. Speakers will introduce specific ideas about the kinds of advocacy knowledge and skills grantmakers need to conduct successful advocacy grantmaking.
The Advocate's Perspective: The Capacities That Grantmakers Need
Susan Hoechstetter, Alliance for Justice, shoech@afj.org
To support advocacy capacity assessment, the Alliance for Justice, with assistance from Mosaica and in partnership with The George Gund Foundation, developed an "Advocacy Capacity Assessment Tool" that helps advocates and their funders assess their ability to sustain effective advocacy efforts, develop a plan for building advocacy capacity, and determine appropriate advocacy plans based on the organization's advocacy resources. The tool is available both online and in print, and has been used in numerous advocacy evaluations. Drawing on this work, the Alliance for Justice is now also thinking about the kinds of capacities grantmakers need to be effective. This presentation will describe how funding practices affect advocates, and what grantmakers need to know and do to ensure their grantmaking strategies and their advocacy grantees can be as effective as possible.
The Evaluator's Perspective: Helping Grantmakers Choose the Right Strategies
Julia Coffman, Harvard Family Research Project, jcoffman@evaluationexchange.org
A critical aspect of grantmakers' advocacy capacity is ensuring that funders understand how to choose appropriate grantmaking strategies. Such decisions require a clear assessment of grantmakers' public policy goals and related outcomes, the audiences they are trying to move, how long they are willing to invest in achieving their goals, the amount of "risk" they are willing to assume, and the extent to which advocacy strategies fit with a foundation's history and mission. This presentation will offer a specific framework, developed for and tested by the James Irvine Foundation, that helps foundations think about their advocacy and public policy grantmaking options. The framework literally maps possible advocacy and policy change activities according to where they fall on two strategic dimensions, the audience targeted and the outcomes desired.

Session Title: Introduction to Evaluation and Public Policy
Expert Lecture Session 579 to be held in Panzacola Section F2 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the AEA Conference Committee
Presenter(s):
George F Grob, Center for Public Program Evaluation, georgefgrob@cs.com
Abstract: Evaluation and public policy are intimately connected. Such connections occur at national, state, and local government levels, and even on the international scene. The interaction moves in two directions: sometimes evaluation affects policies for public programs, and sometimes public policies affect how evaluation is practiced. Either way, the connection is important to evaluators. This session will explain how the public policy process works. It will guide evaluators through the maze of policy processes, such as legislation, regulations, administrative procedures, budgets, re-organizations, and goal setting. It will provide practical advice on how evaluators can become public policy players, how they can influence policies that affect their very own profession, and how to get their evaluations noticed and used in the public arena. There will be opportunities for audience discussion of sensitive topics, such as how evaluators can protect their independence in a world of compromise and deal making.

Session Title: Using Unique Evaluation Methods in Mental Health Care
Multipaper Session 580 to be held in Panzacola Section F3 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Ross Conner, University of California Irvine, rfconner@uci.edu
Using Draw the Path to Evaluate a Mental Health-Primary Care Integrated Care Services Model
Presenter(s):
Louise Miller, University of Missouri, lmiller@missouri.edu
Karen R Battjes, Missouri Department of Mental Health, karen.battjes@dmh.mo.gov
Abstract: In 2008, the Missouri Department of Mental Health partnered with the Coalition of Community Mental Health Centers and the Primary Care Association to pilot an integrated care model, sharing clients between seven partnerships of Federally Qualified Health Centers (FQHCs) and Community Mental Health Centers (CMHCs) to provide comprehensive 'one stop' mental health and primary care services. Using Draw the Path, a mid-project evaluation was done to describe implementation issues faced by the partnerships. These included system issues, care processes, and staff preparation and training. The value of this approach was to understand stakeholder perspectives at individual sites as well as across sites in order to identify immediate barriers, recognize key successes, and sustain project momentum. Interim evaluation data were pivotal to this project in demonstrating the need for continued allocation of funds by the Missouri legislature. The presentation will highlight the use of Draw the Path to provide formative evaluation data that contributed to meeting overall project goals.
Impact of Education on Dementia Care Practices: Are 'Action Plans' Effective in Modifying Behavior?
Presenter(s):
Dolores Gallagher-Thompson, Stanford University, dolorest@stanford.edu
Eunice Rodriguez, Stanford University, er23@stanford.edu
Renee Marquett, Stanford University, rmmarquett@yahoo.com
Melen Mcbride, Stanford University, mcbride@stanford.edu
Ladson Hinton, University of California Davis, ladson.hinton@ucdmc.ucdavis.edu
Abstract: To continue improving the quality of care provided to the elderly, it is important to evaluate ongoing training of medical and allied health professionals working in clinical settings. The Stanford Geriatric Education Center, in collaboration with the Alzheimer's Association of Northern California, has provided an annual 7-hour conference entitled 'Updates on Dementia' to inform clinicians and non-clinicians working in some aspect of dementia care of new research developments and clinical practices. The evaluation component of this conference includes an innovative 'action plan' developed by the participants at the end of the training. This study investigates the usefulness of 'action plans' as an evaluation tool to assess actual change (implementation of the planned behavior) at a 3-month follow-up. The primary reason for developing this assessment instrument was to gain insight into whether or not clinicians were utilizing the training received at the conference in concrete and measurable ways.

Session Title: Systems Thinking Approaches for Program Evaluation
Multipaper Session 581 to be held in Panzacola Section F4 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Margaret Hargreaves, Mathematica Policy Research Inc, mhargreaves@mathematica-mpr.com
A Case Study of the iPlant Collaborative Evaluation Plan Development: An Integrative Approach Using Outcomes-based and Systems Methods And Concepts
Presenter(s):
Barbara Heath, East Main Educational Consulting LLC, bheath@emeconline.com
Jennifer Young, East Main Educational Consulting LLC, jyoung@emeconline.com
Xiaodong Zhang, Westat, xiaodongzhang@westat.com
Margaret Hargreaves, Mathematica Policy Research Inc, mhargreaves@mathematica-mpr.com
Abstract: Funded by the National Science Foundation, the iPlant Collaborative (iPlant) is a distributed, cyberinfrastructure-centered, international community of plant and computing researchers. The goal of iPlant is to bring together the community to (1) identify new conceptual advances through computational thinking and (2) address an evolving array of the most compelling Grand Challenges in the plant sciences and associated, cutting-edge research challenges in the computing sciences. Our presentation provides a case study of how we changed the evaluation approach from a traditional outcome-based model to an integrative approach that includes both outcome-based and systems-based methods and concepts. Several methodologies are being deployed for data collection and analysis, including outcome-based methods, social network analysis, case studies, and consumer-oriented surveys.
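Social network analysis, one of the methods the abstract lists, quantifies the structure of a collaborative community such as iPlant's. As a minimal sketch (the names and co-authorship ties below are invented for illustration, not drawn from the iPlant evaluation), degree centrality measures how connected each actor is:

```python
from collections import defaultdict

# Hypothetical co-authorship ties in a small research community
edges = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
         ("carol", "dave"), ("dave", "erin")]

# Build an undirected adjacency structure
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

n = len(adjacency)
# Degree centrality: the fraction of other actors each person is tied to
centrality = {person: len(nbrs) / (n - 1) for person, nbrs in adjacency.items()}
most_central = max(centrality, key=centrality.get)
print(most_central, round(centrality[most_central], 2))  # → carol 0.75
```

In an evaluation context, rising centrality scores over time would be one indicator that a distributed community is actually knitting together.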
Perspectives, Boundaries, and Entanglement: Using Systems Thinking in the Evaluation of Programs Addressing the College Readiness Gap
Presenter(s):
Mary McEathron, University of Minnesota, mceat001@umn.edu
Abstract: Context, the theme of this year's conference, is a well-acknowledged component of every evaluation. Typically, an evaluation includes a thorough description of a program and its sphere of influence. But what do we do with the contextual factors that lie outside the program, especially when the 'out there' has more influence on the issue than the program itself? This paper presents a case study of the use of soft systems methodology (SSM) to address this quandary. Based on an evaluation of a community college program aimed at improving retention for at-risk students, the study explores the program's attempt to bridge the 'readiness gap.' The presentation will focus on the use of SSM not only to clarify analysis of the situation but also to facilitate a dynamic discussion with stakeholders, which created a deeper understanding of the situation and the development of clearer recommendations for change and action.

Session Title: Developmental Evaluation as a Special Utilization-Focused Purpose and Niche
Expert Lecture Session 582 to be held in Panzacola Section G1 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Presenter(s):
Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: Developing a model is different from testing its overall effectiveness (summative evaluation) or improving it (formative evaluation). Innovations generated in complex environments under emergent and uncertain conditions present special challenges for evaluation. To adapt rapidly to changing conditions, innovators need to be able to learn quickly. That means evaluators have to be able to gather relevant data rapidly and provide rapid feedback if findings are to be useful. Simultaneously, the evaluator is inculcating evaluative thinking into the innovative process, which is its own challenge, because creative innovators are often more intuition-driven than data-driven. Using understandings from systems thinking and complexity science, this session will describe and give examples of an approach to evaluation -- Developmental Evaluation (DE) -- that makes rapid feedback for learning and adaptation the centerpiece of the evaluative process. Learning in DE includes both substantive learning (findings use) and learning to think evaluatively (process use).

Session Title: Rejecting the Traditional Outputs, Intermediate and Final Outcomes Logic Modeling Approach and Building More Stakeholder-friendly Visual Outcomes Models
Demonstration Session 583 to be held in Panzacola Section G2 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
Paul Duignan, Massey University Auckland, paul@parkerduignan.com
Abstract: Helping stakeholder groups construct logic models continues to cause frustration, much of it related to teaching stakeholders the difference between the terms traditionally used in such modeling: outputs, intermediate outcomes, and final outcomes. Using outcomes theory, the function of these terms in logic modeling can be identified; it is to: 1) encourage the identification of outcomes 'further up' the causal chain; and 2) identify what is demonstrably attributable to a particular project (the 'outputs' layer). The traditional approach distorts the causality portrayed in the logic model by structuring its lower levels based on demonstrable attribution. Both of these functions can be achieved more effectively by constructing free-form visual outcomes models (e.g. in DoView outcomes and evaluation software) that are easier for stakeholders to draw and understand, and then subsequently mapping onto them what is demonstrably attributable to a particular program. How to do this for evaluation, monitoring, contracting and other purposes will be demonstrated. http://www.tinyurl.com/ot233.

Session Title: Getting More Value From International Evaluation to Improve Development Results
Expert Lecture Session 584 to be held in Panzacola Section H1 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Presidential Strand and the International and Cross-cultural Evaluation TIG
Chair(s):
Patrick Grasso, World Bank, pgrasso@worldbank.org
Presenter(s):
Vinod Thomas, World Bank, vthomas@worldbank.org
Abstract: The growing complexity of world development increases the challenges for evaluators. The interconnectedness of programs, multiplicity of donors, greater scrutiny from stakeholders, and greater volatility of growth all add to the challenges. And while globalization spreads prosperity, we are seeing that it also spreads damage when times go bad. This presentation notes the importance of evaluation basics and suggests the shifts needed to realize better development results through evaluation. At the same time, it discusses the need to recognize how tough achieving these shifts can be in many development contexts.

Session Title: Testing the Robustness of Propensity Scores That Violate Balance Criteria
Expert Lecture Session 585 to be held in Panzacola Section H2 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
MH Clark, Southern Illinois University at Carbondale, mhclark@siu.edu
Presenter(s):
MH Clark, Southern Illinois University at Carbondale, mhclark@siu.edu
Abstract: The effectiveness of propensity scores is determined by how well they balance treatment and control groups, making them as similar as possible prior to an intervention. In 2001, Rubin established three criteria that should be met to conclude that propensity scores are balanced across covariates. These balancing criteria serve as statistical assumptions for the use of propensity scores when making adjustments to non-randomized experiments. Unfortunately, it is not unusual for one or more of these assumptions to be violated. Therefore, it is useful to know which statistical adjustment method (matching, subclassification, weighting or ANCOVA) is least affected by these violations. The present study used computer simulations to create various data sets that violate those statistical assumptions for having balanced propensity scores. A two-factor design examined the reduction of bias in treatment effects from quasi-experiments depending on the type of statistical adjustment and the type of statistical assumption that was violated.
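The workflow this abstract describes, estimating propensity scores, adjusting, and checking covariate balance, can be illustrated with a small simulation. The sketch below uses invented data and a plain gradient-descent logistic regression (not any particular package or the study's actual simulation design); it performs 1:1 nearest-neighbor matching with replacement and checks balance with standardized mean differences, one common balance diagnostic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated quasi-experiment: two covariates drive selection into treatment
n = 2000
x = rng.normal(size=(n, 2))
p_true = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
treated = rng.random(n) < p_true

def fit_logistic(X, y, iters=500, lr=0.1):
    """Estimate propensity scores with plain gradient-ascent logistic regression."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return 1 / (1 + np.exp(-Xb @ w))

ps = fit_logistic(x, treated.astype(float))

def std_mean_diff(cov, t):
    """Standardized mean difference between treated and control on one covariate."""
    d = cov[t].mean() - cov[~t].mean()
    s = np.sqrt((cov[t].var() + cov[~t].var()) / 2)
    return d / s

# Balance before any adjustment
before = [std_mean_diff(x[:, j], treated) for j in range(2)]

# 1:1 nearest-neighbor matching on the propensity score, with replacement
treated_idx = np.where(treated)[0]
controls = np.where(~treated)[0]
matches = [controls[np.argmin(np.abs(ps[controls] - ps[i]))] for i in treated_idx]
matched = np.concatenate([treated_idx, np.array(matches)])
t_m = treated[matched]
after = [std_mean_diff(x[matched][:, j], t_m) for j in range(2)]

print("SMD before matching:", np.round(before, 3))
print("SMD after matching: ", np.round(after, 3))
```

Rubin-style balance criteria would then be checked on the matched sample; the same diagnostic applies to subclassification, weighting, or ANCOVA-adjusted comparisons.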

Session Title: The Chicken and the Egg: Integrating Program Evaluation Plans Into Curriculum Design
Demonstration Session 586 to be held in Panzacola Section H3 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Rhoda Risner, United States Army, rhoda.risner@conus.army.mil
Linda Kay Williams, United States Army, lin.k.williams@us.army.mil
Abstract: As one of two Army master's degree-granting colleges, the Command and General Staff College recognizes that curriculum change must be grounded in evidence. Such evidence is imperative to ensure that Army officers are attaining the education they need to be critical and creative thinkers, prepared to win our nation's wars - and secure the nation's peace. Additionally, the College must assure Congress that joint and professional military education is occurring. The College also answers to taxpayers, who want proof that military officers are capable of making leadership decisions that will protect the nation and effectively lead Soldiers to accomplish their missions. The most convincing way to answer both Congress and the taxpayers is an accurate, effective curriculum program evaluation. The Program Evaluation Plan is an integral part of the analysis, design, development, implementation, and evaluation (ADDIE) process used to develop curriculum at the College. This session will share the Program Evaluation Plan process with participants, as well as offer opportunities for other organizations to share processes relevant to their unique contexts.

Session Title: Evaluating the Value of Partnerships in State Nutrition, Physical Activity and Obesity Programs
Panel Session 588 to be held in Sebastian Section I1 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Government Evaluation TIG and the Health Evaluation TIG
Chair(s):
Jan Jernigan, Centers for Disease Control and Prevention, jjernigan1@cdc.gov
Abstract: The logic model for the Centers for Disease Control and Prevention (CDC) State-based Nutrition, Physical Activity and Obesity (NPAO) Program posits that funding and technical assistance provided by CDC helps states build capacity, including qualified staffing and strong partnerships, to implement effective obesity programs. We describe the mixed-methods NPAO evaluation design, which assessed states' progress on required grantee activities and elucidated the contextual realities of states' program planning and implementation experiences. For example, this mixed methods approach allowed the team to identify how strong partnerships are central to states' success in developing and implementing NPAO programs. CDC staff share their experiences using evaluation findings to inform planning for future NPAO program evaluations and program improvement efforts. Panelists also discuss the promise that evaluation findings have for justifying federal investment in state capacity building.
Evaluating the Value of Partnerships in State Nutrition, Physical Activity and Obesity Programs
James Hersey, RTI International, hersey@rti.org
Bridget Kelly, RTI International, bkelly@rti.org
LaShawn Curtis, RTI International, lcurtis@rti.org
Joseph Horne, RTI International, jhorne@rti.org
Amy Roussel, RTI International, roussel@rti.org
Pam Williams-Piehota, RTI International, ppiehota@rti.org
Jim Hersey is the Project Director for the RTI International evaluation contract with the Division of Nutrition, Physical Activity and Obesity (DNPAO). Dr. Hersey worked closely with DNPAO staff to design the evaluation, and he managed evaluation activities throughout the five-year project period. Dr. Hersey will present the mixed-methods approach used to evaluate the FY 2003-2008 NPAO program. While the presentation focuses primarily on evaluation methods and process, evaluation findings will be highlighted to demonstrate the value of the mixed-methods NPAO evaluation design. Specifically, results of qualitative analysis, as well as correlation and means comparison analyses, will be presented as they represent emerging evidence for the centrality of strong partnerships to states' success in obesity program development and implementation. Dr. Hersey will share lessons learned about evaluating state obesity programs and discuss promising directions for evaluation in federally funded, state-based chronic disease programs.
Evaluating the Value of Partnerships in State Nutrition, Physical Activity and Obesity Programs
Rosanne Farris, Centers for Disease Control and Prevention, rfarris@cdc.gov
As Chief of the Program Development and Evaluation Branch, Rosanne Farris provides oversight to the CDC teams that manage the state-based NPAO program and the Division of Nutrition and Physical Activity's evaluation efforts. She will speak to the challenge of balancing federal performance monitoring and accountability demands with the need to conduct evaluations that support program improvement. Dr. Farris will also discuss how evaluation findings from the completed NPAO program are being used by the Division to guide evaluation planning and program improvement efforts for the current NPAO program. Lastly, drawing on her extensive evaluation experience with other CDC chronic disease programs, Dr. Farris will discuss the benefits and challenges of exploring context in evaluations of federally funded, state-based public health initiatives.

Session Title: Frameworks for Evaluation
Multipaper Session 589 to be held in Sebastian Section I2 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Pamela Bishop, University of Tennessee at Knoxville, pbaird@utk.edu
Development of a Visual Program Theory Framework for Multilevel Evaluation of the National Institute for Mathematical and Biological Synthesis
Presenter(s):
Pamela Bishop, University of Tennessee at Knoxville, pbaird@utk.edu
Abstract: Evaluators of multilevel programs face a daunting challenge of organizing and aligning evaluation plans that respond to the unique needs of each project while remaining focused on the overall objectives of both the program and its funding agency. Additionally, evaluators must work to communicate a complex evaluation framework to stakeholders who may be unfamiliar with the processes and terminology of program evaluation. The current multilevel evaluation framework covers a mathematical biology institute offering many levels of both research and education/outreach oriented projects. The proposed paper outlines the presenter's method in developing a visual program theory model that aligns resources and outcomes at all program levels, and serves as a utilization-focused collaborative communication tool for developing the program theory and evaluation process with program stakeholders.
Reach, Effectiveness, and Implementation: A Reporting Framework for Multisite Evaluation in Public Health
Presenter(s):
Douglas Fernald, University of Colorado Denver, doug.fernald@ucdenver.edu
Mya Martin-Glenn, University of Colorado Denver, mya.martin-glenn@uchsc.edu
Abigail Harris, University of Colorado Denver, abigail.harris@ucdenver.edu
Stephanie Phibbs, University of Colorado Denver, stephanie.phibbs@ucdenver.edu
Vicki Weister, University of Colorado Denver, vicki.weister@udenver.edu
Elizabeth Ann Deaton, University of Colorado Denver, elizabeth.deaton@ucdenver.edu
Nicole Tuitt, University of Colorado Denver, nicole.tuitt@ucdenver.edu
Arnold Levinson, University of Colorado Denver, arnold.levinson@ucdenver.edu
Abstract: A voter-approved tobacco tax in Colorado supports a variety of public health projects across several tobacco-related disease areas. To evaluate a complex portfolio of funding that covers a range of project designs, target populations, and diseases, we sought an evaluation framework that could: 1) guide our assessment of current programming, and 2) guide the development of a standard set of reporting tools for individual projects that would fit within their existing budgets. Because projects had to demonstrate an evidence base for their work, our evaluation sought an approach that emphasizes explaining programming reach and implementation over effectiveness. Drawing from existing work that emphasizes evaluating external validity in individual interventions, we developed a reporting toolkit to capture reach and implementation data in a standardized format. This paper describes the development and implementation of a reporting framework and toolkit for projects.

Session Title: Storytelling as a Technique to Communicate Evaluation Results in Cluster and Multi-site Evaluations
Panel Session 590 to be held in Sebastian Section I3 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Karen Debrot, Centers for Disease Control and Prevention, bol6@cdc.gov
Abstract: There are two types of data: quantitative and qualitative. Evaluation uses both to describe the progress programs make in achieving their goals and objectives. One particular evaluative method that includes both types of data is storytelling. Storytelling uses qualitative data to describe the context and effects of the program and quantitative data to describe the size of the program and its effects. Storytelling is especially useful in cluster and multi-site evaluations where programs vary from site to site, yet have underlying commonalities. This session will describe the process of storytelling and demonstrate how storytelling can successfully be used in both cluster and multi-site evaluations.
Success Stories: A Way to Communicate Evaluation Findings
Karen Debrot, Centers for Disease Control and Prevention, bol6@cdc.gov
Program evaluation is used to assess the value or worth of health programs by assessing programs' progress in achieving their goals and objectives over time. Program evaluation begins with a clear description of the program, its goals, and objectives. This provides the framework for deciding what evaluation questions to ask. Another step in the evaluation process generally involves systematic data collection and analysis of quantitative data for comparing obtained results with expected results. Communicating these results in a way that stakeholders can understand is an important piece of the evaluation process. If results are not understood, they are unlikely to be used to support and improve a program. Success stories are a simple way to describe a program's progress and achievements over time. This presentation will describe common elements collected for writing success stories, as well as methods for collecting these elements.
What's in a story? Giving a Voice to Multi-site Programs
Rene Lavinghouze, Centers for Disease Control and Prevention, rlavinghouze@cdc.gov
An outgrowth of naturalistic design, narrative evaluations do not control variations among sites but rather celebrate the testing of logical connections and contextual parameters through stories. In 2001, the Centers for Disease Control and Prevention announced a funding opportunity for infrastructure development. Programs were loosely linked around a common goal and broad performance measures. The intent of the evaluation was to determine whether and how infrastructure development facilitated progress on health outcomes. The dilemma was how to test theory amongst diverse programs with varied implementation strategies. The use of narrative often facilitates the shaping of the initiative by discerning possible linkages among program strategies, contexts, and results. This can help stakeholders formulate the next evolutionary step for projects as well as for the initiative as a whole. An example of an integrative story will be provided, along with a discussion of how a model was developed through the analysis of the stories.

Session Title: Research on Evaluation TIG Business Meeting
Business Meeting Session 591 to be held in Sebastian Section I4 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Research on Evaluation TIG
TIG Leader(s):
Christina Christie, Claremont Graduate University, tina.christie@cgu.edu
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu

Session Title: Evaluation of a Train-the-Trainer Program Developed in Israel and Adapted for First Responders in Swiss Culture
Panel Session 592 to be held in Sebastian Section L1 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Cheryl Meyer, Wright State University, cheryl.meyer@wright.edu
Abstract: The two presenters took a team approach to an evaluation of a train-the-trainer program developed by the Community Stress Prevention Center (CSPC) in Kiryat Shmona, Israel. The overarching goal of the program is to better prepare first responders for the experience and after-effects of trauma in themselves and the people they serve. Although the program model has been used across the world by the Israeli trainers and adapted to specific cultures, its adaptation has never been systematized. The materials developed by both presenters are intended for continued use by the Israeli and Swiss trainers to monitor improvement in future training in trauma response.
Evaluation of a First Responder Train-the-Trainer Program for Israeli and Swiss Firefighters
Kristin Galloway, Wright State University, galloway.11@wright.edu
A program evaluation of training effectiveness was conducted for the train-the-trainer program developed by the Community Stress Prevention Center (CSPC) in Kiryat Shmona, Israel. The evaluator worked with Israeli expert trainers in trauma-related issues, as well as Swiss mental health practitioners seeking to enact the training program in their Swiss community. The trainers were assisted in developing and refining measurable goals and objectives based on their training curriculum, which was adapted for both Israeli and Swiss firefighters. A subjective survey and a knowledge-based assessment for trainees were developed, in both Hebrew and German, to capture the effectiveness of the training in meeting its objectives. Post-training, a focus group with the primary trainers was held online in order to process their training experience and any lessons learned. The presenter is a doctoral student in clinical psychology at Wright State University. The program evaluation was initiated as the presenter's doctoral dissertation.
Cross-Cultural Adaptations of the Train-the-Trainer Model for First Responders to Disaster and Terrorism
Anna Fedotova, Wright State University, fedotova.2@wright.edu
The purpose of the evaluation was to explore how the CSPC adapted the train-the-trainer protocol to accommodate the specific needs of diverse populations in various cultures. Previously, adaptations to the training were made for survivors and first responders in countries including Turkey, Great Britain, the Russian Federation, Sri Lanka, Thailand, Singapore, and the United States. The common cultural adaptation factors that emerged during tailoring of the protocols were examined. The evaluation aimed to assist CSPC experts in systematizing a framework for adapting the protocol during training with Swiss and Israeli firefighters, building on the key dimensions for cultural adaptation (Bernal, Bonilla, & Bellido, 1995) and the framework for program adaptation (Barrera & Castro, 2006). The evaluation was accomplished by observing the training process and conducting focus groups with the Israeli and Swiss trainers. It was completed as the presenter's doctoral dissertation in the clinical psychology program at Wright State University.

Session Title: Distance Education and Other Educational Technologies TIG Business Meeting
Business Meeting Session 593 to be held in Sebastian Section L2 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
TIG Leader(s):
Mark Hawkes, Dakota State University, mark.hawkes@dsu.edu

Session Title: Mixed Methods and Matched Pairs: Combining Methods in College Access Program Evaluation
Multipaper Session 594 to be held in Sebastian Section L3 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the College Access Programs TIG
Chair(s):
Kurt Burkum, ACT, kurt.burkum@act.org
Evaluating the Effectiveness of a College Access Program for Latino High School Students: Lessons Learned From Using a Matched Pair Design
Presenter(s):
Catherine Batsche, University of South Florida, cbatsche@fmhi.usf.edu
Teresa Nesman, University of South Florida, nesman@fmhi.usf.edu
Mario Hernandez, University of South Florida, hernande@fmhi.usf.edu
Abstract: Reducing barriers to higher education for Latinos has received national attention for more than 25 years. As a result, a growing number of college access programs for Latinos have been implemented. The evaluation of these programs has, for the most part, focused on pre- and post-testing, attitudinal surveys, and follow-up tracking of students. This presentation will describe the evaluation of a college access program called ALAS (Awareness, Linkages, And Support) that used a matched pair design to compare ALAS students and non-ALAS students on selected measures of academic performance at the end of their sophomore and senior years in high school. Students were matched on ethnicity, language spoken in the home, gender, grade level, GPA, school of attendance, and course of study. This session will discuss the evaluation approach that was used, its limitations, and recommendations for improving the evaluation design.
A Mixed Methods Approach to Evaluating Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) Partnerships
Presenter(s):
Watson Scott Swail, Educational Policy Institute, wswail@educationalpolicy.org
Patricia Moore Shaffer, Educational Policy Institute, pshaffer@educationalpolicy.org
Abstract: While GEAR UP is an excellent federal program that helps children around the country, evaluations of partnership programs have been uneven at best. For many programs, the collection of Annual Performance Report (APR) data is the sole goal of their evaluation effort. This paper presents a model for a mixed methods evaluation of a GEAR UP partnership, including stakeholder surveys, focus groups, and school site visits in addition to the collection of quantitative data on program participation and student outcomes.

Session Title: Evidence-Based Medicine: Cutting-edge Research or Bad Science?
Expert Lecture Session 595 to be held in Sebastian Section L4 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Chris L S Coryn, Western Michigan University, chris.coryn@wmich.edu
Presenter(s):
Cristian Gugiu, Western Michigan University, crisgugiu@yahoo.com
Abstract: While clinical trials regulated by the Food and Drug Administration (FDA) are held to the highest scientific standards, the quality of clinical trials not regulated by the FDA rests on the shoulders of the medical investigators and the journal editors to whom these studies are often submitted for publication. This paper presents an evaluation of the clinical trials conducted on the Chronic Care Model (CCM). The CCM is a popular method for enhancing standard pharmaceutical treatment of chronic illnesses by improving medical decisions, redesigning delivery systems, encouraging self-management, implementing clinical information systems, reorganizing the healthcare system, and providing referrals to community resources. The presentation will focus on whether these clinical trials demonstrate that the evidence-based movement is sustainable, based on whether (1) researchers produced high-quality studies, (2) journal editors detected poor studies, and (3) readers differentiated between substantiated and unsubstantiated results.

Session Title: Demonstrating Impact of a Nationwide Leadership/Top Executive Training and Development Program: A Systematic Evaluation of Complex Behavioral Outcomes
Multipaper Session 596 to be held in Suwannee 11 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Heidi Deutsch, National Association of County and City Health Officials, hdeutsch@naccho.org
Abstract: A nationwide leadership/top executive training and development program was designed in response to a need for action-oriented programs to train Local Health Officials to effectively respond to a host of local health challenges. More specifically, the Survive and Thrive Program was designed to increase the competence and skills of new Local Health Officials to succeed within the multi-faceted environment of local health practice. Correspondingly, a comprehensive evaluation process was developed to systematically evaluate program effectiveness and provide evidence of the impact of the Survive and Thrive program in developing effective leadership, building Local Health Department capacity, and strengthening the infrastructure of local governmental public health. The papers in this session will present specific components of this evaluation and discuss issues with regard to evaluating complex behavioral outcomes associated with leadership and executive development programs, particularly within the context of local governmental public health.
Documenting Improved On-the-job Performance of Local Health Officials: The Survive and Thrive 360 Degree Performance Evaluation
Sue Ann Sarpy, Sarpy and Associates LLC, ssarpy@tulane.edu
Seth Kaplan, George Mason University, skaplan1@gmu.edu
Alicia Stachoski, George Mason University, astachow@gmu.edu
The Survive and Thrive program was developed to train new Local Health Officials - top executives at Local Health Departments - in applying effective strategies and solutions to respond to current local-level challenges. Consistent with other leadership and executive development programs, a critical component of the Survive and Thrive program is the use of a 360 degree feedback process. The Survive and Thrive 360 Degree Performance Evaluation was developed in response to the need for a tailored, behaviorally-oriented measure that is focused on training-related job performance and applicable across diverse Local Health Departments nationwide. The Survive and Thrive 360 Degree Performance Evaluation assesses critical behaviors and skills that new Local Health Officials are expected to perform at their Local Health Departments. This presentation will highlight issues related to the development and implementation of a tailored 360 Degree Performance Evaluation to provide evidence of program impact on training-related outcomes across various communities and Local Health Departments.
Considering Organizational Factors in Determining the Impact of a Leadership/Top Executive Training and Development Program
Seth Kaplan, George Mason University, skaplan1@gmu.edu
Sue Ann Sarpy, Sarpy and Associates LLC, ssarpy@tulane.edu
In understanding the effects of workforce development and leadership initiatives on building training-related competence and skills, a consideration of organizational factors that can positively or negatively influence the trainees' subsequent job performance is essential. These organizational factors are conditions in the work environment that facilitate or impede the attainment of high levels of effectiveness. In effect, these conditions in the work environment place an upper limit on the potential impact that the training may have on related outcomes. This presentation will highlight the results from an evaluation designed to systematically examine the organizational facilitators and barriers existing across Local Health Departments nationwide as well as their influence on the leadership and top executive training-related behaviors for Local Health Officials. Discussion will also focus on the development and implementation of an Organizational Factors Survey, and on the importance of considering these factors in evaluating the impact of leadership and executive training and development initiatives.

Session Title: The Context of Collecting Data: How to Work With Non-researchers to Collect Quality Data
Think Tank Session 597 to be held in Suwannee 12 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Tiffany Comer Cook, University of Wyoming, tcomer@uwyo.edu
Discussant(s):
Laran Despain, University of Wyoming, ldespain@uwyo.edu
Abstract: Evaluations often rely on clients and other non-researchers to collect data. The purpose of this think tank is to brainstorm ideas on how to collect quality data with inexperienced data collectors. Session attendees will discuss (a) how to ensure that inexperienced data collectors follow correct and ethical protocols, (b) how to properly train inexperienced data collectors, including situations where face-to-face training may be infeasible, (c) how to ensure accurate and precise data, and (d) how to instill the importance of collecting quality data to accomplish the goals of the research project. Session attendees will discuss these points in small groups. The think tank will conclude with a collective discussion about working with non-researchers to collect quality data.

Session Title: The Gender Context of Program Evaluation: A Rural Appalachian and International Perspective
Panel Session 598 to be held in Suwannee 13 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Feminist Issues in Evaluation TIG and the International and Cross-cultural Evaluation TIG
Chair(s):
Denise Seigart, Mansfield University, dseigart@mansfield.edu
Discussant(s):
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Abstract: One of the weakest areas of program evaluation practice is the lack of adequate evaluation methodologies for understanding how gender affects the context within which programs are designed and operated. The lack of attention to gender often means that women have only a limited role in program planning and implementation, and unequal access to program benefits. The purpose of this panel is to provide two evaluators with the opportunity to present different perspectives on how gender affects the context of the programs they are evaluating (from rural Tennessee and an international setting), the methods they have used for assessing gender, and some of the challenges in convincing clients, program staff, and funding agencies that gender issues matter.
Evaluating the Methamphetamine Evidence-based Treatment and Healing Program Among Rural Women in a Community Mental Health Center Context: Is Evidence-Based Practice Gender Sensitive?
Kathryn Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
While both rural and urban women experience drug abuse problems, the consequences and experiences are not the same, due to the limited ability of rural areas to offer effective substance abuse treatment that is accessible and sensitive to gender. In addition to briefly discussing evaluation findings, the presenter will discuss rural contextual barriers disproportionately faced by rural women, particularly in accessing treatment that is sensitive to issues of gender, past trauma, oppressive generational patterns of behavior in the family, and rural cultural norms that can negatively affect a woman's ability to access treatment, complete treatment, and successfully maintain a drug-free lifestyle afterward. Context is an important consideration regardless of the evaluation setting. Evaluators routinely think about contextual factors at the program, organizational, and social levels; however, evaluations conducted in rural settings present additional contextual factors that may be less familiar and may not receive the attention required to identify implementation barriers that unfairly impede access to treatment and services for rural women who are addicted and/or have a mental health disorder. This paper will describe the evaluation of a program that used an evidence-based model for treating adults 18 years and older who abuse methamphetamine and/or other emerging drugs in six rural Middle Tennessee counties. It is intended to expand evaluators' awareness of less commonly discussed contextual factors, including a rural culture that can be oppressive because of high rates of poverty and low levels of education, generational prejudice that helps keep women powerless and poor, generalized social stigma toward women who have a substance abuse and/or mental health disorder, and program environments that use evidence-based practices that are not gender sensitive.
The Gender Context of Program Evaluation: An International Perspective
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
This paper explores the challenges of integrating gender analysis into the evaluation of international development programs. Clients often argue that "hard" economic sectors are "gender neutral" and that both sexes benefit equally from well-designed housing, transport, or export promotion projects. It is also claimed in many middle-income countries that gender issues are only important in less developed regions. There is often only limited recognition, or even denial, of the importance of gender among many evaluation clients, while on the supply side many evaluators do not have the orientation or research tools to adequately address gender issues. The lack of attention to gender often means that women have only a limited role in program planning and implementation, and unequal access to program benefits. An even more challenging issue is to recognize that rapid economic and political change can also disadvantage men and boys, an area in which even less gender research is available.

Session Title: Program Evaluation and "Scientific" Research: One Community's Serendipitously Successful Approach Via Character Education - Nexus
Panel Session 599 to be held in Suwannee 14 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
David Hough, Missouri State University, davidhough@missouristate.edu
Abstract: Presented here are findings from the evaluation of a United States Department of Education (USDE)-funded character education program that examined the program's impact on school climate, student behavior, attendance, and achievement. Forty-seven schools in a large, county-wide district in the Deep South participated in this four-year quasi-experimental wait-list control design, along with university personnel and five community organizations. The multi-agency character education model implemented is described in detail, along with the development of a School Climate Index (tm) and observation protocols used by the evaluation team. Qualitative and quantitative data document a number of positive, negative, and problematic issues associated with program integrity, fidelity of data collection, and reporting. Many character education activities were found to be associated with improved school climate and student outcomes, findings evaluators were able to report while advancing sound evaluation methods combined with an "evidence-based" research design.
Quantitative Methods in Support of Program Evaluation and "Scientific" Research: An Example of Meeting All Stakeholders' Needs in a Character Education Program
Vicki Schmitt, University of Alabama, vschmitt@bamaed.ua.edu
This presentation describes quantitative methods used to examine a four-year, grant-funded character education project designed as a quasi-experimental, wait-list control study. Data collection methods are described, and findings are presented along with detailed explanations of how evaluation theory and practice can coincide with randomization and controls imposed by "evidence-based" research, per USDE mandates. While the character education program examined herein was required to employ a "rigorous, evidence-based, scientific evaluation," both PIs and evaluators were able to find a nexus between evaluation and research that met school, community, and USDE requests and requirements. This approach resulted in program improvement, refinement, documented outcomes, and scientific evidence of value and use to all stakeholders.
Qualitative Methods Used to Inform, Improve, and Address Program Integrity in One Community's Character Education Initiatives
David Hough, Missouri State University, davidhough@missouristate.edu
While opinions differ as to where the line between evaluation and so-called "scientific" research should be drawn, the integrity of program implementation remains a subject of interest and concern to both. This presentation describes a number of qualitative methods used to examine the level and degree of implementation of a character education program in a county-wide community located in the Deep South. In addition to focus groups and personal interviews, 547 separate field notes written by ten different data collection "coaches" over a four-year time frame were collected and content analyzed. Beyond presenting findings, this presentation focuses on the positive, negative, and problematic issues associated with this data collection approach. Recommendations for systemic improvements and rigorous protocols in qualitative evaluation and research are offered.

Session Title: Overcoming Contextual Limitations: Point and Click Access to Aggregate Longitudinal Student Data
Demonstration Session 601 to be held in Suwannee 16 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Jordan Horowitz, Cal-PASS, jhorowitz@calpass.org
Lauren Davis Sosenko, Cal-PASS, lsosenko@calpass.org
Abstract: Evaluators struggle to track longitudinal student-level outcomes because of contextual issues. The process is prohibitively expensive or impossible due to incompatible data systems, data ownership concerns, and policy restrictions (e.g., the Family Educational Rights and Privacy Act [FERPA]), among other barriers. California Partnership for Achieving Student Success (Cal-PASS) members use web-based Online Analytical Processing (OLAP) cubes to efficiently access aggregate summaries of K-16 student-level data to inform practice. During this demonstration, Cal-PASS staff will describe the Cal-PASS data system, explain what an OLAP cube is, and demonstrate its use to the audience. The audience will gain knowledge of the Cal-PASS database, as well as the OLAP cube tool for data access.

Session Title: Evaluation of Social and Economic Security Programs: International Examples
Multipaper Session 602 to be held in Suwannee 17 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Human Services Evaluation TIG and the International and Cross-cultural Evaluation TIG
Chair(s):
Vajeera Dorabawila, Bureau of Evaluation and Research, vajeera.dorabawila@ocfs.state.ny.us
Discussant(s):
Dale Hill, World Bank, dhill@worldbank.org
Incentives Work! Given the Right Context: Motivation for Increased Retraining for Social Security Recipients in Denmark
Presenter(s):
Rasmus Doerken, The National Evaluation Institute of Local Government, rd@krevi.dk
Abstract: KREVI has evaluated a common tool in public policy regulation: economic incentives as a means to encourage a specific behavior. The evaluation gives important insights into the consequences of economic incentives and shows how evaluations can be strengthened using mixed quantitative and qualitative methods. In Denmark, 98 municipalities are responsible for social security payments for uninsured, unemployed workers. The payments are partly refunded by the State. The project evaluates the results of a change in the fiscal refund of social security, initiated because the government wanted to increase municipal incentives to re-train social security recipients. The change increased or decreased the refund depending on a recipient's status as either in a re-training program or not. The evaluation shows that the economic incentive, in general, did not increase the percentage of social security recipients in re-training programs. However, the same quantitative data show that in a few municipalities the economic motivation dramatically increased re-training. On the basis of the data, 8 municipalities were selected for the qualitative part of the project with the purpose of explaining how, when, and why the incentive did work.
Promoting Social Rights and Tackling the Intergenerational Transmission of Poverty: Lessons Learned From the Implementation of Ciudadanía Porteña
Presenter(s):
Irene Novacovsky, Buenos Aires City Government, irenenovac@yahoo.com.ar
Abstract: The paper reviews the results of a recent impact evaluation of Ciudadanía Porteña, a conditional cash transfer (CCT) program that targets poor and extremely poor households in the City of Buenos Aires. As a condition of receiving the monthly benefit, beneficiaries must comply with certain commitments and obligations related to education and health, aimed at promoting children's rights and interrupting the intergenerational transmission of poverty. The evaluation methodology comprises a quasi-experimental design that makes it possible to contrast the results and impact of the program between a comparison group and a treatment group. It compares the situation of the target population at the beginning of the program and during its implementation (difference-in-difference estimator). Less than a year into the implementation of Ciudadanía Porteña, the evaluation found a positive impact on poverty reduction and school attendance among beneficiary households.

Roundtable: Strengths-Based Personnel Evaluation: A Context for Courageous Conversations and Beyond
Roundtable Presentation 603 to be held in Suwannee 18 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Allison Titcomb, ALTA Consulting LLC, atitcomb@cox.net
Cassandra O'Neill, Wholonomy Consulting, cassandraoneill@comcast.net
Abstract: Strengths-based (SB) strategies are being used in organizational development, strategic planning, evaluation, and many other contexts. This roundtable will consider and examine the possibilities for using strengths-based perspectives with personnel evaluation. Principles and practices from Appreciative Inquiry, Cognitive Coaching, Asset Mapping, Appreciative Coaching, Learning Maps and other SB approaches will frame the discussion. Examples of how to modify or create SB assessments and evaluations will be shared.

Roundtable: Teaching Aspiring Evaluators: Opportunities and Challenges in the Conduct of an Evaluation in a Non-Traditional Setting
Roundtable Presentation 604 to be held in Suwannee 19 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Katye Perry, Oklahoma State University, katye.perry@okstate.edu
Zhanna Shatrova, Oklahoma State University, zhanna.shatrova@okstate.edu
Sarah Wilkey, Oklahoma State University, sarah.wilkey@okstate.edu
Abstract: Presenters at this session will share their first experience conducting an evaluation of a unique program implemented in a non-traditional setting on the campus of a comprehensive university. The evaluation was the major assignment in their second evaluation course, with many components to be completed by the end of the course. The course was also being offered by the instructor for the first time. Several factors contributed to the challenges encountered by both instructor and students, beginning with the trial offering of the course and, most important to this conference, the context in which the evaluation was conducted. A summary of their experiences as well as lessons learned will be shared.

Roundtable: Using a Mixed-Methods Design to Conduct a Statewide Evaluation of Districts' Readiness for Large-Scale Online Testing
Roundtable Presentation 605 to be held in Suwannee 20 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Jacqueline Stillisano, Texas A&M University, jstillisano@tamu.edu
Trina Davis, Texas A&M University, trina-j-davis@tamu.edu
Hersh Waxman, Texas A&M University, hwaxman@tamu.edu
Brooke Kandel-Cisco, Texas A&M University, brookeandel@yahoo.com
Judy Hostrup, Texas A&M University, jhostruo@usa.net
Abstract: Like policy makers in many states, those in Texas have evinced a strong interest in converting state-required assessments to a computer-based format. This paper reports on an evaluation study commissioned to examine the extent to which all school districts across the state of Texas are prepared for large-scale, online testing. Using a mixed-methods design, the evaluation team examined the technology infrastructure; the staffing and training needs; and the current technology capacities for every district and campus in the state, as well as the ability of campuses and districts to successfully administer computer-based assessments. Two primary methods of data collection were used: (1) An online survey was administered to 1,214 districts and 8,200 campuses. (2) In-depth case studies were conducted with six school districts representing a cross-section of the state. These case studies provided comprehensive supplemental information regarding specific experiences, challenges, and opportunities related to planning and implementing online assessments.

Roundtable: Evaluating Theatre Within a Museum Setting: Challenges of Process and Content
Roundtable Presentation 606 to be held in Suwannee 21 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Evaluating the Arts and Culture TIG
Presenter(s):
Sarah Cohn, Science Museum of Minnesota, scohn@smm.org
Elizabeth Wood, Indiana University Purdue University Indianapolis, eljwood@iupui.edu
Abstract: Increasingly, informal education programs in museums are using theatre as an interpretive tool to emotionally involve visitors in exhibit content. While emotional involvement is certainly one potential outcome, defining broader impact on visitors is necessary. Working with departments to expand theatrical goals from emotion to impact is a challenging task, especially when bound by sometimes opposing theories of the goals of theatrical and informal learning. This discussion focuses on how to engage various stakeholder audiences in a meaningful evaluation of their efforts in any context. It will use the context of museum theatre to discuss ways of framing studies that capture participant experiences in complex programs while also effectively communicating with stakeholders to support use.

Session Title: A Culturally Responsive Approach to Science, Technology, Engineering, and Mathematics (STEM) Evaluation
Multipaper Session 607 to be held in Wekiwa 3 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Stafford Hood, University of Illinois at Urbana-Champaign, slhood@illinois.edu
Abstract: The increased focus on Science, Technology, Engineering and Mathematics (STEM) initiatives and the desire to increase the rates of participation of African American students in these fields necessitates that culturally responsive program evaluators identify those programs that are working and/or those that are failing to provide positive mathematics outcomes and increased rates of participation for African American students. Researchers are in the process of developing a culturally responsive logic model for use in the evaluation of programs in settings with African American students and in African American communities. The African American Culturally Responsive Evaluation System for Academic Settings (ACESAS) strives to combine both theory and practice in the area of culturally responsive evaluation in order to provide a visual representation of what culturally responsive evaluation would look like within African American communities.
Employing a Culturally Responsive Evaluative Lens to Advance the Pace of African American Mathematics Achievement
Kevin Favor, Lincoln University, favor@lincoln.edu
Concerns about mathematics disparities abound for good reason, given the centrality of this skill set for thriving in the world. Myopic assumptions focus on constitutional and familial factors hypothesized to be in short supply among African Americans, and the resultant programmatic agenda coincides with this set of assumptions. Promotion from a long-muffled segment of stakeholders renews speculation about paths to success for African American youth in mathematics. This support emerges from a shared intimate familiarity with this cultural group. These stakeholders decry the failure to acknowledge students' mathematics aspirations that are subject to stimulation. There is growing intolerance of decision-makers' actions that stultify capable African American students. This paper gives voice to what such persons have witnessed that militates against math achievement: the actions of those in seats of power within educational institutions, where opportunity for future prospects is most often created.
Building A Culturally Responsive Logic Model Based on Theory and Practice: The African American Culturally Responsive Evaluation System for Academic Settings (ACESAS)
Pamela Frazier-Anderson, Frazier-Anderson Research & Evaluation, pfa@frazier-anderson.com
Stafford Hood, University of Illinois at Urbana-Champaign, slhood@illinois.edu
The African American Culturally Responsive Evaluation System for Academic Settings (ACESAS) is a logic model being developed specifically to visually conceptualize culturally responsive evaluations of programs designed to serve African American students in grades PreK-12. The developers of this evolving logic model believe that the ACESAS is a tool that could potentially represent the steps used in conducting culturally responsive program evaluations in most settings (health care, nonprofits, the private sector) and with most cultural groups. However, the reference groups used for its theoretical development were African American students in PreK through 12th grade settings. This paper will discuss the theoretical development of the ACESAS as well as its intended use for future research and practice.

Session Title: Cost-Effectiveness of School Programs
Multipaper Session 608 to be held in Wekiwa 4 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Nadini Persaud, University of West Indies, nadini.persaud@cavehill.uwi.edu
The Cost-Effectiveness of 22 Approaches for Raising Student Achievement
Presenter(s):
Stuart Yeh, University of Minnesota, yehxx008@umn.edu
Abstract: A review of cost-effectiveness studies suggests that rapid assessment is one order of magnitude (10 times) more cost-effective with regard to student achievement than comprehensive school reform, cross-age tutoring, or computer-assisted instruction; two orders of magnitude more cost-effective than a longer school day, increases in teacher education, teacher experience or teacher salaries, summer school, more rigorous math classes, value-added teacher assessment, class size reduction, a 10 percent increase in per-pupil expenditure, or full-day kindergarten; three orders of magnitude more cost-effective than Head Start, high-standards exit exams, NBPTS teacher certification, higher teacher licensure test scores, high-quality preschool, an additional school year, or voucher programs; and four orders of magnitude more cost-effective than charter schools. The differences in cost-effectiveness suggest the results are robust even if future research indicates that the effect sizes or costs are misestimated by factors of 10 or more.
Assessing the Cost-Effectiveness of Comprehensive School Reform in Low Achieving Schools
Presenter(s):
John Ross, University of Toronto, jross@oise.utoronto.ca
Abstract: We evaluated the cost-effectiveness of Struggling Schools, a user-generated approach to Comprehensive School Reform (CSR) for 100 low-achieving schools serving a disadvantaged student population in a Canadian province. We conducted a quasi-experimental, pre-post matched-sample design with the school as the unit of analysis. The cost-effectiveness of CSR was determined in terms of the annual cost of bringing one student to the provincial achievement standard and in terms of effect per $1000. Key findings: (1) The program had positive achievement effects, but the cost was too high; researchers need to provide cost-effectiveness information to CSR developers and users. (2) Selecting from existing CSR options is more cost-effective than developing a new program. (3) There are unresolved issues in conducting cost-effectiveness studies of CSR, e.g., should unfunded school costs and CSR development costs be included? To which programs should the cost-effectiveness of a particular CSR program be compared: the status quo, the best, or the typical?

Session Title: Collaboration: Necessary Method or Personal Value?
Panel Session 609 to be held in Wekiwa 5 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Stephanie Townsend, Independent Consultant, stephanie.townsend@earthlink.net
Discussant(s):
Robin Miller, Michigan State University, mill1493@msu.edu
Abstract: Collaboration has been posited as an important component of what many consider "good" evaluation practice. However, many questions remain. Among them is whether we should consider collaboration essential to evaluation practice and, if so, what it looks like across contexts. This panel will initiate a discussion of the who, how, and why of collaborative approaches. Following an overview and synthesis of existing models of collaborative evaluation, the panel will explore what constitutes collaboration from multiple viewpoints. From the field, we explore paradoxes that arise when evaluators define what collaboration will entail and the challenges that emerge when clients view it differently. From academe, we explore the implications for training and practice when collaboration is deemed a required competence. The intended goal is to provide ongoing opportunities for dialogue that contribute to a more nuanced understanding of collaboration.
Collaboration in the Field: Exploring Evaluator-Client Fit
Stephanie Townsend, Independent Consultant, stephanie.townsend@earthlink.net
The first presenter will begin by providing a brief overview of the state of collaboration in evaluation, highlighting multiple perspectives and approaches and synthesizing them by identifying common definitional elements. Then, drawing from experience consulting with diverse organizations, she will explore how she has enacted collaboration in her work. This exploration will highlight paradoxes that have arisen from disparities between her value and definition of collaboration and the ways her clients envision collaboration.
Collaboration in Academe: Defining Professional Competencies
Cheryl Poth, University of Alberta, cpoth@ualberta.ca
The second presenter will extend the discussion into academe by examining how collaboration perspectives are enacted in research and graduate training. Having been involved in discussions related to the Canadian Evaluation Association's current efforts to establish certification requirements, she will explore the implications of collaborative aspects for training and defining professional competencies.

Session Title: Pragmatic Evaluation and Management of Network-wide Research, Innovation and Value Creation: Overcoming the 'Valley of Death' With Value-inclusive 'Co-evaluation'
Expert Lecture Session 610 to be held in Wekiwa 6 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Michael Scriven, Claremont Graduate University, mjscriv@gmail.com
Presenter(s):
Ron Visscher, Western Michigan University, visscron@aquinas.edu
Abstract: This session summarizes a benefit-cost-value-inclusive study that uses a generally applicable 'co-evaluation' method to evaluate itself, particularly its fitness for enhancing processes used for research proposal review and for driving value creation across extended, complex research and innovation networks. The interactive panel method is used here to model and simulate an extended peer review process implemented to address concerns of 'impactees' involved in research review and management. Panel members, representing anticipated 'impactees', gain first-hand experience with the method while assessing expected impact benefit-cost in actual operations. Impactee representatives use an interactive review process, preferably online to reduce travel costs. Sufficiently interesting proposals graduate through the following main steps: identification and estimation of interdependent impactees and impacts; comparison with competing proposals and similar past efforts; alignment of unique and valuable proposals with synergistic knowledge and ongoing efforts into solution-oriented portfolios; and 'apportionment' of limited resources according to optimum expected network-wide value creation.

Session Title: Qualitative Methods in Healthcare Settings
Multipaper Session 611 to be held in Wekiwa 7 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Qualitative Methods TIG and the Health Evaluation TIG
Chair(s):
Gregory Benjamin, Nemours Health and Prevention Services, gbenjami@nemours.org
Development of a Program Maturation Model Using a Multiple Case Study Approach
Presenter(s):
Rebecca Glover-Kudon, University of Georgia, gloverku@uga.edu
Amy DeGroff, Centers for Disease Control and Prevention, adegroff@cdc.gov
Jennifer Boehm, Centers for Disease Control and Prevention, jboehm@cdc.gov
Judith Preissle, University of Georgia, jude@uga.edu
Abstract: The Centers for Disease Control and Prevention (CDC) has funded five sites to implement colorectal cancer screening among uninsured, low-income populations. These five sites, together, comprise a demonstration project for which the aim is to examine various service delivery models in order to explore the feasibility of establishing a national colorectal cancer screening program. As part of a multi-method evaluation effort, a multiple case study is being conducted to understand and document program implementation processes over a three-year period. Anticipating the need to guide programmatic diffusion, the case study team conducted a cross-site qualitative analysis with the intention of describing hallmark characteristics, activities, successes, and challenges of programs at various stages of maturation. This paper presents a working theory of program development and maturation based on an evaluation of implementation processes. Findings will be used by CDC to develop and provide technical assistance to future colorectal cancer screening programs.
A Qualitative Evaluation Component of a Healthcare-based, Multi-year Quality Improvement Initiative
Presenter(s):
Gregory Benjamin, Nemours Health and Prevention Services, gbenjami@nemours.org
Vonna LC Drayton, Nemours Health and Prevention Services, vdrayton@nemours.org
Denise Hughes, Nemours Health and Prevention Services, dhughes@nemours.org
Abstract: In May 2007, Nemours Health and Prevention Services (NHPS) began a quality improvement initiative (QII) among a sample of primary care pediatric clinic sites (n=11) and school wellness centers (n=4) in Delaware. The purpose of this two-year initiative was to determine whether quality improvement strategies, education, and tools could improve primary care providers' care and outcomes related to childhood overweight. As part of the comprehensive evaluation, a series of focus groups (n=4) was conducted in April 2008 (Year 1) so that participating physicians and staff could provide NHPS with critical feedback on QII components, materials, and tools. In Year 2, a second round of focus groups (n=4) will be conducted by June 2009 to determine the sustainability and effectiveness of Year 1 changes to QII components. Results from both rounds of focus groups, along with implications for improving the QII, will be presented.

Session Title: Evaluating Systems Change in Mental Health and Addictions
Multipaper Session 612 to be held in Wekiwa 8 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Garrett Moran, Westat, garrettmoran@westat.com
Evaluation of Behavioral Health Service System Needs: Clinical Needs and Resource Allocation
Presenter(s):
Denine Northrup, Western New England College, d.northrup@comcast.net
Kenneth Marcus, Connecticut Department of Mental Health & Addiction Services, kenneth.marcus@po.state.ct.us
Abstract: A pragmatic, high-quality evaluation was undertaken in collaboration with the Connecticut Department of Mental Health and Addiction Services to provide policy decision-makers with timely information for assessing resource allocation based on the clinical needs of the population. Over 2500 client-level surveys pertaining to client characteristics, needs, and services were completed by primary clinicians. Results were used to plan effectively for client services and to redirect clients to the most appropriate services. The presentation will discuss the evaluation strategy and its implications in the context of the clinical, fiscal, and political circumstances. In addition, the state agency's perspective will be described, addressing the usefulness of the evaluation information as well as the facilitating factors and challenges.
Seeing the Forest and the Trees: Using Evaluation to Understand the Nuances of Mental Health System Change
Presenter(s):
Kraig Knudsen, Ohio Department of Mental Health, knudsen@mh.state.oh.us
Holly Setto, Ohio Department of Mental Health, settoh@mh.state.oh.us
Carol Carstens, Ohio Department of Mental Health, carstensc@mh.state.oh.us
Abstract: When evaluating system change, mixed methods may be used to study the effects of a grant or program implemented throughout a system of care, such as a state's mental health system. This presenter will discuss applying mixed methods to analyze the process and outcomes of Ohio's Mental Health Transformation State Incentive Grant (TSIG). The presenter will focus on the importance of using both qualitative and quantitative methods to capture the historical, political, and decision-making context in which mental health system transformation occurs, and how these factors influence project activities and outcomes. Further, using examples from Ohio's TSIG experience, the presenter will discuss how evaluation results can be useful and informative for policy-makers faced with making decisions in a dynamic, ever-changing environment.

Session Title: Climate Change and Avoided Deforestation: Challenges of Evaluation
Multipaper Session 613 to be held in Wekiwa 9 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Nancy MacPherson, Rockefeller Foundation, nmacpherson@rockfound.org
Discussant(s):
Aaron Zazueta, Global Environment Facility, azazueta@thegef.org
Abstract: The IPCC and the Stern Report have placed strong emphasis on reducing rates of deforestation and degradation (REDD) in tropical countries as a way to sequester carbon and mitigate climate change. Numerous schemes have emerged to pilot REDD in preparation for the COP 15 meetings to be held in Copenhagen in December 2009. The concurrent growth of Global Programs and Partnerships (GPPs) over the last decade to deliver Global Public Goods in the environment and various other sectors (e.g., health, agriculture, and finance, among others) has resulted in the development of internationally recognized evaluation principles and standards for assessing results. The papers in this session will discuss the particular challenges of evaluating forest carbon sequestration programs as a way to deliver the Global Public Good of climate change mitigation. The session will outline the steps needed to avoid past mistakes and to design an evaluation framework that achieves credible measurement of outcomes.
Carbon Sequestration From Avoided Deforestation as a Complex Global Public Good
Uma Lele, Independent Consultant, uma@umalele.org
Alain Karsenty, CIRAD, akarsenty@free.fr
Benjamin Singer, CIRAD, benjamin.singer@gmail.com
Carbon sequestration from Reducing Rates of Deforestation and Degradation (REDD) faces a variety of design, implementation, and evaluation issues: contested property rights over public forest lands in developing countries; competing pressures on land use from population growth, food and agriculture, poverty, urbanization, transport, and international trade in agriculture, forestry, and energy products; and governance issues of illegality, decentralization, and the voice of poor communities. The paper will illustrate issues of property rights, policies, and institutional choices as they relate to the measurement of baselines and subsequent changes in deforestation outcomes, as well as issues of attribution. It will contrast these challenges of delivering the public good of carbon sequestration through reduced deforestation with the challenges faced in other sectors, e.g., the control of communicable diseases, and will explore the implications and lessons.
Critical Choices in Reductions of Emissions From Deforestation and Degradation (REDD) Architecture, Their Design and Evaluation Implications
Alain Karsenty, CIRAD, akarsenty@free.fr
Benjamin Singer, CIRAD, benjamin.singer@gmail.com
There are competing REDD architectures and philosophies, including (a) market-based and centralized ("mainstream" approach); (b) international fund and centralized, with countries rewarded with money ("Brazilian proposal"); (c) market-based and decentralized, with certified projects receiving carbon credits directly, along with countries ("nested" approach); and (d) an international fund for financing (sectoral and extra-sectoral) policies and measures, together with country-wide Payments for Environmental Services (PES) schemes. Each faces different choices with regard to baselines and measurement, opportunity costs and benefits, issues of leakage, risks and their measurement, financing, and outcomes, and the challenges vary at the local, national, and global levels. These will be discussed in this paper, with implications for evaluation theory and practice.

Session Title: Evaluating Outreach to LGBT People in Diverse Contexts
Multipaper Session 614 to be held in Wekiwa 10 on Friday, Nov 13, 3:35 PM to 4:20 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
John Daws, University of Arizona, johndaws@email.arizona.edu
Examining the Lens:
Presenter(s):
Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
Kevin Seymour, Center of Science and Industry, kseymour@mail.cosi.org
Abstract: Evaluation is contextually based, and in free-choice (nonformal and informal) learning settings, the context is both the physical and the psychological context in which the learning occurs. For museums, including zoos, science centers, parks, and living history sites, visitors have tremendously complex relationships with both the physical and psychological contexts. One rarely examined dimension of the context is the sexual identity of the visitor. The few studies of visitors and sexual identity or sexual orientation indicate that the visit is experienced very differently than it is by the normative culture. Some studies have suggested that family and visit-group composition further complicates how a visitor experiences their time in a facility or a location. This paper explores how the sexual identity of the parent affects perceived comfort and welcome in a science center. Gay parents, lesbian parents, and hetero-normative parents were interviewed in separate groups after an invited visit to a science center. The study, conducted under an IRB from a partner university, took place during a normally busy visitation period to maximize the interaction of intact family groups with other visitors. The presentation will share the patterns of comfort and engagement in a variety of cultural and scientific institutions that emerged, as well as clear expressions of comfort and discomfort within and between groups.
Gay, Lesbian, Bisexual and Transgender (GLBT) Resources: A Preliminary Evaluation of a Burgeoning GLBT Program
Presenter(s):
Nicholas G Hoffman, Southern Illinois University at Carbondale, nghoff@siu.edu
Abstract: As the GLBT population becomes more visible, its needs and issues receive increasing attention, and universities across the nation are devoting more resources to addressing them. One such university recently opened the doors to an official GLBT resource center. The center has only just begun its work, but an assessment was conducted to help determine its future direction. The present evaluation set out to determine the needs, expectations, and visibility of this resource center as it began implementation. The experiences and evaluation strategies discussed here may be useful for evaluations of new programs, especially those dealing with sensitive subjects.
