2011

Session Title: The Value of Qualitative Data for Assessing Program Impact: When Qualitative and Quantitative Findings Tell Two Different Stories
Think Tank Session 701 to be held in Avalon A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Mixed Methods Evaluation TIG
Presenter(s):
L Shon Bunkley, Community Research Partners, sbunkley@researchpartners.org
Discussant(s):
Gary Timko, Community Research Partners, gtimko@researchpartners.org
Abstract: This 90-minute Think Tank session will engage attendees in discussions of the value of mixed methods in general and qualitative data in particular for assessing program impact. The findings of a 5-year evaluation of a teacher education program will provide background and contextual information as attendees discuss various aspects of using mixed methods to assess program impact. Discussions will center on aspects like: - How and to what extent qualitative data can provide credible evidence of the impact of a program, in the absence of strong quantitative evidence - How and to what extent qualitative data can provide credible evidence of the impact of a program in the face of the push for experimental and quasi-experimental research designs - How to handle conflicting qualitative and quantitative findings

Session Title: An Exposition of Values Associated with Cultural Competence in Evaluation.
Skill-Building Workshop 702 to be held in Avalon B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Arthur Hernandez, Texas A&M University Corpus Christi, art.hernandez@tamucc.edu
Abstract: Cultural competency is an essential part of professional practice. This session will consist of an interactive experience of values exploration and clarification. Participants of various backgrounds and perspectives will share their cultural frameworks by expressing value words and statements, which will be written on cards and organized by working groups with the intent of determining hierarchies and relationships. Participants will identify their various preferences/perspectives related to: 1) favorite or most often used theory of evaluation, 2) propensity for quantitative or qualitative methods, and 3) experience in cross- and multicultural settings, by selecting colored cards related to each. The working groups will be composed of participants with similar colors. The product of the session will be “concept maps” reflecting the various perspectives and enabling the exploration of rules and relationships among and between values, valuing, culture, and the practice of evaluation. These maps will be shared with the larger group and can serve as the basis for a monograph to which any interested participant can contribute.

Session Title: Integrated Monitoring, Evaluation, and Planning (IMEP): An Approach to Evaluating International Research and Systems Change
Panel Session 703 to be held in California A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Systems in Evaluation TIG and the International and Cross-cultural Evaluation TIG
Chair(s):
Jane Maland Cady, McKnight Foundation, jmalandcady@mcknight.org
Discussant(s):
Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
Abstract: In 2008, The McKnight Foundation began a new phase of its Collaborative Crop Research Program (CCRP). An evaluation design was intended to encourage cross-program coherence among 65 independent projects; to build local and regional capacity; to support communities of practice (Andes, Western Africa, East/Horn of Africa, Southern Africa); to encourage systems thinking in the areas of gender equality, agroecointensification, and sustainability; and to evaluate systemic improvements in nutrition and livelihood arising from basic crop research. Integrated Monitoring, Evaluation, and Planning (IMEP) was the result. The three panel members provide perspectives on the design, development, and implementation of this approach. The funder (McKnight Foundation) discusses the drivers and challenges of an integrated M&E approach. The designers (Human Systems Dynamics Institute) discuss the principles and practices of a dialogue-based, systemic evaluation. The implementers in the field (Regional M&E Support) describe the process of introducing IMEP to project teams that vary in culture, locale, scientific discipline, and readiness.
Funding (and Evaluating) Complex Systems Change
Rebecca Nelson, McKnight Foundation, rjn7@cornell.edu
Jane Maland Cady, McKnight Foundation, jmalandcady@mcknight.org
Funders supporting social innovation wrestle to find evaluation approaches that support the emergent nature of those innovations while also responding to the practical demands of grant compliance. Increasingly, global efforts, layered on top of foundation values, add to the complexity of finding effective and appropriate evaluation approaches. In this session, we will examine the ways in which the grantmaking process can be infused with a comprehensive perspective using systems concepts, iterative learning, case-appropriate evaluation methods, and capacity building across countries, sectors, and disciplines. This presentation will also describe how the integration of monitoring, evaluation, and planning fits into the McKnight Foundation's overall evaluation strategy.
Designing an Evaluation Framework for a Complex, Global Agricultural Research Initiative
Marah Moore, I2I Institute, marah@i2i-institute.com
Glenda Eoyang, Human Systems Dynamics Institute, geoyang@hsdinstitute.org
As a place-based, global agricultural research initiative, the Collaborative Crop Research Program (CCRP) is complex in its functions, its relationships, its points of influence, its capacities, its accountability systems, and its needs related to evaluation. The IMEP design is a response to these complexities, attempting to strike a balance across multiple evaluation purposes: documenting outcomes; increasing understanding of the contextual influences in place-based research; supporting adaptive practice; guiding capacity building; and providing a framework for accountability at multiple levels. At the center of these is the overarching purpose of learning. IMEP is committed to a paradigm shift in international development M&E: a move from external monitoring and summative evaluation focused on accountability to an integrated, iterative, learning-based system that supports high-quality evaluation practice. This presentation discusses our process of developing IMEP, the challenges and successes we have encountered along the way, and what we have learned in the process.
Evaluation for Learning: CCRP Implementation Opportunities and Challenges
Claire Nicklin, McKnight Foundation, clairenicklin@gmail.com
Adiza Lamien Ouando, Independent Consultant, azouando@yahoo.fr
Carolyne Nombo Mtoni, Sokoine University, cnombo@yahoo.com
Kemigisa Margaret, , kemmargaret@yahoo.com
To the extent that projects and institutions in the field have worked with M&E, it usually falls into three categories: donor-driven mechanisms for accountability, donor-driven external evaluations, and/or participative processes with stakeholders that rely on anecdotal evidence and testimonials. The CCRP approach, by contrast, emphasizes internal learning and flexibility based on solid data. Another challenge for the implementation of the IMEP system is that participating organizations are usually geared toward biophysical research and are unfamiliar with social and qualitative methods, or are mostly dedicated to development activities and are unfamiliar with rigorous evaluation methods. The IMEP support endeavors to use various tools and methods of iterative dialogue to provide insight into the relevance and quality of project activities and thereby help facilitate the creation of flexible frameworks, in the form of theories of change, monitoring and evaluation plans, and work plans, which take into account the leverage points for change in the agricultural systems and institutions that the projects are researching. The IMEP support also helps to aggregate the projects at a regional level, both to measure impact and to see how the synergies and collaboration on which the Collaborative Crop Research Program is based create portfolios that are more powerful than the individual parts, a process that is replicated at the program level.

Session Title: Working With Sensitive Data and Challenging Settings for Data Collection
Multipaper Session 704 to be held in California B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Jenica Huddleston, Deloitte Consulting, jenicahuddleston@gmail.com
Dealing with Sensitive Data in Community-based Organization Settings
Presenter(s):
Caroline Taggart, Ciurczak & Company Inc, caroline@ciurczak.net
Jessica Weitzel, Ciurczak & Company Inc, jessica@ciurczak.net
Abstract: Administering adolescent risk behavior surveys in community-based organizations (CBOs) presents unique challenges. While schools offer a relatively structured environment, CBOs may have youth arriving at staggered and unpredictable times, experience high staff turnover, and serve different youth each day. As the evaluators for a Community-Based Abstinence Education grant awarded to a network of local Boys and Girls Clubs (BGC), we worked with BGC staff to establish straightforward procedures for administering pre- and posttest surveys. The procedures were established to ensure data confidentiality and veracity and to track respondents for the one-year posttest survey in the somewhat unpredictable CBO environment. We will discuss the challenges faced in ensuring that all sites and staff members followed these procedures, and how we addressed situations in which this was not the case. We will also engage the audience in discussion on alternative ways to gather such information in CBOs and similar settings.
Healthy Eating and Active Living Survey Data Collection in the School Setting from Students - Response Rates, Data Quality, and Lessons Learned
Presenter(s):
Flora Stephenson, Alberta Health Services, flora.stephenson@albertahealthservices.ca
Abstract: Survey data collection is routinely used to solicit information from stakeholders. This session will share the process findings from a non-government-mandated student survey at four Canadian school jurisdictions. Since data collection approaches were determined by the jurisdictions, different survey media (paper and online) were used and different levels of facilitation for data collection were received. School jurisdictions that chose the paper medium had good response rates, as did the school jurisdiction with in-kind support to facilitate online survey data collection at the schools. One school jurisdiction chose the online medium and was not able to provide in-kind support, which resulted in a poor response rate and poor data quality. The findings suggest that support from school stakeholders to facilitate survey data collection, not the survey medium, was the key factor for good response rates and data quality. The session will outline future directions and ideas for student survey data collection.
Long-Term Follow-up to Evaluate Effectiveness of Parent-Child Sexual Health Communication Program Among Hispanics
Presenter(s):
Sheetal Malhotra, Medical Institute, smalhotra@medinstitute.org
Abstract: Several programs educate parents on sexual health communication with their children. However, there is not much information on the long-term effectiveness of such programs. Furthermore, cultural barriers among Hispanics deter continued parent-child communication on sex. Methods: A Spanish-language curriculum showed improved knowledge and skills in Hispanic parents in border communities. Follow-up data were obtained to assess retention of knowledge and skills as well as communication frequency and behaviors. Objectives: The follow-up assessment objectives were to 1) test retention of parent knowledge and 2) assess frequency and behaviors of parent-child communication on sexual health issues. Results: 174 of 263 (66%) parenting adults responded to follow-up phone surveys 12-24 months after completion of the parent program. Data revealed significant retention of knowledge of risk factors for teen pregnancy, STIs, and dating violence. A majority of parents reported continued comfort and frequent communication on sexual health topics with their children.

Session Title: Valuing Evaluation Intersections: Building a Stronger Transdiscipline
Panel Session 705 to be held in Pacific A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Theories of Evaluation TIG and the Presidential Strand
Chair(s):
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
Discussant(s):
Robin Lin Miller, Michigan State University, mill1493@msu.edu
Abstract: Evaluation has been characterized as a transdiscipline with a clear definition, subject matter, logical structure, and multiple fields of application. Its transdisciplinary nature is notable because it provides essential tools for other disciplines, while retaining an autonomous structure and research effort of its own. The purpose of this panel is to explore the importance and value that evaluation contributes to related disciplines, and the potential value that disciplines such as sociology, law, social psychology, economics, policy analysis, and organizational behavior can add to the improvement of evaluation theory and practice. It will be argued that valuing and expanding the intersections of evaluation with related disciplines holds great promise for building a stronger transdiscipline and evaluation profession.
The Intersection of Program Evaluation With Sibling and Ancestral Disciplines
Michael Scriven, Claremont Graduate University, mjscriv1@gmail.com
Michael Scriven will discuss the ‘sibling links’ between program evaluation and: (i) the other subdisciplines of evaluation—product evaluation, personnel evaluation, performance evaluation, etc.; and (ii) the ‘ancestor links’ to the two Elder Disciplines of evaluation—Ethics and Logic, each of which, like the siblings, consists largely of a particular application field of evaluation (normative ethics evaluates human behavior and attitudes in terms of an ethical framework, and logic evaluates arguments for validity, in everyday talk or in technical fields, e.g., law and much of pure and applied science). The emphasis will be on the ways in which understanding of the shared core—the ‘logic of evaluation’—which has been developed largely by people working in program evaluation, can help the other areas, and the ways in which the other sub-disciplines, and the Elders, can inform each other about useful tools for evaluation, e.g., as product evaluation taught program evaluation that knowledge of designer or funder goals is not essential for doing evaluation (though [degree of] goal achievement should usually be noted in a comprehensive evaluation report).
Social Psychology and Evaluation
Melvin Mark, Pennsylvania State University, m5m@psu.edu
Social psychology has an interesting and important linkage to evaluation. In the Great Society growth spurt in evaluation, psychological and sociological social psychologists, such as Donald Campbell and Peter Rossi, respectively, were among the thought leaders in the field. This presentation, after a brief historical review, examines the current status of the relationship between social psychology and evaluation. Social psychology is a major source of program theory, and the occasional source of guidance for the practice challenges that evaluators face. The potential future of the social psychology-evaluation relationship is also addressed, including suggestions for more mutually beneficial linkages. Among the potential future directions noted is the possibility of social psychology providing a richer set of alternative value bases for evaluation theory and practice.
Versions 1.0 and 2.0 of Policy Analysis and Evaluation
Robert Klitgaard, Claremont Graduate University, robert.klitgaard@cgu.edu
Evaluation shares with policy analysis the use of tools from economics, statistics, and mathematical modeling. They share something else: disappointment with use and impact. Decision makers and institutions don't use evaluation or policy analysis as often as practitioners would like. But examples of high-impact evaluation and policy analysis suggest that we must supplement the economics, statistics, and modeling with what we might call "convening." This paper hypothesizes some shared, key features of versions 2.0 of evaluation and policy analysis.
Organizational Behavior and Evaluation
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
The multidisciplinary field of organizational behavior (OB) draws on disciplines such as sociology, psychology, communication, and management to understand organizational dynamics and performance at multiple levels of analysis. The application of OB knowledge to improve organizational effectiveness and the quality of work life, known as organizational development, shares common interests and values with contemporary evaluation practice. For example, they share the desire to pursue the rigorous and systematic development of theory-driven and evidence-based programs and policies to prevent and ameliorate a wide variety of social and organizational problems. This presentation will examine the current intersection of OB and evaluation, and suggest strategies to expand this intersection in a way that improves OB and evaluation theory, research, and application.

Session Title: Strategically Planning for Evaluation in Statewide Programs: Opportunities and Drawbacks
Panel Session 706 to be held in Pacific B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Evaluation Policy TIG
Chair(s):
Paul Garbe, Centers for Disease Control and Prevention, plg2@cdc.gov
Discussant(s):
Hallie Preskill, FSG Social Impact Consultants, hallie.preskill@fsg.org
Abstract: Since 2009, the National Asthma Control Program at the Centers for Disease Control and Prevention has required funded state asthma programs to develop strategic evaluation plans that encompass multi-year evaluation portfolios. These strategic plans are intended to help programs plan, implement, and use evaluation to build program and evaluation capacity sequentially and systematically. Well conceptualized evaluations can support program improvement and maximize value from limited evaluation resources. If developed through a strategic process, planning for evaluation is also a key factor in building evaluation capacity. In this session we explore our approach to applying strategy to development of evaluation portfolios. During this panel, presenters will orient attendees to the rationale behind this new strategy, describe what makes strategic evaluation planning unique, share information about the strengths and weaknesses of this process, and offer examples of how strategic evaluation plans can facilitate comprehensive evaluation efforts in large programs.
Strategic Evaluation Planning: Guiding Federal Grantees Through Unchartered Waters
Carlyn Orians, Battelle Centers for Public Health Research and Evaluation, orians@battelle.org
Maureen Wilce, Centers for Disease Control and Prevention, muw9@cdc.gov
Leslie Fierro, Claremont Graduate University, Leslie.Fierro@cgu.edu
Shyanika Rose, Battelle Centers for Public Health Research and Evaluation, rosesw@battelle.org
Joanne Abed, Battelle Centers for Public Health Research and Evaluation, abedj@battelle.org
Linda Winges, Battelle Centers for Public Health Research and Evaluation, winges@battelle.org
Strategic evaluation planning holds great potential for comprehensively examining the performance of large, multi-faceted social service programs over time. In this presentation we will describe what strategic evaluation planning is and how it differs from traditional evaluation planning. The process developed by CDC's Air Pollution and Respiratory Health Branch to create strategic evaluation plans within state asthma programs will be shared in detail, along with our hopes for how the resulting strategic evaluation plans may lead to more effective public health program management. By describing the original expectations for strategic evaluation planning, we will set the stage for other presenters on the panel to discuss the lessons learned to date in implementing the processes described.
Planning it out: Successes and Struggles in Creating Quality Strategic Evaluation Plans
Leslie Fierro, Claremont Graduate University, Leslie.Fierro@cgu.edu
Sarah Gill, Centers for Disease Control and Prevention, sgill@cdc.gov
Robin Shrestha-Kuwahara, Centers for Disease Control and Prevention, rbk5@cdc.gov
During their first year of a five-year cooperative agreement, state asthma programs that are Addressing Asthma from a Public Health Perspective were asked to articulate their vision for implementing multiple evaluations of their programmatic processes and outcomes. The strategic evaluation planning process and the final products submitted by state asthma programs varied substantially. In this presentation we will share findings from a systematic review of the content and quality of strategic evaluation plans submitted by state asthma programs. These findings will be interpreted in light of insights about the strategic evaluation planning process provided to the presenters through interviews conducted with a subset of state asthma program representatives.
Making It Happen: Reflections From States on Implementing Strategic Evaluation Plans
Sheri Disler, Centers for Disease Control and Prevention, sdisler@cdc.gov
Amanda Savage Brown, Centers for Disease Control and Prevention, abrown2@cdc.gov
As of November 2011, state asthma programs Addressing Asthma from a Public Health Perspective will be entering their third year of a five-year cooperative agreement lifecycle. In their second year of programmatic funding, states began implementing their strategic evaluation plans, and this presentation will describe the breadth of evaluations implemented during this second year of funding. The presenters will use examples of how states successfully used timelines and capacity building strategies as tools to move from their strategic evaluation plans to creating and implementing plans focusing on specific evaluations. Challenges and barriers encountered by state programs in transitioning to the implementation process will also be shared, along with solutions. Examples of how states revisited the strategic evaluation planning process based on their experiences will also be highlighted.

Session Title: Understanding Alternative Economic-based Approaches to Valuation
Panel Session 707 to be held in Pacific C on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG and the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Discussant(s):
Michael Scriven, Claremont Graduate University, mjscriv1@gmail.com
Abstract: Economic-based approaches to judging the value of public policies and programs are becoming the standard by which other approaches to valuation are assessed. These quantitative approaches offer many insights but are also controversial. This session will offer three views of this approach to valuation, along with a discussant.
Valuing Program Outcomes Relative to Program Costs to Improve Both: Alternative Forms of Cost-Inclusive Evaluation
Brian Yates, American University, brian.yates@mac.com
"Value" can be understood as referring not only to what the outcomes of a good or service are likely to be, but also to how those outcomes compare to what we anticipate sacrificing to achieve those outcomes. This form of evaluation compares the worth of outcomes to the worth of resources expended to produce those outcomes. Alternative means of assessing the worth of both outcomes achieved and resources consumed are described in the first part of this presentation, using a few specific evaluations of human services. The second portion of this presentation uses an additional example to illustrate an evaluation framework that includes the activities in which program participants engage, and the changes that occur in participants, as well as resources used and outcomes produced. This more comprehensive valuing process also empowers better decision-making by incorporating information from clients and community members as well as from providers and funders.
Lessons from Valuing in Resource, Environmental and Conservation Settings
Andy Rowe, GHK International, andy.rowe@earthlink.net
Resource, environmental and conservation (REC) settings feature a two-system evaluand where both the human and natural systems must be explicitly included in evaluation. This provides a useful perspective to consider some of the implicit underlying premises of valuing as part of summative evaluations. This paper reviews some of these including: - Whose values get counted? Cultures often have very different uses for resources and these differences in utility can lead to radical differences in how cultures value resources. Value is fundamentally a human construct; does evaluation need to consider values important primarily to the natural system such as connectivity and resilience, position in a food chain, etc.? - What can we learn from emerging approaches in conservation such as ecosystem services and natural capital that is useful for valuing in evaluation? - How do we incorporate temporal and spatial boundaries relevant to the natural system when evaluation values largely from within the human system and is timed to address the temporal and spatial needs of the human system?
Reconciling Conflicting Views of Benefit-Cost Analysis in Service of the Public Interest
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Benefit-cost analysis began as a tool for serving a utilitarian view of the public interest. As confidence in the Kaldor-Hicks logic has faded in economics, other foundations of benefit-cost analyses have been developed. This presentation will help orient evaluators to the challenges and controversies in this area.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Managing Program Evaluation: The Continued Invisibility of a Core Practice
Roundtable Presentation 708 to be held in Conference Room 1 on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Don Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov
Michael Baizerman, University of Minnesota, mbaizerm@umn.edu
Abstract: Our 2009 New Directions for Evaluation volume (Compton & Baizerman, Eds., No. 121, Spring) pointed out that there was then a very small evaluation literature on the everyday practice of managing evaluation. Missing too was a theoretical and conceptual evaluation literature about the mundane work of managing professional evaluators ('knowledge workers'). This drought continues. Our text called for discussion by the field of whether managing evaluation was an accepted professional practice, and if so, what a managing career might look like and what preparation for this work and career might entail. We were met by silence. To challenge this drought and silence, we again propose a Roundtable to discuss these and related practice issues as a way to keep focus on this work and its importance to the evaluation field. It is expected that participants will again provide examples that will enrich understanding of this work, while also serving to maintain focus on and interest in the topic of managing evaluation.
Roundtable Rotation II: Assessing Outcomes of and Improving Evaluation Through Client Feedback
Roundtable Presentation 708 to be held in Conference Room 1 on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Stacey Farber, Cincinnati Children's Hospital Medical Center, stacey.farber@cchmc.org
Janet Matulis, Cincinnati Children's Hospital Medical Center, janet.matulis@cchmc.org
John Murphy, Cincinnati Children's Hospital Medical Center, john.murphy@cchmc.org
Abstract: Client feedback is often sought to better understand satisfaction, service effectiveness, and evaluator competence or performance. However, 'reaction'-type data are generally not sufficient for communicating the benefits of evaluation and the return clients (or organization) get for investing in evaluation. Using frameworks that are typical to the evaluation of education and training (Kirkpatrick, 1975; Phillips and Phillips, 2007), our evaluation team designed a client feedback form that taps reaction-type information and higher-level outcomes, such as learning, application (evaluation use), impact (outcomes due to use), and value (monetized impact). We aver that it is here in 'outcomes' that value comes from evaluation. Our evaluation unit will share its client feedback form, process for administration, and data use for communication and business improvement. Attendees will be asked to share thoughts about their efforts to solicit and use client feedback for business improvement, communicating effectiveness and outcomes, and advertising for new business.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Using Values to Focus the Evaluation of a Dynamic, Multi-stakeholder Prevention System
Roundtable Presentation 709 to be held in Conference Room 12 on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Kristen Donovan, Center for Community Research, kdonovan@ccrconsulting.org
Lisa Garbrecht, Center for Community Research, lgarbrecht@ccrconsulting.org
Shanelle Boyle, Center for Community Research, sboyle@ccrconsulting.org
Erica Pachmann, Center for Community Research, epachmann@ccrconsulting.org
Abstract: Given a finite set of resources, it can be challenging to design and implement a quality evaluation of a dynamic, multi-stakeholder system encompassing regional, initiative, and countywide tiers. Valuing plays a critical role in determining which components of an overall system to focus on regarding evaluation design, methodology, and utilization. This roundtable will discuss challenges associated with assigning value in a complex prevention system located across San Diego County and consisting of multiple programs. Presenters will share their experiences, strategies, and lessons learned about how valuing was carried out during the planning and implementation phases of a countywide multi-initiative alcohol and other drug (AOD) prevention system evaluation project. Attendees will also have the opportunity to share their challenges, experiences, and effective strategies for prioritizing and incorporating values into their evaluations of dynamic, multi-stakeholder systems.
Roundtable Rotation II: The Recovery Oriented Systems of Care: A New Direction in Behavioral Health
Roundtable Presentation 709 to be held in Conference Room 12 on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Gisele Tchamba, Western Michigan University, gisele.tchamba@wmich.edu
Abstract: The Recovery Oriented Systems of Care (ROSC) is in part an ecological systems theory that demonstrates comprehensive ways to meet the needs of individuals and families seeking alcohol, drug, and mental health services. When compared to its alternative, the Acute Care (AC) model, the ROSC provides long-term recovery outcomes, requires the participation of all stakeholders, and facilitates dialogue among service providers and clients. How this relatively new conceptual framework can be used in evaluation research is still unclear to evaluators. The knowledge derived from this study will aid in understanding the ROSC and its feasibility for practice. This paper addresses the following questions: 1) What is the ROSC model? 2) How can its use improve the delivery of behavioral health services? 3) How is it relevant to evaluation and research? The author intends to use an instrumental case study to explore the extent to which the model has been implemented as designed.

Session Title: Cost Analysis in International Development Evaluation: Working With Imperfect Data
Skill-Building Workshop 710 to be held in Conference Room 13 on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Keri Culver, Management Systems International, kculver@msi-inc.com
Abstract: International development projects today are more likely than ever to include a request for cost analysis as part of program evaluation. Though international development funders sometimes lack vital inputs for successful cost analysis, and may not even understand what is needed, these sponsors still expect to see cost analysis in final reports. Some of the challenges include missing or unclear budget data, funding from multiple donors, packaged reforms with multiple interventions, and uncertain beneficiary numbers. How can analyses be done under these kinds of circumstances? This skills workshop will include fun case studies (adapted from real evaluation data) for participants, helpful tools for cost analyses, and strategies for making the most of the data in hand. In a very concrete sense, evaluators must find ways to engage their partners to "value" the work undertaken, and to provide analyses of cost data that help in future decision-making.

Session Title: The Rigor and Practice of Dots, Stickers, and Labels as an Evaluation Method
Skill-Building Workshop 711 to be held in Conference Room 14 on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Lyn Paleo, First 5 Contra Costa, lpaleo@firstfivecc.org
Denece Dodson, First 5 Contra Costa, ddodson@firstfivecc.org
Abstract: Many health and social programs that we evaluate are intended for populations that are put off by self-administered written questionnaires. Often, these questionnaires can be re-formatted using dots, stickers, and labels to appeal to groups that are resistant to "tests". Many think of using dots only for priority-setting activities; however, this method can substitute for questionnaires that assess individual or group opinions, experiences, and even knowledge gains. I have used this "sticky" method in settings such as a homeless shelter, a program for expelled middle-school students, and a home visiting program for at-risk new mothers. This workshop will cover the "ground rules" of the sticky method through the exploration of real-world examples and discuss the adaptations workshop participants can make for their own work. What about validity? The workshop will present ways to consider the validity of results obtained through this method from the perspectives of both conventional and participatory evaluation.

Session Title: From the Dusty Corner to the Corner Office: How to Bring Evaluation to Life
Demonstration Session 712 to be held in Avila A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Evaluation Use TIG
Presenter(s):
Rick Groves, Mission Measurement, rgoves@missionmeasurement.com
Melodie Wyttenbach, NativityMiguel Network of Schools, mwyttenbach@nativitymiguel.org
Abstract: The NativityMiguel Network of Schools was at a crossroads. Despite stacks of research and personal experience showing that its holistic model of Catholic middle-school education changed lives, the Network struggled to identify and prioritize its highest-impact services and to understand and communicate that impact to its board and potential funders. Furthermore, the Network lacked a cohesive vision for the future of the organization that would serve as the foundation for growth. The Network turned to evaluation as a path to success. Hear how NativityMiguel used evaluation to define its mission, guide its programming, change the organizational culture, and position it for growth. Mission Measurement will share the approach they used, including stakeholder engagement, collaborative workshops, and benchmarking, to develop NativityMiguel’s plan for continuous evaluation. Melodie Wyttenbach will share the nuts and bolts of implementation, from the challenges of staffing, budgeting, and buy-in, to the success of achieving higher-value outcomes.

Session Title: Building Networks on Sturdy Ground Through Evaluation Support
Panel Session 713 to be held in Avila B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Nancy MacPherson, The Rockefeller Foundation, nmacpherson@rockfound.org
Abstract: Building networks for social change isn't optional anymore. In a complex issue and policy environment, networks are necessary to bridge expertise, resources, ideas, and energy, and to extend learning and replication. Foundations increasingly see the value of developing and/or supporting networks, but face a difficult task in transcending power differentials and the lure of funding to create genuine and sustainable networks. Evaluation and related principles offer foundations a unique way to navigate network development. This session will highlight recent efforts by the Rockefeller Foundation to build and sustain networks addressing topics such as transportation, climate adaptation in the US, and climate resilience in South-East Asia. In each of these cases, Rockefeller used an evaluator to work alongside the network throughout its development, not only to keep tabs on progress but to inform network development. This approach increased engagement of network participants and laid the groundwork for sound and evaluable networks.
The Role of Evaluation in The Rockefeller Foundation's Vision for and Development of Networks
Nancy MacPherson, The Rockefeller Foundation, nmacpherson@rockfound.org
Ms. MacPherson will discuss the Foundation's evaluation strategy with regard to networks and why it chose to use evaluators both as evaluators and as substance experts for process development. She will articulate how the process was viewed by senior staff and program officers, and from her own perspective overseeing evaluation.
Using Evaluation Principles in Network Development: Evaluator's Perspective
Jared Raynor, TCC Group, jraynor@tccgrp.com
Mr. Raynor, one of the evaluators contracted to help with network development, will discuss his experience using evaluation to do program development. He will share lessons learned from the experience, including how to gather and share information with diverse stakeholders, how to manage varying stakeholder expectations and ongoing learnings about developing and evaluating networks. He will be building on his previous work around assessing network capacity, network member capacity and network outcomes.
Using Evaluation Principles in Network Development: Network Perspective
Steve Adams, The Resource Innovation Group, steve@trig-cli.org
Mr. Adams will present on his experience building the CPLAN network with the input/assistance of an evaluator. CPLAN is a start-up network aimed at bridging disciplines within the climate adaptation field in the United States. He will describe where the evaluator provided the most value-added to the process and where he might have impeded progress or raised uncomfortable questions. Further, he will share any recommendations for how other networks can best work with evaluators and vice-versa.
Using Evaluation Principles in Network Development: Network Perspective
Stefan Nachuk, The Rockefeller Foundation, snachuk@rockfound.org
Mr. Nachuk will also present on his experience building the ACCCRN network with the input/assistance of an evaluator. The Asian Cities Climate Change Resilience Network (ACCCRN) is an emerging network of cities in South-East Asia aiming to facilitate relationships, information sharing and collective action across national boundaries, languages and political interests. He will describe where the evaluator provided the most value-added to the process and where he might have impeded progress or raised uncomfortable questions. Further, he will share any recommendations for how other networks can best work with evaluators and vice-versa.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Community Capacity Assessment: The Role of Evaluation in Creating Conditions for Continuous Learning and Accountability for Results
Roundtable Presentation 714 to be held in Balboa A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Mary Achatz, Westat, maryachatz@westat.com
Abstract: This roundtable session presents the approach taken to the assessment of capacity to develop, implement and sustain powerful family and community strengthening strategies and tactics in Making Connections, the Annie E. Casey Foundation's multi-site demonstration. The presentation will address three key questions: 1) What does it take to create and sustain the depth and breadth of change needed to achieve durable results for significant numbers of families and children in neighborhoods with enormous challenges? 2) What role can data and evaluation play in building or building upon these capacities in communities and testing their effectiveness? And 3) How can evaluation be used as a tool for continuous learning, and strategic management of complex change processes? Discussion will focus on the explicit and implicit values embedded in the approach.
Roundtable Rotation II: Valuing Local Data and Evaluation Capacity: Lessons from the Annie E Casey Foundation's Making Connections Initiative
Roundtable Presentation 714 to be held in Balboa A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Scott Hebert, Independent Consultant, shebert@sustainedimpact.com
Tom Kelly, Annie E Casey Foundation, tkelly@aecf.org
Abstract: As part of its ten-year Making Connections initiative, the Annie E. Casey Foundation (AECF) placed significant emphasis on creating local data and evaluation capacity in the initiative sites. To support the creation of this capacity, AECF used a variety of strategies, including: establishment of 'Local Learning Partnerships'; development of standardized data collection instruments and tracking formats; 'evaluation dress rehearsals'; modeling of effective use of data in cross-site convenings; and on-going site-specific technical assistance from 'evaluation liaisons'. In 2011, telephone interviews were conducted with each site to determine what data and evaluation capacities had been established, and which AECF strategies and supports were most helpful in this process. Perhaps more important, the telephone interviews also explored which capacities would be sustained by the sites after the AECF funding ended, and why the sites valued these capacities. The roundtable will examine the findings from this research and its implications for other initiatives.

Session Title: The Twenty-year Journey of Evaluation Within a Foundation: The Colorado Trust
Panel Session 715 to be held in Balboa C on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Nancy Csuti, The Colorado Trust, nancy@coloradotrust.org
Discussant(s):
Patricia Patrizi, Patrizi Associates, patti@patriziassociates.com
Abstract: From the early days of evaluative questioning in the early 1990s, through multiple board and CEO transitions, experimentation with varying grantmaking styles, changing visions, and shifting roles of evaluation within the foundation, the underlying premise of evaluation for learning has remained steadfast at The Colorado Trust. This session calls on the combined knowledge of the four primary evaluation department staff at The Colorado Trust over the past two decades. Starting with the formation of the evaluation department and attempts to integrate learning across the organization, through the present day, this session will highlight and discuss the milestones reached in integrating evaluation into a foundation's work. The challenges faced over the twenty years, and the drivers for authentic evaluation-program integration, mirror findings from the work of Patrizi Associates. This session will tie the real-life experiences of the foundation into the framework identified by the Patrizi Associates research.
The Journey Begins: Deciding What Evaluation Means for the Trust
Doug Easterling, Wake Forest University, dveaster@wfubmc.edu
After experimenting with responsive grantmaking for the first 5 years of its existence, the Board of the foundation made two crucial decisions in 1991: a) strategic initiatives would provide the framework for all funding, and b) each initiative would be evaluated. The board agreed to invest 10% of each initiative's budget in evaluation and to hire evaluators on staff. At the same time, the board had only a vague view of what evaluation would accomplish: "We need to know if we're making a difference." This session will describe the process through which the evaluation function took shape during Doug Easterling's tenure from 1992-1999. Evaluation evolved from a pure "test of effectiveness" approach (with the evaluator working from an external vantage point) to an approach that balanced summative and formative evaluation. At the same time, the evaluation director became responsible for facilitating a process of organizational learning within the foundation. Some of these learning attempts were more productive than others, depending in large part on the level of trust between staff. Over time, this process brought to the surface the question of how much influence the evaluation staff should have in the design and management of grantmaking. A larger set of lessons learned will be presented during the panel session.
The Journey Continues: The Rise and Fall of the Learning Organization
Nancy Csuti, The Colorado Trust, nancy@coloradotrust.org
When Doug Easterling (presenter #1) left the foundation, the evaluation unit was moved under the direction of the VP of Programs. This move, intended to result in more opportunities for meaningful use of evaluation both in program planning and in assessment of impact, instead resulted in many of the experiences reported in the Patrizi research: a reduced role of evaluation in planning, less reporting to the board, less dissemination of "negative" findings, and so on. While the foundation's investment in evaluation continued to grow, the purpose of the evaluations changed and learning as an intended outcome of evaluation was all but abandoned. The role of foundation leadership was critical in setting this path for evaluation during these years, as it was for changing the path back to one where learning was valued.
Haven't We Been Here Before?: The Cycle Begins Again
Tanya Beer, Center for Evaluation Innovation, tbeer@evaluationinnovation.org
The arrival of a new CEO triggered a fresh focus on evaluation, as she invited the evaluation department to focus once again on evaluation utilization and organizational learning. The same questions emerged among staff about how much influence evaluation should have on programmatic decisions, and the residue of tension between evaluation and programs made the transformation slow and contentious. Although staff turnover and the verbal support of the CEO relieved some of this tension, the evaluation staff had to test a variety of approaches, with varying degrees of success, to try to develop a cohesive evaluation philosophy and approach for the organization. The dilemmas facing evaluation staff (and consultants) during this period seem to be common across the evaluation-focused philanthropic sector, based on the work of Patrizi and others. Tanya, who served in the evaluation department during the new CEO's tenure, will share the key questions The Trust had to tackle, including: How can leadership support for evaluation be operationalized so that it's more than a public promise? What kinds of processes and incentives need to be in place to help transform the relationship between program strategy and evaluation? What organizational capacities are required to build effective feedback loops? How does a new learning-oriented approach to evaluation change a funder's relationship with grantees and with external evaluation contractors?
Looking Forward With "Fresh Eyes:" The Past Informing the Present Informing the Future
Phillip Chung, Colorado Trust, phillip@coloradotrust.org
As the most recent member of the evaluation staff (October 2010), Phil will provide a "fresh eyes" perspective to the Trust's journey to implement an "evaluation for learning" approach within the foundation. He will discuss how The Trust's past efforts have informed their current approach to evaluation and organizational learning, with specific examples that illustrate the "fits and starts" he has witnessed in his short tenure. Furthermore, Phil will describe his experience - the day-to-day issues, challenges and lessons learned - in how the foundation oriented and supported a new evaluation staff member to implement such an approach. Finally, he will share ideas on what the role of evaluation at The Colorado Trust looks like going forward.

Session Title: The Impact of Evaluator Relationships on Evaluation Capacity Building
Multipaper Session 716 to be held in Capistrano A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Angela Moore, Centers for Disease Control and Prevention, cyq6@cdc.gov
Examining Critical Episodes of a Developmental Evaluation: Unpacking the Progressions Involved in Relationship Building and Capacity Development
Presenter(s):
Cheryl Poth, University of Alberta, cpoth@ualberta.ca
Kathy Howery, University of Alberta, khowery@ualberta.ca
Dorothy Pinto, University of Alberta, dpp.pinto@ualberta.net
Abstract: Using a two-year developmental evaluation of a technology-focused professional development initiative as a case study, this paper describes the project team's progression with respect to relationship- and capacity-development over the course of planning and implementing the initiative. Innovative professional development initiatives require mechanisms to monitor and respond to learners in order to maintain relevance both for individual members of the BlackGold school district and for the organization as a whole. Our current understandings suggest that a developmental approach offers such a mechanism (e.g., Patton, 2011). However, studies have yet to examine specifically how evaluators build relationships and develop organizational capacity to sustain such mechanisms. This paper reports on five critical episodes, drawn from the analysis of multiple sources of data, in which team members identified shifts in their roles and relationships. Implications for future consideration are discussed, including the potential of developmental evaluation to support the development and revision of professional learning opportunities.
Empowering Community-based Organizations in Evaluation - Findings from the Demonstrating Value Initiative
Presenter(s):
Bryn Sadownik, Vancity Community Foundation, bryn_sadownik@vancity.com
Abstract: This paper will discuss the findings of the Demonstrating Value Initiative (www.demonstratingvalue.org), a Canadian project to address challenges in reporting and evaluation in the social enterprise sector, and to ultimately move the sector towards improved stakeholder accountability, and better sharing and communication of innovative practices, learning and social value creation. This project brought over 30 funders and community-based organizations together over two years to develop a framework for improvement, and has since led to a non-profit capacity building program based at Vancity Community Foundation. This will be of interest to attendees who are interested in rationalizing funder reporting demands and in empowering community organizations to use evaluation and reporting as a means to attract support and investment, and to improve operational and strategic decision making.
Building Evaluation Capacity of Community-based Substance Abuse Prevention Programs in the United States Pacific Jurisdictions: Practical Implications for Work with Indigenous Populations
Presenter(s):
Alyssa O'Hair, Center for the Application of Prevention Technologies, aohair@casat.org
Eric Albers, Center for the Application of Prevention Technologies, ealbers@casat.org
Eric Ohlson, Center for the Application of Prevention Technologies, eohlson@casat.org
Anu Sharma, Center for the Application of Prevention Technologies, asharma360@yahoo.com
Wilhelm Maui, Center for the Application of Prevention Technologies, wil.maui@gmail.com
Kim Dash, Center for the Application of Prevention Technologies, kdash@edc.org
Abstract: In response to increasing pressures to demonstrate effectiveness, community-based organizations are challenged to develop greater capacity to evaluate their own programs. Although several models and definitions of evaluation capacity building (ECB) emphasize the importance of collaborating with practitioners to capture their knowledge and expertise, few initiatives have focused on operationalizing these models—especially with indigenous populations. The current paper presents a case study of efforts to enhance local evaluator knowledge and skills and to increase the evaluation capacity of prevention workers implementing programs developed for indigenous populations in the U.S. Pacific Jurisdictions through Service to Science (STS). STS is a national ECB initiative funded by the Substance Abuse and Mental Health Services Administration's Center for Substance Abuse Prevention and implemented by the Center for the Application of Prevention Technologies. Specifically, we describe an ECB approach involving 24 programs implemented since 2008, highlighting methods employed, challenges faced, and outcomes attained.

Session Title: Conceptualizing and Valuing Child Welfare Outcomes in Evaluation
Think Tank Session 717 to be held in Capistrano B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Social Work TIG
Presenter(s):
Kate LaVelle, The Boy's and Girl's Aid Society of Los Angeles, klavelle@5acres.org
Discussant(s):
Julie Murphy, Human Services Research Institute, jmurphy@hsri.org
Katrina Brewsaugh, One Hope United, katrina@katrinalee.com
Abstract: Evaluators commonly struggle with defining and measuring child welfare outcomes. The Federal Child and Family Service Reviews (CFSR) specify child welfare outcome goals in three areas: safety, permanency, and well-being. However, considerable variation in local context leads to different approaches to operationalizing specific outcomes within the CFSR areas. This creates challenges when aggregating findings to understand the overall 'state of child welfare'. In addition, data limitations, funder interests, and program mandates often cause the evaluation focus to shift to intermediate outcomes, delaying examination of higher-level outcomes. Using an ecological systems perspective as the framework, this think tank will examine ways in which the child welfare field conceptualizes and values safety, permanency, and well-being. Breakout groups will each focus on a CFSR outcome area, discussing how to operationalize and measure outcomes in that area, with an emphasis on sharing experiences, challenges, and successes.

Session Title: Evaluations Involving Criminal Justice Populations: Measurement Challenges and Advancements From the Field
Multipaper Session 718 to be held in Carmel on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Crime and Justice TIG
Chair(s):
Billie-Jo Grant, Magnolia Consulting LLC, bgrant@magnoliaconsulting.org
Drug Courts Work, but How? Preliminary Development of a Measure to Assess Drug Court Structure and Processes
Presenter(s):
Blake Barrett, University of South Florida, bbarrett@fmhi.usf.edu
Roger Boothroyd, University of South Florida, boothroy@fmhi.usf.edu
M Scott Young, University of South Florida, syoung1@usf.edu
Abstract: Drug courts are specialty judicial programs designed to reduce criminal recidivism and substance abuse among offenders with substance use disorders. Drug courts operate through partnerships between the criminal justice and public health systems. The effectiveness of drug courts has been documented through numerous studies. The task remains now to determine their causal mechanisms. This presentation will review an iterative measurement development process conducted to create a measure to assess drug court structures and practices. Participants consisted of drug court personnel and academic experts. Measurement development activities included: 1) a comprehensive review of the literature; 2) interviews with key stakeholders to inform item development; 3) expert reviews of the initial item pool; 4) pile sort activity and subsequent exploratory factor analyses to determine which items best represent measure sub-constructs; 5) cognitive interviews completed by key stakeholders; and 6) final revisions to the item pool based upon results from cognitive interviews.
Characteristics Associated With Loss-To-Follow Up in a Multi-site Federally Funded Evaluation of Jail Diversion Programs
Presenter(s):
Annette Crisanti, University of New Mexico, acrisanti@salud.unm.edu
Brian Case, Policy Research Associates Inc, bcase@prainc.com
Henry Steadman, Policy Research Associates Inc, hsteadman@prainc.com
Abstract: Loss to follow-up is a significant problem in any evaluation, but even more so in criminal justice and mental health services research. The methodological issues resulting from high rates of attrition include selection bias and an inadequate sample size for data analysis. The purpose of our study was to examine what socio-demographic, clinical, legal or program-level characteristics were associated with attrition. The study employed data from a multi-site evaluation of jail diversion programs. A self-report interview was conducted at baseline for 1,575 individuals. Attrition rates of 37% and 56% were observed at the six-month and 12-month follow-ups, respectively. Our findings have several implications. For example, knowing which individuals are more likely to be lost to follow-up will allow evaluators to develop targeted sampling strategies (i.e., oversampling of those most likely to be lost to follow-up) and determine, in advance, who should be followed more closely in longitudinal prospective evaluation studies.
From Place-Based to Person-Centered: Lessons Being Learned From an Ongoing Process Evaluation of a Prisoner Reentry Initiative
Presenter(s):
Robert Kahle, Kahle Research Solutions Inc, rwkahle@kahleresearch.com
Abstract: As a result of extremely high rates of incarceration, nearly 650,000 men and women return annually to local communities after being incarcerated in federal or state prisons in the United States. More than two-thirds return to prison within three years of release (Langan & Levin, 2002). This paper presents preliminary results from an ongoing process evaluation of a reentry initiative operated by a privately funded, non-profit (as opposed to governmental) organization in a high-crime neighborhood on the east side of Detroit, MI. Analysis of the fidelity of the program to the original model, characteristics of the ex-offender population being served, and lessons learned from funding source, program and case management perspectives are presented and discussed. Observations on the language of reentry conclude the paper. Langan, P.A. & D.J. Levin. Recidivism of Prisoners Released in 1994. NCJ 193427. Washington, D.C.: U.S. Department of Justice, Bureau of Justice Statistics, 2002. bjs.ojp.usdoj.gov/content/pub/pdf/rpr94.pdf.
A Cost Study of Three Mental Health Courts
Presenter(s):
Henry Steadman, Policy Research Associates Inc, hsteadman@prainc.com
Lisa Callahan, Policy Research Associates Inc, lcallahan@prainc.com
Thomas Mcguire, Harvard Medical School, mcguire@hcp.med.harvard.edu
Pamela Clark Robbins, Policy Research Associates Inc, probbins@prainc.com
Roumen Vesselinov, Queens College CUNY, vesselinov@stat.com
Karli Keator, Policy Research Associates Inc, kkeator@prainc.com
Abstract: Mental health courts (MHCs) are a diversion program for persons with serious mental illness in the justice system. Over 280 MHCs currently exist across the U.S. Treatment courts are often viewed as a cost-saving alternative to regular criminal justice processing, or treatment as usual (TAU). As part of the MacArthur Mental Health Court multi-site evaluation study, we collected criminal justice and behavioral health costs for 3 years pre- and post-target arrest for MHC (n=311) and TAU (n=402) samples. This analysis compares the total, treatment, and criminal justice costs both between and among samples from pre to post. Overall costs increase for both the MHC and TAU samples from pre- to post-arrest, with a surge in costs just prior to the target arrest. We examine subgroups to identify characteristics of high users of the criminal justice and behavioral health systems.

Session Title: National HIV Prevention Program Monitoring and Evaluation: Lessons Learned and Future Directions for Data Collection, Reporting and Use
Panel Session 720 to be held in El Capitan A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Mesfin Mulatu, Centers for Disease Control and Prevention, mmulatu@cdc.gov
Discussant(s):
Dale Stratford, Centers for Disease Control and Prevention, bstratford@cdc.gov
Abstract: As part of its national HIV prevention monitoring and evaluation effort, the CDC has implemented the National HIV Prevention Program Monitoring and Evaluation (NHM&E) approach to collecting program data from over 200 funded health departments and community-based organizations. NHM&E consists of standardized variables, program performance indicators, and an optional data management and reporting system. CDC provides technical assistance to grantees to enhance their data collection and reporting, with the goal of using data for program improvement and accountability to stakeholders. This panel discusses critical steps taken by CDC and grantees to ensure the collection and reporting of NHM&E data; highlights the use of NHM&E and other data for local program improvement and national monitoring; and discusses lessons learned during these processes. It addresses the future direction of national-level monitoring and evaluation of HIV prevention programs in the context of limited resources, innovations in HIV prevention, and changes in the epidemic.
The National HIV Prevention Program Monitoring and Evaluation (NHM&E): An Overview
Antonya Rakestraw, Centers for Disease Control and Prevention, apiercerakestraw@cdc.gov
David Davis, Centers for Disease Control and Prevention, ddavis1@cdc.gov
CDC funds over 200 health departments and community-based organizations to conduct HIV prevention programs in the United States. In addition to the challenge of providing evaluative support across all agencies, CDC has to address the data use needs of stakeholders including Congress, national interest groups, and local HIV program directors and evaluators. This presentation highlights the approach taken by the Program Evaluation Branch in the Division of HIV/AIDS Prevention at CDC to implement national-level monitoring and evaluation that meets the accountability, program monitoring, and program improvement needs of stakeholders. This presentation briefly introduces the National HIV Prevention Program Monitoring and Evaluation (NHM&E) approach, its history, and its components. The NHM&E includes standardized variables to capture information about HIV prevention programs, a set of program performance indicators, and an optional web-based data management and reporting system. This presentation is intended to provide the context for subsequent discussions.
Supporting the Collection and Reporting of Standardized NHM&E Data: Capacity Building, Technical Assistance and Quality Assurance
Elin Begley, Centers for Disease Control and Prevention, ebegley@cdc.gov
Michele Rorie, Centers for Disease Control and Prevention, mrorie@cdc.gov
This presentation identifies the capacity building, technical assistance and quality assurance mechanisms used by CDC to ensure standardized National HIV Prevention Program Monitoring and Evaluation (NHM&E) data reporting from health department and community-based organization grantees. It is a challenge to provide these mechanisms to grantees that have unique epidemic profiles, are funded under different announcements, and use diverse data collection and reporting systems. To ensure standardization, CDC conducts webinars and workshops on data requirements and offers a fully functional service center to answer grantee questions. CDC also communicates quarterly with grantees about the quality of submitted data. For grantees using non-CDC supported data reporting systems, crosswalks are conducted to document the degree to which each variable submitted meets CDC requirements. Using partner services data, this presentation shares examples and lessons learned during the development of variable requirements, provision of webinars and workshops, and execution of variable crosswalks.
Using NHM&E Data for Program Improvement: Los Angeles County Department of Public Health's Experience
Mike Janson, Los Angeles County Department of Public Health, mjanson@ph.lacounty.gov
Sophia Rumanes, Los Angeles County Department of Public Health, srumanes@ph.lacounty.gov
Mario Perez, Los Angeles County Department of Public Health, mjperez@ph.lacounty.gov
This presentation highlights how the Los Angeles County Department of Public Health (LACDPH) utilizes NHM&E and other data sources for the purposes of program improvement. Using NHM&E data collected from HIV testing services (HTS) and HIV/AIDS surveillance data, LACDPH tracks program performance measures including HIV testing volume, HIV positivity rates, location of test, and linkages to care and partner services. Testing volume and positivity rates are measured at each site to ensure that the testing investment is being maximized. Testing location is measured to track geographic coverage of testing services. These data are matched to ensure that the volume of testing within specific geographic areas is commensurate with HIV disease burden. NHM&E data are matched with surveillance data to track linkage to care by verified CD4/Viral Load laboratory result. As a result of monitoring linkage to care rates, LACDPH has moved to incentivize linkage to care and partner services activities.
HIV Prevention Program Performance Indicators: A Tool for National Program Monitoring and Improvement
Barbara Maciak, Centers for Disease Control and Prevention, bmaciak@cdc.gov
Mesfin Mulatu, Centers for Disease Control and Prevention, mmulatu@cdc.gov
HIV Prevention Program Performance Indicators are standardized measures reported by CDC-funded prevention grantees that capture key components of prevention planning, service delivery, and evaluation. At the national level, CDC uses indicator data in combination with other data sources (e.g., surveillance, program context) to assess progress towards national prevention goals. In this presentation, we describe the use of standardized National HIV Prevention Program Monitoring and Evaluation (NHM&E) variables for indicator calculation, analysis, and reporting. We provide specific examples to illustrate the process used to align operational definitions for key indicator terms with NHM&E variables; develop calculation algorithms; build quality assurance checks; and manage and analyze indicator data. We highlight key facilitators and challenges associated with this process and describe ongoing efforts aimed at interpreting trends in indicator data in the context of multiple data sources; developing national indicator reports; and engaging stakeholders in using indicator data for program monitoring and improvement.
Evaluating in the Midst of a Paradigm Shift: New Directions for the NHM&E
Dale Stratford, Centers for Disease Control and Prevention, bstratford@cdc.gov
Kimberly Thomas, Centers for Disease Control and Prevention, krthomas@cdc.gov
Romel Lacson, Centers for Disease Control and Prevention, rlacson@cdc.gov
This presentation focuses on 1) the shift in the nation's approach to HIV prevention and concomitantly to national-level HIV prevention program evaluation; 2) challenges this paradigm shift brings in an era of accountability; and 3) how lessons learned will contribute to strategies moving forward. The National HIV/AIDS Strategy calls for a comprehensive, coordinated federal effort with measurable goals and program evaluation that includes impact-driven measures. With this new strategy come challenges at the national and local levels, including evaluation questions that will have to be answered synthesizing data across federal agencies and involving cross-agency collaboration at the local level. Developing relationships across funders to respond to nationally coordinated efforts takes time, and implementing methods that will enable impact evaluation may take years. Finally, this presentation highlights proposed approaches for moving forward, including consistent engagement with stakeholders and use of innovative methods to synthesize cross-agency data.

Session Title: Implications for and Methods of Measuring HIV/AIDS Programs in Developing Countries and Brazil
Multipaper Session 721 to be held in El Capitan B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG and the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
Mary Crave, National 4-H Council, mcrave@fourhcouncil.edu
Mental Health and HIV: A Tale of Two Countries
Presenter(s):
Mary Gutmann, EnCompass LLC, mgutmann@encompassworld.com
Melissa Sharer, John Snow Incorporated, msharer@jsi.com
Kimberly Green, FHI, kgreen@fhi.org
Dinh Thi Bich Hanh, FHI, bichhanh@fhi.org.vn
Abstract: To address the information gap in mental health (MH) services in HIV, AIDSTAR-One prepared a technical brief and two case studies to document innovative, successful approaches to integrating MH and HIV services among most at-risk populations in Vietnam and Northern Uganda. The evaluation used an appreciative inquiry approach to determine what worked well and contributed to program success. After completion of the case studies, each country program reflected on how participation in the case study resulted in: a) changes to the program; b) new programmatic insights or perspectives, and c) added value to the evaluation itself. Common themes included: 1) the value of an outside peer perspective, 2) the value of external validation, 3) an increase in staff morale/motivation, and 4) program changes based on recommendations in the case study. Appreciative approaches that focus on values and valuing can benefit programs by promoting self-reflection and dialogue, and reinforcing underlying values.
Metaevaluation of HIV/AIDS Prevention Intervention Evaluations in Sub-Saharan Africa With a Specific Emphasis on Implications for Women and Girls
Presenter(s):
Tererai Trent, Tinogona Consulting, tereraitrent@gmail.com
Abstract: Given the norms that govern most patriarchal societies in Sub-Saharan Africa (SSA), should Western epistemology, ethics and concepts be the main default lens for evaluation? The empirical evidence upon which the evaluation of HIV/AIDS prevention is grounded in the region is based on behavior-focused interventions. The blindness of these evaluations to the true drivers of the epidemic, particularly the underlying social ecology that seems to give rise to women's vulnerability, calls into question the validity of these evaluations despite their scientific evidence. Using a set of demonstrable properties (validity, credibility, utility, cost-effectiveness, ethicality, robustness) found to be relevant and adequate to characterize high-quality gender-sensitive evaluations of HIV/AIDS interventions in SSA, this presentation will demonstrate: 1) the use of an HIV/AIDS Prevention Evaluation Checklist (HAPEC), adapted from Scriven's (2007) Key Evaluation Checklist to include gender as a core merit-defining criterion, to determine the merit of HIV/AIDS evaluations in SSA, and 2) ways in which evaluations can establish innovative prevention strategies that may influence and shift what drives women's intractable vulnerability to HIV exposure.
Added Value of Appreciative Inquiry within a Systems Framework for Evaluating a Country-Level HIV/AIDS Information Management System
Presenter(s):
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Mary Gutmann, EnCompass LLC, mgutmann@encompassworld.com
Elisa Knebel, EnCompass LLC, eknebel@encompassworld.com
Lisa Crye, EnCompass LLC, lcrye@encompassworld.com
Abstract: Use of Appreciative Inquiry in a systems framework greatly enhanced the strategic value of findings on the role of a country-level information management system developed by the UN to support the national HIV response. The evaluation methodology included interviews and an on-line survey with the database users and providers, site visits to three countries implementing the database system, and interviews or focus groups with senior management in Geneva. The information database tool and associated technical assistance uniformly catalyzed M&E thinking at the country level and strengthened national M&E systems by creating a common platform for monitoring and reporting and harmonization of data from different sources. The system offers comparative advantages over other systems and fulfills a unique need that enhances country ownership and M&E capacity building. These findings support the important role the UN plays in supporting national M&E systems.
Evaluation of Health Sites Under Consideration: AIDS in Brazilian LGBT Sites
Presenter(s):
Andre Pereira Neto, Oswaldo Cruz Foundation, apereira@fiocruz.br
Elizabeth Moreira, Oswaldo Cruz Foundation, bmoreira@ensp.fiocruz.br
Marly Cruz, Oswaldo Cruz Foundation, marly@ensp.fiocruz.br
Abstract: To present a methodology for assessing the quality of information available on health sites, with its respective indicators and weights, and to apply this methodological tool to the evaluation of information on HIV/AIDS available on the sites of Non-Governmental Organizations (NGOs) that advocate for the rights of Lesbian, Gay, Bisexual and Transgender (LGBT) people in eight states of Brazil. METHODOLOGY. The approach combines three dimensions in the assessment of information available on health Web sites: content, navigability and readability. Each of these dimensions is subdivided into weighted indicators, and the score of each site on each indicator is presented clearly. In this sense the article innovates and contributes to the international debate on the issue. RESULTS. None of the sites examined presents information on HIV/AIDS that meets the minimal criteria of the indicators-and-weights methodology presented in this paper.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Peacemaker: Balancing Paradigmatic Concerns and Client Expectations in Collaborative Evaluation
Roundtable Presentation 722 to be held in Exec. Board Room on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG and the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Scot Rademaker, University of South Florida, srademaker@mail.usf.edu
Tyler Hicks, University of South Florida, tahicks@mail.usf.edu
Sarah Mirlenbrink-Bombly, University of South Florida, mirlenbr@mail.usf.edu
Connie Walker, University of South Florida, cwalkerpr@yahoo.com
Abstract: In an era of high stakes accountability, clients frequently request statistical analysis to inform the evaluation product. This is not necessarily a problem, depending upon the source of data. However, in many cases the types of questions do not lend themselves well to statistical analysis. This problem is especially acute where questionnaires are the primary source of data. Non-parametric analysis may present a viable quantitative solution to this problem. In this evaluation, the authors model how to use non-parametric statistical analysis to synthesize ordinal data (e.g., Likert-scale responses). The authors demonstrate how to navigate the client's requests and create a balance between disciplines of inquiry in order to answer the proposed questions in an effective manner. Results from an evaluation of a university-sponsored tutoring project will be delineated in order to contextualize this discussion.
Roundtable Rotation II: Mediating Value Preferences Within the Evaluation Consultancy Role
Roundtable Presentation 722 to be held in Exec. Board Room on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG and the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Debbie Zorn, University of Cincinnati, debbie.zorn@uc.edu
Sarah Woodruff, Miami University of Ohio, woodrusb@muohio.edu
Holly Raffle, Ohio University, raffle@ohio.edu
Barry Oches, Ohio University, oches@ohio.edu
Abstract: Evaluators often serve as the mediators among groups that have different value preferences. For example, the value preferences of local project staff regarding evaluation sometimes conflict with those of federal and state funding agencies that set strict standards for project design and evaluation methodology/rigor. This roundtable will discuss efforts by a statewide cross-project evaluation team, working with five local Mathematics and Science Partnership (MSP) projects, to mediate those differences. MSP projects are awarded federal funds to improve K-12 teachers' content knowledge in order to improve students' academic achievement. Our cross-site evaluation team has assumed a consultative role for helping local projects develop and conduct more rigorous and useful local evaluations that address local, state, and federal information needs. We have found that it is essential to develop a framework that promotes collective value preferences while supporting local evaluations that may reflect somewhat different value preferences.

Session Title: Feminist Evaluation and Research: A Preview of Good Things to Come
Multipaper Session 723 to be held in Huntington A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Denise Seigart, Stevenson University, dseigart@stevenson.edu
Discussant(s):
Sharon Brisolara, Evaluation Solutions, sharon@evaluationsolutions.net
Denice Cassaro, Cornell University, dac11@cornell.edu
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Abstract: One of the great challenges facing evaluators and researchers interested in feminist approaches is how to infuse feminist theory into their work in practical ways. Books on feminist research theory and the need for feminist research outweigh careful examinations of how to integrate theory and practice. Many questions regarding the application of feminist theory in evaluation contexts exist. Practical examples of the implementation of feminist evaluation are limited, and there is a great need for expanded discussion of these topics. A New Directions for Evaluation volume on Feminist Evaluation, published in 2002 (Seigart & Brisolara), opened conversations that began to address some of these concerns. A new book will soon be published which addresses these problems, and several of the authors and editors discuss this text and give a brief overview of their contributions in this session utilizing a Pecha Kucha format.
Cultural Competence and Gender Justice: Assumptions, Reality and Reconciling the Two
Saumitra SenGupta, APS Healthcare, saumitra.sengupta@gmail.com
This presentation will briefly examine cultural competence in evaluation from a gender justice lens. In the past decade, cultural competence in evaluation practice has come to be recognized as a necessity and in some cases an overarching principle. The feminist perspective stands at that crucial juncture today, on the verge of being a needed, almost required, stance to consider in developing an evaluation agenda. It is important that the concepts of gender equality and gender justice are critically examined, compared and contrasted with the principles of cultural competence. Culture shapes values that are fundamental to evaluation. To be culturally competent, the evaluator must be contextually responsive and incorporate local values in framing the evaluation question. That raises the question: if cultural competence indeed calls for deeper understanding and better appreciation of the cultural context, what can the evaluator do when faced with situations where gender inequality and even injustice is inherent to that cultural context?
Evaluation of a Rural Methamphetamine Treatment Program: Intensive Outpatient Therapy Using the Matrix Model Retrospective Gender Analysis in an Appalachian Context
Kathryn Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
Inherent in implementing an evidence-based model are challenges related to maintaining fidelity throughout the life of the project. While fidelity was successfully maintained throughout this 3-year project, cultural responsiveness and gender sensitivity were not. This SAMHSA-funded project was a gender-neutral intensive outpatient methamphetamine treatment program implemented in rural Appalachian communities. Integrating a treatment model that included building positive collaborative relationships, cognitive behavioral therapy, family education, and individual therapy with a same-sex therapist helped some women complete treatment, decrease depression and anxiety symptomatology, and maintain sobriety. Sensitivity to the rural Appalachian culture by therapists indigenous to the area, transportation, and flexible hours helped create an environment aligned with the special needs of women. However, these strategies alone were not enough to retain many women, and perhaps the absence of gender sensitivity and cultural responsiveness played a significant role.
Designing, Conducting, and Interpreting Findings of Evaluations of International Development Interventions: An Elaboration of Gender Inequality and Ecological Concerns
Tristi Nichols, Manitou Inc, tnichols@manitouinc.com
The crux of feminism is gender equality. Through an economic lens, gender equality often refers to modifying the present system by promoting greater equality of opportunity, such as increased educational and workplace access. Through a social lens, gender equality frequently draws on empowerment, independence, and self-efficacy constructs (Bandura). While such theoretical perspectives are useful in elucidating the critical components of progressing toward gender equality, they nonetheless present challenges when attempting to apply such constructs in the international development context. Namely, measuring and validating constructs at the community level is a challenging endeavor that many stakeholders and field practitioners dare not even initiate. This presentation will briefly explore and combine the use of two methodological approaches - feminist approaches and ecological inquiry - in international development.
Differences Between Gender Analysis and Feminist Evaluation
Donna Podems, OtherWISE, donna@otherwise.co.za
The presentation briefly encourages a practical understanding of feminist evaluation and gender approaches by using a comparative framework that describes the fundamental differences between feminist and gender evaluation. Within this context the presenter describes the differences in the theoretical underpinnings, evaluation design, and implementation of these fundamentally different approaches, which often attain different results. The presenter then practically demonstrates how combining the two approaches allows an evaluator to surmount the barriers and constraints often associated with each approach. The presentation then encourages further debate and discussion on these two often confused approaches. It's a quagmire it seems, this discussion on gender and feminist evaluation. In my experience, the misunderstanding, and some may say mystification, surrounding the differences between gender and feminist evaluation occurs in various evaluation contexts throughout the 'developed' and 'developing' world, from conference venues to field work.

Session Title: Performance Measurement and Evaluation at the Bureau of Health Professions, Health Resources and Services Administration
Multipaper Session 724 to be held in Huntington B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Cassandra Barnes, Health Resources and Services Administration, cbarnes1@hrsa.gov
Abstract: The Bureau of Health Professions administers a number of health profession training grant programs authorized by Title VII and Title VIII of the Public Health Service Act. The grants go to health professions schools and training programs across the United States to develop, expand and enhance training and to improve the distribution of the health care workforce. The passage of the Affordable Care Act and the GPRA Modernization Act have brought renewed interest and increased scrutiny to the performance of the Bureau's programs. This session will provide an overview of the response of the Bureau of Health Professions to this increased concentration on program performance as well as several examples of the current activities initiated in response.
Overview of the Performance Measurement and Evaluation Strategy
Sandra Abbott, Food and Drug Administration, sandra.abbott@fda.hhs.gov
Roger Straw, Health Resources and Services Administration, rstraw@hrsa.gov
This presentation will provide the necessary context about the Bureau's programs in order to understand the performance measurement and evaluation strategy. The strategy relies on several building blocks that are mutually reinforcing. Our Funding Opportunity Announcements (FOAs) include guidance on performance measurement and evaluation to potential applicants so that they are aware of our evaluation expectations even before receiving grant funding. FOAs also include application review criteria associated with individual grantee-level evaluation activities. We have made significant investments in training our program staff and in providing technical assistance to grantees about evaluation. We regularly conduct program reviews that rely on available performance data and evaluation results. We have completed a systematic process using logic modeling and other techniques that completely changed our approach to performance data collection. Finally, we have implemented a strategy for longitudinal data collection that will support impact evaluations of our programs in the future.
Using Logic Models to Rediscover the Value of Diversity Programs: Logic Model to Longitudinal Outcomes
Cassandra Barnes, Health Resources and Services Administration, cbarnes1@hrsa.gov
Evidence indicates that diversity is associated with improved access to care for racial and ethnic minority patients, greater patient choice and satisfaction, and better educational experiences for health professions students. Recognizing the need for diversity in the healthcare workforce, the Bureau of Health Professions aims to increase the diversity of the Nation's health workforce by providing funding to health professions schools to increase educational opportunities for disadvantaged students. What data are appropriate to collect on an individual participant level and in aggregate from grantees that will inform us most about health workforce diversity over time? A longitudinal study will provide information on retention of participants in primary care. Logic models set the foundation for describing legislative intent and program performance. The Division of Workforce and Performance Management will present the results of a collaborative process among stakeholders on creating improved performance measures and longitudinal outcomes.
Building Internal Capacity: Results of the Bureau of Health Professions Program Review Template
Courtney Pippen, Health Resources and Services Administration, cpippen@hrsa.gov
The Bureau of Health Professions has developed a program review template and user guide. The intent of the template is to allow program staff, or an independent evaluator, to systematically review the available qualitative and quantitative information about a program and to document their analyses and findings. The results are used to respond to routine requests for information as well as to guide and facilitate BHPr internal program evaluation efforts. Additionally, the participation of program staff in the review activities gives them a better appreciation of the value of evaluation. The template is broad enough to accommodate the range of purposes inherent in the Bureau's programs while allowing flexibility for customization. The template has been used to review two BHPr programs: the Geriatric Education Center program and the Advanced Nursing Education Program. Additional programs will be reviewed. Lessons learned about both the programs and the template will be shared.
Impact of Medical School Funding on Physician Outcomes
Aisha Faria, Health Resources and Services Administration, afaria@hrsa.gov
The Bureau of Health Professions administers a number of health profession training grant programs authorized by Title VII and Title VIII of the Public Health Service Act. The grants go to health professions schools across the United States to develop, expand and enhance training and to improve the distribution of the health care workforce. This evaluation focuses on the impact of funding provided to U.S. medical schools between 1996 and 2009 on the diversity, supply, and distribution of the primary care physician workforce. The Association of American Medical Colleges will compile and link data from various internal and external sources, and BHPr will use the data to analyze practice-level outcomes, graduate characteristics, and the quality of education and their relationship with medical schools' receipt of Title VII funding. This paper will provide an explanation of the methodology used in the study, present some of the results, and describe plans for additional analyses.

Session Title: Improper Payment Studies: Government Programs in Housing, Food, and Health Care
Panel Session 725 to be held in Huntington C on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Gary Huang, ICF International, ghuang@icfi.com
Discussant(s):
Daniel Geller, Insight Policy Research, dgeller@insightpolicyresearch.com
Abstract: This 90-minute panel session will introduce improper payment (IP) studies for government programs. To meet requirements for accountability and financial integrity, federal programs are required by law to conduct studies measuring improper payments due to errors and fraud. IP studies must address design and methodological issues that reflect the dynamics of values and valuing. To determine IP risk, benefit eligibility, and ways to generate IP estimates that inform program improvement, researchers must prioritize values and stakeholders' differing interests. The panel session will examine methodological issues in designing and implementing IP studies, covering such issues as sample estimates vs. case audits, fraud vs. errors, local service providers vs. program participants, and field data collection vs. administrative data use. The panel members, from three research organizations, have years of experience conducting IP studies for programs that provide cash-equivalent benefits to low-income populations to meet basic needs: housing, food, and health care.
Comprehensive Improper Payment Evaluations for HUD's Assisted-Housing Programs
Sophia Zanakos, ICF International, szanakos@icfi.com
Sophia Zanakos, PhD, Deputy Project Director for the Quality Control for Rental Assistance Subsidy Determinations Studies, will discuss the comprehensive methodologies currently in place to assess improper payments associated with the major assisted-housing programs at HUD. The HUDQC studies utilize a stratified three-stage sampling design to provide nationally representative estimates of 1) the extent of erroneous rental determinations, 2) the extent of billing error associated with the owner-administered program, and 3) the extent of error associated with tenant underreporting of income. The extensive data collection and coordination methodologies will be discussed and include tenant case file abstraction, in-person CAPI interviewing, acquisition of third-party information, and data matching with Social Security.
Dynamics of Values and Valuing and Methodological Choices in IP Studies
Erika Gordon, ICF International, egordon@icfi.com
Erika Gordon, PhD, Project Director at ICF, will discuss the comprehensive methodologies currently in place to assess improper payments associated with a USDA program that provides meals to low income children in day care settings. She will present the methodological challenges of identifying improper payments from two distinct perspectives in the program, and contrast how the dynamics of values and valuing shape the methodological choices made in each assessment. She will also contrast the challenges of a quantitative approach with a case study that draws on her experiences using qualitative approaches to assess improper payments. Specific challenges associated with negotiating the objectives, eligibility requirements, stakeholder perspectives as incorporated in study design, data collection and analysis methods will be discussed.
Fraudulent Claims and the Resulting Improper Payments: Medicaid Administrative Data
Gary McQuown, Data and Analytic Solutions Inc, mcquown@dasconsultants.com
Gary McQuown, Project Manager at DAS for the CMS Division of Fraud Research and Detection, will describe efforts by DAS for the Centers for Medicare & Medicaid Services (CMS) to identify, analyze and document probable fraudulent claims and the resulting improper payments to health care providers. The activity reviewed four years of Medicaid administrative claims data for all US states and territories with a variety of algorithms and statistical processes. Both individual health care providers and related institutions were reviewed, with mixed results. The discussion will focus on issues and findings related to the practical and effective analysis of large administrative data sets from technical, managerial and political perspectives.
Models and Statistical Techniques for Updating Improper Payment Estimates Generated from National Surveys
Richard Mantovani, ICF International, rmantovani@icfi.com
Richard Mantovani, PhD, Project Director of the USDA's Supplemental Nutrition Assistance Program (SNAP, formerly Food Stamps) vendor trafficking study, will talk about using administrative data to improve improper payment estimates generated from national surveys. To obtain national estimates for calculating improper payments, many agencies conduct nationally representative surveys of individuals served and entities paid for providing services. In some cases these surveys bear close similarities to audits and are overt; in other cases the surveys are covert, with the data collector posing as a customer. These surveys are expensive, and some agencies have chosen to do them periodically. At the Food and Nutrition Service, there is a great deal of emphasis on providing updates to these studies using administrative and other available information. However, the administrative data are usually biased and therefore cannot provide a national picture of improper payments. This discussion will focus on the basic models for updating improper payment estimates and some of the statistical techniques useful in this endeavor.

Session Title: The Use of Evaluation for Accountability: An International Perspective
Panel Session 726 to be held in La Jolla on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
David Nevo, Tel Aviv University, dnevo@post.tau.ac.il
Discussant(s):
David Nevo, Tel Aviv University, dnevo@post.tau.ac.il
Abstract: Accountability is a major use of evaluation in the domain of education in many countries and educational systems around the world (e.g., NCLB in the USA or OFSTED in the UK). Accountability is widely used as a basis for educational reforms intended to improve education, in spite of heavy criticism of its usefulness and of its side effects in attempts to improve educational systems. The purpose of this panel is to share research findings and insights from five educational systems (USA, British Columbia, England, Chile and Israel) regarding the intended and unintended consequences of accountability.
Evaluation to Improve Educational Accountability
Katherine Ryan, University of Illinois, k-ryan6@uiuc.edu
The goal of all educational evaluation is to enable programs and policies to improve student learning. At the same time, notions of educational accountability, control, and improvement are often entangled with evaluation. NCLB reauthorization has created the conditions required to address current U.S. educational accountability criticisms (e.g., lack of information about how to improve teaching and learning) and accumulating evidence about NCLB's unintended consequences. This paper reports findings from a three-year mixed-methods (survey questionnaire and focus groups) evaluation examining state-level NCLB educational accountability consequences from teacher and principal perspectives. In addition to corroborating well-known unintended consequences, other findings (e.g., changing teacher views about cognition and student 'types') are presented. Based on these findings and other research, I propose that an extended educational accountability model incorporating school-based evaluation can better support instructional practices and school improvement efforts.
The Conflation of Educational Accountability and Educational Evaluation in British Columbia: The Impact of Neo-liberalism
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
In British Columbia, Canada the Ministry of Education has created an "accountability framework," which ostensibly provides the means for improving student achievement and is primarily built on results from a variety of provincially mandated tests. The neo-liberalism of the provincial government builds a rhetoric of educational evaluation and improvement that diverts attention from its sole interest in educational accountability. Educational policies, driven by economic goals, see children as either a means to or obstacles to economic prosperity and the concomitant dismantling of the welfare state. While there are efforts to resist the emphasis on accountability over evaluation, neo-liberalism presents a staunch challenge for educational evaluators and evaluation.
Intended and Unintended Consequences of a High-stakes National Teacher Evaluation Program
Sandy Taut, Universidad Catolica de Chile, staut@uc.cl
We investigated the consequences of the Chilean standardized, standards-based, high-stakes teacher evaluation program (SEDD), used to hold all public school teachers in the country accountable. We first explicated the underlying stakeholder theory, identifying intended effects and uses of this accountability policy for teachers, schools, and municipalities. Then we empirically examined both intended and unintended consequences. We analyzed large-scale databases and conducted numerous interviews. Our findings indicate that SEDD had mixed effects on teachers and more favorable effects on schools and municipalities. The associated professional development and incentives programs need to complement the accountability purpose with a more effective support function.
Accountability, Responsibility and Transparency: An Israeli Confusion
David Nevo, Tel Aviv University, dnevo@post.tau.ac.il
In this presentation the use of accountability to reform the Israeli educational system in recent years will be discussed, pointing out its bureaucratic power and public viability, which rest on a confusion among accountability, responsibility and transparency. Discussing the meaning of these three concepts, it will be contended that a lack of distinction among the three enables the acceptance of accountability at face value as a useful means for the improvement of education. Conceptual and practical implications will be suggested regarding the meaning of accountability and its usefulness.
'Ahead of the Game': The Case for Professional Accountability Through Evaluation
Helen Simons, University of Southampton, h.simons@soton.ac.uk
Accountability is so often seen as an external process, generated from outside an institution or system, to call people to account for what they do. This has always seemed to me slightly odd, especially in professional fields where autonomy is valued. The corollary of such a stance is a corresponding responsibility to account. The problem is that some professions have not been 'up to the mark' or 'ahead of the game' in proposing their own educational evaluation systems that would demonstrate their accountability and lessen the need for external accountability imposed in ways that do not match the reality of professional practice. With reference to various attempts in England to get schools to account, this presentation will explore what a professional accountability evaluation system might look like and argue that in the long term it will ensure standards of education and professional practice more effectively than external accountability systems.

Session Title: Friend or Frenemy? Research on Stakeholder Involvement
Multipaper Session 727 to be held in Laguna A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Deborah Grodzicki, University of California, Los Angeles, deborah.grodzicki@ucla.edu
"Friend or Frenemy:" Positioning the Evaluator in First and Second Party Evaluations
Presenter(s):
Girija Kaimal, Temple University, gkaimal@temple.edu
Abstract: Differences in the perception of evaluation can impact how partnerships develop and how a study is implemented. This paper examines the usefulness of the theoretical tool of 'positioning' to better enable evaluators and program partners to work together effectively. Positioning implies that when an individual is placed in an interpersonal dynamic they inevitably view the world from the vantage point of that position and in terms of particular images, metaphors, story lines and concepts which are made relevant within that particular discursive practice. Experiences from four different evaluation projects constitute the data. Results indicate that stakeholders' understanding and perceptions of evaluator positioning and the associated unique interactional dynamics can impact the process, outcomes and usefulness of the evaluation for the program. This framework is mapped using representative terms to help future evaluators negotiate the relationship with stakeholders more effectively.
Findings on Stakeholder Involvement: A Review of Empirical Studies of Stakeholder Involvement in Evaluation
Presenter(s):
Landry L Fukunaga, University of Hawaii, lfukunag@hawaii.edu
Paul R Brandon, University of Hawaii, brandon@hawaii.edu
Abstract: A multitude of articles about the benefits of involving stakeholders in the design and conduct of evaluations exist in the literature on evaluation. Proponents of stakeholder involvement in evaluation report enhanced credibility, validity, and utilization of evaluation findings as well as increased stakeholder evaluation capacity, skill, and empowerment. Summaries of this literature have not focused on empirical studies of stakeholder involvement to justify the results, conclusions and recommendations for evaluation practice. This paper addresses this issue through systematic review of 43 empirical studies identified through a comprehensive search for research on stakeholder involvement in evaluation published between 1985 and May 2010. Previously, we presented a paper on the disciplines, types of stakeholders involved, evaluation activities, and methods for collecting data about stakeholder involvement. In this paper we present a summary of the purpose and effects of involving stakeholders in evaluation based on empirical literature.
The Value of Relationships in Evaluation
Presenter(s):
A Rae Clementz, University of Illinois, clementz@illinois.edu
Abstract: This presentation reports preliminary results from an ethnographic study into relational dynamics between evaluators, clients and stakeholders during the design phase of an evaluation. Relationships are both cognitive (between ideas and actions) and interpersonal (between the people who hold the ideas and act as interdependent agents). The research describes in rich detail client and stakeholder conceptions of educational program evaluation and examines the relationships between these conceptions and practical decisions made about the design of an evaluation study. These practical decisions about evaluation questions, data collection, and analysis impact the quality and helpfulness of the evaluation as a whole. Although just a small addition to the growing body of empirical research on evaluation as a practice, the study addresses a significant lack of client and stakeholder voice within the field of evaluation.
Resistance to Engagement in Evaluations: A Theory of Planned Behaviour Perspective
Presenter(s):
Andy Thompson, Carleton University, arocznik@connect.carleton.ca
Bernadette Campbell, Carleton University, bernadette_campbell@carleton.ca
Abstract: It is generally recognized that for any evaluation to be successful there needs to be a significant amount of involvement from program stakeholders who are invested in the program. Ideally, these individuals would be highly engaged in the evaluation process; however, in practice the evaluation of social programs is often met with resistance. Application of the Theory of Planned Behaviour (TPB; Ajzen, 1991) may be useful in the identification and assessment of important determinants of resistance and engagement. This paper describes the steps undertaken to create a TPB measure of evaluation resistance. In Phase 1, behavioural, normative and control beliefs about evaluations were assessed. In Phase 2, these beliefs were transformed into corresponding attitudes, subjective norms and perceptions of behavioural control for the prediction of intentions and subsequent behaviour. Preliminary data from these efforts will be presented and applications for use within evaluation settings will be discussed.

Session Title: An Inspired Design for Collecting Useful Data From Diverse Stakeholder Groups
Panel Session 728 to be held in Laguna B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Disabilities and Other Vulnerable Populations TIG and the Government Evaluation TIG
Chair(s):
Norma Fleischman, United States Department of Education, norma.fleischman@ed.gov
Discussant(s):
Norma Fleischman, United States Department of Education, norma.fleischman@ed.gov
Abstract: This panel will focus on how an external contractor (Westat) worked in close partnership with the funding agency (Rehabilitation Services Administration-RSA, U.S. Department of Education) and a federal grantee, the Helen Keller National Center for Deaf-Blind Youths and Adults (HKNC), to ensure that appropriate communications and other accessibility supports were in place to effectively facilitate a robust and valid study of HKNC, including interviews with critical stakeholder groups such as individuals with multiple disabilities, their families, service providers, and government agencies responsible for serving the needs of deaf-blind individuals. The evaluators were responsive to the needs of Federal stakeholders (RSA and OMB) for specific performance information. The resulting study design and implementation strategies were complex and required a true partnership with both the funders and HKNC to develop the necessary capacity to obtain study-specific information from former HKNC participants who were deaf-blind and from other stakeholders.
An Evaluation of the Helen Keller National Center for Deaf-Blind Youths and Adults: Background, Scope, and Stakeholders
Norma Fleischman, United States Department of Education, norma.fleischman@ed.gov
The Helen Keller National Center for Deaf-Blind Youths and Adults (HKNC) provides services designed to equip consumers to live independently in their communities and/or enhance their ability to secure meaningful employment. In addition to Headquarters services, which include field services and training programs, 10 regional field offices located around the United States provide referrals, counseling, and transition assistance to deaf-blind consumers and their families, and technical assistance and training to service providers. In 2008, Westat was contracted by the U.S. Department of Education's Rehabilitation Services Administration (RSA) to conduct an evaluation of HKNC. The study includes complex research questions and diverse participants and stakeholders, and involves challenges in communicating effectively and respectfully with individuals who are deaf-blind. Building communications capacities involved intensive on-site meetings at HKNC where the evaluators actively engaged with HKNC staff and consumers to observe the myriad communication methods utilized by individuals who are deaf-blind.
Collecting Useful Data from Diverse Stakeholder Groups: Design Considerations and Challenges
David Bernstein, Westat, davidbernstein@westat.com
This presentation will summarize the design approach developed for an evaluation of the Helen Keller National Center (HKNC) for Deaf-Blind Youths and Adults. The study is evaluating HKNC by gathering data on program implementation to help deaf-blind individuals achieve independent living and vocational goals. The study included residential adult educational programs offered in Sands Point, NY at HKNC Headquarters as well as services offered by HKNC regional representatives and field staff across the United States. HKNC provides services to a wide variety of stakeholder groups including deaf-blind individuals, their families, service providers who support deaf-blind individuals, vocational rehabilitation agencies and staff, and other interest groups. The Rehabilitation Services Administration (RSA) in the U.S. Department of Education identified 14 evaluation questions to be addressed. Westat developed instruments to collect data through interviews, site visits, an email survey, and archival data analysis with 10 different stakeholder groups.
Design Considerations for Collecting Data from Deaf-Blind Individuals
Carol Cober, Westat, carolcober@westat.com
This presentation reviews the communication concerns and data collection approach for an evaluation of the Helen Keller National Center (HKNC). The study included interviews with deaf-blind former participants in HKNC's residential program in Sands Point, NY. A range of appropriate methods for communicating through interpreters were used. Pertinent background information was critical and included factors such as age of onset of hearing and vision loss, vision stability, level of language competencies, educational backgrounds and any physical, cognitive or other disabilities that might affect communication. Screening and matching of qualified interpreters and interview subjects was critical so that individualized interpreting needs could be met. A training video was developed to capture specific signs unique to the HKNC program and to orient the interpreters to the interview protocol. This presentation reviews what evaluators should consider for understanding communication concerns of people who are deaf-blind to ensure their full participation in an evaluation.

Roundtable: Constructing Performance Indicators for Democracy Promotion Projects and Programs
Roundtable Presentation 729 to be held in Lido A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Krishna Kumar, United States Department of State, kumark@state.gov
Abstract: International donor agencies, implementing partners and management experts have been struggling to develop suitable performance indicators for their democracy interventions. While they have come up with a plethora of such indicators, the reliability, validity and utility of many of these indicators remain questionable. Moreover, there is no consensus about them. The roundtable will discuss recent attempts to develop performance indicators for democracy interventions and the lessons evaluators have learned. A brief background paper listing critical lessons will be shared with the participants for discussion.

Session Title: Building Capacity and Translating Knowledge Across Multiple Settings
Multipaper Session 730 to be held in Lido C on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Patrick Koeppl, Deloitte Consulting, pkoeppl@deloitte.com
Developing Organizational Capacity to Explore Care and Retention Barriers among HIV Positive Urban Patients
Presenter(s):
Jennifer Catrambone, Ruth M Rothstein CORE Center, jenncamcat@gmail.com
Susan Ryerson Espino, Ruth M Rothstein CORE Center, srespino@gmail.com
Abstract: Internal and external evaluators worked collaboratively with multidisciplinary staff to construct an anonymous survey to explore barriers impacting the care engagement of urban HIV-positive patients. The setting is a large Midwestern one-stop primary care clinic for infectious diseases. Target participants are patients returning to care after a gap in care exceeding one year. Patients are asked to endorse any listed concerns that have impeded their care. Concerns are personal, interpersonal, community, and structural in nature, including those relating to living situations, transportation, providers, clinics, disclosure, health, employment, family, and violence. Factors influencing a return to care are also explored, along with basic demographics. This presentation will describe in more detail the collaborative development process, the pros and cons of the method, data collected since the spring of 2010, as well as clinic-wide dissemination successes and challenges in an attempt to increase staff awareness of barriers and foster dialogue about prevention.
A Framework to Guide Evaluation Knowledge Translation and Exchange (KTE) Within a Rural and Remote Health Authority
Presenter(s):
Jennifer Miller, Interior Health Authority, jennifer.miller@interiorhealth.ca
James Coyle, Interior Health Authority, james.coyle@interiorhealth.ca
Abstract: What works and what doesn't when trying to get evaluation findings into the hands of healthcare decision makers? Drawing on current theory on knowledge translation and exchange (KTE), the Interior Health Evaluation Department has developed strategies to facilitate both integrated and end-of-project KTE with health program stakeholders at all levels. This presentation will outline how our team is incorporating a simple framework and 5 key KTE questions (based on John Lavis, McMaster University, Canada): 1) what is the message?, 2) who is the audience?, 3) who is the messenger?, 4) what is the transfer method?, and 5) what are the expected outcomes?. There will also be time for discussion regarding how this KTE framework may be applied in other evaluation settings.
How We Established and Nurtured Evaluation Capacity Within a Complex and Largely Rural and Remote Canadian Health Authority
Presenter(s):
James Coyle, Interior Health Authority, james.coyle@interiorhealth.ca
Jennifer Miller, Interior Health Authority, jennifer.miller@interiorhealth.ca
Abstract: The Interior Health Authority Evaluation Department, formed in 2007, is a small team of dedicated health program evaluators within a large, complex, rural and remote regional health authority. The department's goal is to support learning, decision making, and planning through tailored, thoughtful, and participatory evaluation approaches. Colleagues who do not have dedicated evaluation resources in their organizations often ask, 'How did you start up and sustain your evaluation team?' This presentation will outline the evolution of our team's strategies, evaluation approaches, and activities (e.g., conducting executive-sponsored priority evaluations, consultation and coaching with program teams, and provincial evaluation liaison/representation) over the past 4 years. We will also discuss how our evaluation team ensures that both the process and findings of an evaluation are useful for customers and remain relevant to organizational sponsors.

Session Title: Progress Reporting for US Federal Grant Awards: Templates, Guidance, and Data Standards to Support Effective Program Evaluation
Panel Session 731 to be held in Malibu on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Laurel Haak, Discovery Logic, laurel.haak@thomsonreuters.com
Discussant(s):
Laurel Haak, Discovery Logic, laurel.haak@thomsonreuters.com
Abstract: Funding organizations involved in research and technology development have many approaches to evaluating program effectiveness and mission impact. A common program evaluation tool used across US federal agencies is the grant progress report, prepared by program participants, usually annually and at least at project closeout. This session will explore reporting guidance provided to program participants, prospective longitudinal collection of progress report data, and efforts to create data standards to support cross-agency reporting. A discussant will provide a perspective on the effectiveness of these approaches and moderate a discussion on opportunities for creating a shared set of data elements to support program evaluation.
Using the Logic Model Process to Guide Data Collection on Outcomes and Metrics
Helena Davis, National Institutes of Health, helena.davis@nih.gov
Grantees and community partners in the Partnerships for Environmental Public Health (PEPH) programs of the National Institute of Environmental Health Sciences (NIEHS) have identified the lack of standardized evaluation tools and metrics as one of the biggest challenges for the PEPH program. In response, the NIEHS PEPH team developed an evaluation metrics manual. The manual uses a logic model approach to guide grantees in identifying and measuring their project activities, outputs, and impacts. When working with partners, a wide range of values can be presented; logic models and clear metrics can help ensure that those values are explicitly discussed. The manual identifies potential activities, outputs, and impacts and provides example metrics for each. It addresses five thematic areas: partnerships, leveraging, products and dissemination, education and training, and capacity building. This presentation will focus on capacity building.
Evaluating Collaboration and Team Science in the National Cancer Institute's Physical Sciences: Oncology Consortium
Larry Nagahara, National Institutes of Health, larry.nagahara@nih.gov
The Physical Sciences-Oncology Centers (PS-OCs) program was founded by the National Cancer Institute to unite the fields of physical sciences and cancer biology by creating trans-disciplinary teams and supporting infrastructure. Ultimately, the success of the program will be measured by the generation of new knowledge and new fields of study to better understand the physical and chemical forces that shape and govern the emergence and behavior of cancer. To support a prospective program evaluation, PS-OC program staff have implemented a comprehensive bi-annual progress report, and are currently developing a data model, database, and reporting user interface to mine these data. The progress report collects information on a number of activities, including curriculum development, training, research methods, collaborations, scientific progress, new projects, and publications. This presentation will cover how the progress report was developed, challenges to implementation, and opportunities for evaluation when structured data are collected from program outset.
Creating a Shared Core Set of Reporting Elements
David Baker, Consortia Advancing Standards in Research Administration Information, dbaker@casrai.org
While one might try to envision a single grant progress report for use across all federal agencies, this presents issues for programs with different goals and different audiences. An alternative approach would achieve the efficiencies of the above approach while avoiding the issues of a 'one-size-fits-all' model. Defining and implementing standards and core reporting elements for progress data enables program participants to use a single data model to generate and combine global and specific reporting elements. This presentation will address the current status of efforts to create a set of reporting standards, outline stakeholders, and address what data fields and ontologies are included, as well as approaches to encourage their continued development and use by the community.

Session Title: Valuing Indigenous Rights: Implications of the UN Declaration on the Rights of Indigenous Peoples for Evaluation
Think Tank Session 732 to be held in Manhattan on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Presenter(s):
Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Discussant(s):
Bonney Hartley, Seva Foundation, bhartley@seva.org
Nicole Bowman, Bowman Performance Consulting, nicky@bpcwi.com
Morris Lai, University of Hawaii, Manoa, lai@hawaii.edu
Kalyani Rai, University of Wisconsin, Milwaukee, kalyanir@uwm.edu
Abstract: One of the key values underlying the UN Declaration on the Rights of Indigenous Peoples is the right to self-determination. A key issue in conducting an evaluation is "Whose values?" That is, what values provide the genesis for the evaluation and the evaluation questions? Whose values determine merit and worth? What ways of knowing and communicating guide the design and implementation of the evaluation? To whom is the evaluator accountable? And, who owns the evaluation, data generated, and findings? This session explores implications of the Declaration for evaluation. After an introduction by a panelist who participated in its development, the panel (representing 4 of the 7 geo-cultural regions named by the UN Permanent Forum on Indigenous Issues) will share challenges and lessons learned related to these value-laden questions. A minimum of 30 minutes will be reserved for session participants to explore this topic with the panelists and each other.

Session Title: Meta-Analysis De-Mystified: A Step-by-Step Workshop
Demonstration Session 733 to be held in Monterey on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Pedro Mateu, Western Michigan University, pedro.f.mateu@wmich.edu
Kristin A Hobson, Western Michigan University, kristin.a.hobson@wmich.edu
Robert McCowen, Western Michigan University, robert.h.mccowen@wmich.edu
Abstract: Since the mid-1980s, researchers have employed meta-analysis with the aim of synthesizing results on the effects of interventions, supporting policy and practice, and confirming results from primary studies. With increasingly scarce resources and greater accountability requirements, meta-analysis could become a powerful tool for evaluators to produce objective, defensible, and largely value-neutral evidence, which policy- and decision-makers could reference when forming and revising policies and programs. The purpose of this session is to present a simple road map for conducting a meta-analysis, including lessons learned from our own early meta-analyses. The session will walk participants through formalizing a research question, defining keywords, searching databases, reviewing literature, creating a database, calculating effect sizes, and interpreting results. Gaining an understanding of the lessons learned will help evaluators to avoid committing similar mistakes and anticipate future potential constraints.
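By way of illustration only, and not drawn from the presenters' materials, the sketch below (in Python, with invented study values) shows one step of such a road map: converting study summaries into Hedges' g effect sizes and pooling them with a fixed-effect, inverse-variance model.

    # Minimal sketch, assuming invented study data: fixed-effect pooling of
    # standardized mean differences (Hedges' g).
    import math

    # (mean_treatment, mean_control, pooled_sd, n_treatment, n_control) per study
    studies = [
        (5.2, 4.6, 1.8, 40, 42),
        (6.1, 5.9, 2.1, 55, 50),
        (4.8, 4.1, 1.5, 30, 33),
    ]

    effects, weights = [], []
    for m_t, m_c, sd, n_t, n_c in studies:
        d = (m_t - m_c) / sd                          # Cohen's d
        j = 1 - 3 / (4 * (n_t + n_c) - 9)             # small-sample correction
        g = j * d                                     # Hedges' g
        var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
        effects.append(g)
        weights.append(1 / var_g)                     # inverse-variance weight

    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    print(f"Pooled effect: {pooled:.3f} "
          f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")

A random-effects model would add a between-study variance term to each weight; the inverse-variance logic is otherwise the same.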

Session Title: What Counts as Ethnography?: Valuing Ethnographic Methods in Evaluation
Think Tank Session 734 to be held in Oceanside on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Eve Pinsker, University of Illinois, Chicago, epinsker@uic.edu
Mary Odell Butler, University of Maryland, maryobutler@verizon.net
Michael Lieber, University of Illinois Chicago, mdlieber@gmail.com
Discussant(s):
Jacqueline Copeland-Carson, Copeland Carson & Associates, jackiecc@aol.com
Rodney Hopson, Duquesne University, hopson@duq.edu
Abstract: Although ethnography is used in multiple disciplines, evaluation anthropologists view ethnographic tools as a central part of their toolkit as they seek to develop a methodology that effectively combines the strengths of evaluation and anthropology to address evaluation questions. Yet ethnography is both a valued asset and a challenge for evaluators. We value ethnographic methods because of their utility for addressing questions of meaning, including the relationship of program activities to consciously intended and unintended outcomes. However, traditional ethnographic approaches must be adapted to evaluation timelines and clients' needs for specified kinds of information. We will discuss the challenges in adapting ethnography for evaluation anthropology, using rapid assessment, participant observation, and open-ended questioning in ways that allow us to meet client needs while protecting the scientific integrity of evaluations. Discussing this issue with both anthropologists and evaluators will, we trust, generate new perspectives on applying ethnography in evaluation.

Session Title: Valuing Collaboration: Lessons Learned Through Evaluations of a Federally Funded Kinship Initiative
Panel Session 735 to be held in Palisades on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Kate Lyon, James Bell Associates Inc, lyon@jbassoc.com
Abstract: Collaboration can be the context for evaluation and a subject of evaluation. Panelists will discuss ways to incorporate collaborative activities into evaluation designs and share lessons learned through three federally-funded Kinship Navigator projects. These grants and a national cross-site evaluation were funded by the Fostering Connections to Success and Increasing Adoptions Act of 2008. The first panelist will explore the challenge of navigating internal and external relationships in conducting the project evaluation. The second will discuss how Social Network Analysis informs interactions and collaboration among kinship service providers. The third panelist will focus on challenges and successes of collecting data on collaborative efforts of Kinship Navigator projects in the context of relationship-building between the evaluator and program. The fourth panelist will present preliminary findings on collaboration from the cross-site evaluation and will set the tone for a discussion on the practicality and usefulness of various approaches to assessing collaborative efforts.
Internal and External Collaboration in a Multi-Agency Grant Project
Suzanne Sutphin, University of South Carolina, sutphist@mailbox.sc.edu
South Carolina Connecting for Kids, a three-year federal grant awarded to the South Carolina Department of Social Services, is a Kinship Navigator program that provides access to community services for relatives caring for children in open treatment cases. The project evaluator from The Center for Child and Family Studies (The Center) at the University of South Carolina has pursued a collaborative process with the many community partners involved in providing the navigator services, including the United Way 211 system, HALOS, and the South Carolina Association of Children's Homes and Family Services. In addition to external collaboration, The Center navigates among its internal partners that provide training for kinship caregivers, information and technology services, and Spanish to English translation for the project. The lead evaluator of the Connecting for Kids project will discuss the challenges of conducting an evaluation in collaboration with multiple community partners and internal divisions of The Center.
Visualizing Collaboration: Approach and Preliminary Results From the San Diego YMCA Social Network Mapping Project
Jennifer James, Harder+Company Community Research, jjames@harderco.com
Cristina Magana, Harder+Company Community Research, cmagana@harderco.com
Sophia Lee, Harder+Company Community Research, sophialee@harderco.com
The San Diego YMCA Kinship Navigator is a three-year federally funded project to build a regional support system for San Diego's kinship caregivers. One of the primary goals is to evaluate how effective navigator services are in supporting kinship families and improving outcomes for children while strengthening the network of kinship care in San Diego County. Harder+Company Community Research used Social Network Analysis (SNA) to examine interactions and collaboration among kinship providers during the first year of program implementation. SNA provides a quantifiable basis for understanding the effect of inter-organizational collaboration on individual- and program-level outcomes. This presentation will provide an overview of preliminary findings and recommendations for utilizing SNA in evaluation research.
Valuing Collaboration in Ohio's Enhanced Kinship Navigator Project
Kimberly Firth, Human Services Research Institute, kfirth@hsri.org
Human Services Research Institute (HSRI) is evaluating the Ohio Enhanced Kinship Navigator project, a grantee under the Fostering Connections to Success Act. This presentation focuses on the interplay between building collaboration and evaluating collaboration. Early in the design and implementation stages, HSRI and the seven project sites recognized the important link between Kinship Navigators' success in helping families make connections and the Navigators' own community connections. As the project sites increased their attention to community outreach and networking, HSRI expanded its focus on the sites' collaborative efforts. Kimberly Firth is the liaison between HSRI and the project sites, leading the evaluation of community connections. This presentation will discuss why and how HSRI approached evaluation of project sites' community collaboration and the challenges and successes encountered. Preliminary findings from a survey that uses a collaborative scale to generate social network maps will help illustrate lessons learned.
Assessing Collaborative Relationships and Their Impact on Service Delivery in the Family Connection Discretionary Grants
Kate Lyon, James Bell Associates Inc, lyon@jbassoc.com
Jennifer Dewey, James Bell Associates Inc, dewey@jbassoc.com
Grace Atukpawu, James Bell Associates Inc, atukpawu@jbassoc.com
Chi Connie Vu, James Bell Associates Inc, vu@jbassoc.com
James Bell Associates, Inc. is conducting a cross-site evaluation of the Family Connection Discretionary Grants awarded with funds authorized by the Fostering Connections to Success and Increasing Adoptions Act of 2008. These projects help reconnect family members with children who are in or at risk of entering foster care. The cross-site evaluation examines program processes and outcomes within and across four program areas, one of which is Kinship Navigator. One focus of the evaluation is the collaborative relationships that exist between grantees, evaluators, community service providers, and other stakeholders, particularly public child welfare agencies. The evaluation explores the extent to which collaboration enhances services and influences systems change. This presentation will highlight preliminary results of the cross-site evaluation, specifically the forms of collaboration within the Kinship Navigator projects. Ms. Lyon is a member of the cross-site evaluation team and is the evaluation technical assistance liaison to the Kinship Navigator grants.

Session Title: From Survey Data to Network Mapping and Beyond: Describing Inter-Organizational Coordination and Collaboration Networks
Demonstration Session 736 to be held in Palos Verdes A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Social Network Analysis TIG
Presenter(s):
Michelle Magee, Harder+Company Community Research, mmagee@harderco.com
Gary Resnick, Harder+Company Community Research, gresnick@harderco.com
Sae Lee, Harder+Company Community Research, slee@harderco.com
Raul Martinez, Harder+Company Community Research, rmartinez@harderco.com
Abstract: This session will demonstrate one method for measuring inter-organizational networks using a modified version of the Levels of Collaboration Scale as part of a Funded Provider survey. Based on our experiences evaluating First 5 programs in several California counties, we will demonstrate how survey data from agency respondents are transformed into two-dimensional inter-agency network maps using NetDraw, and then how features of the network (strength of ties, closeness, density) can be distilled from these network maps and analyzed statistically. We will show how to use the maps to describe the nature of collaboration between agencies, how to expand the survey to a broader network of agencies, and how to employ this procedure to measure change over time. Caveats and limitations of this method and how to interpret and communicate the maps to stakeholders and agencies to support program monitoring and improvement will also be discussed.
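As a hedged illustration of the general transformation described above (the presenters use NetDraw; this sketch substitutes the open-source networkx library, and all agency names and ratings are invented), collaboration-scale ratings can be turned into a weighted network whose density and tie strengths are then computed.

    # Minimal sketch, assuming hypothetical survey data: ratings on a
    # Levels of Collaboration-style scale become a weighted agency network.
    import networkx as nx

    # (reporting agency, rated agency, collaboration level 0-5)
    ratings = [
        ("Agency A", "Agency B", 4),
        ("Agency A", "Agency C", 2),
        ("Agency B", "Agency C", 5),
        ("Agency B", "Agency D", 3),
        ("Agency C", "Agency D", 1),
    ]

    G = nx.Graph()
    for src, dst, level in ratings:
        if level > 0:                    # treat any nonzero rating as a tie
            G.add_edge(src, dst, weight=level)

    print("Density:", nx.density(G))                      # share of possible ties present
    print("Degree centrality:", nx.degree_centrality(G))  # relative connectedness
    for agency in G.nodes:                                # mean strength of each agency's ties
        strengths = [d["weight"] for _, _, d in G.edges(agency, data=True)]
        print(agency, "mean tie strength:", sum(strengths) / len(strengths))

Repeating the same calculation on surveys administered at two points in time offers one simple way to quantify change in the network over time.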

Session Title: New Evaluator Training and Capacity Building: Lessons From the Field
Multipaper Session 738 to be held in Redondo on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Linda Schrader, Florida State University, lschrader@fsu.edu
When Theory is Not Sufficient: Intersections Between Teaching Evaluation and Teacher Training
Presenter(s):
Serafina Pastore, University of Bari, serafinapastore@vodafone.it
Abstract: Evaluation is a fundamental part of teachers' work. In Italy, however, we have not yet moved beyond that recognition: teacher evaluation is often reduced to the mere production of models and tools. Within the school system, evaluation is now divided into different branches: student assessment, teacher self-evaluation, school evaluation, and quality evaluation. Yet with respect to the relationship between teaching and evaluation, a systematic reflection on these different aspects is still lacking. What do teachers think about evaluation? How do they experience evaluation in their teaching practice? What are the most frequent difficulties and, in particular, how do they learn to evaluate? Against this background, the current paper, which reports the first step of a larger research project, analyses teachers' representations of evaluation and explores the value of the proposed approach to teaching evaluation.
Using Evaluation Training to Create Change: The Influence of the Evaluation Fellows Program
Presenter(s):
Amelia E Maynard, University of Minnesota, mayn0065@umn.edu
Jean A King, University of Minnesota, kingx004@umn.edu
Abstract: The purpose of this paper is to examine the extent to which an evaluation training program can be used to create influence and lead to social betterment in a specific geographical context. The Evaluation Fellows Program (EFP) at the University of Minnesota is a unique training that brings together evaluators, practitioners, policy makers, and funders around a specific content area such as school reform to develop an understanding of each role and thus improve future evaluations, policies, and programs in that topical area. Unlike other training programs, EFP does not focus on creating professional evaluators, but rather on developing knowledge of how evaluation can be used for improvement. This paper expands Kirkhart's theory of evaluation influence to training and provides an example of such influence through research on the outcomes of the first two EFP cohorts.
Helping New Evaluators Conceptualize, Manage, and Reflect Upon Initial Evaluation Experiences: Easing the Transition from Classroom to Practice
Presenter(s):
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Jennifer Morrow, University of Tennessee, jamorrow@utk.edu
Patrick Barlow, University of Tennessee, Knoxville, pbarlow1@utk.edu
Ann Cisney-Booth, University of Tennessee, acisneybooth@utk.edu
Eric Heidel, University of Tennessee, rheidel@utk.edu
Brenda S Lenard, University of Tennessee, blenard@utk.edu
Amadou B Sall, University of Tennessee, asall@utk.edu
Tiffany Smith, University of Tennessee, tsmith92@utk.edu
Abstract: This paper introduces the development and application of evaluator roles as a framework for students to conceptualize, manage, and reflect on their early evaluation experiences. Evaluation students beginning their first semi-independent evaluation can be overwhelmed with initial and often unanticipated implementation challenges. While early student hands-on evaluation efforts offer maximum learning opportunities, unanticipated initial challenges can be frustrating or disheartening. These initial challenges can be misinterpreted by students as an indication that they are not suited to an evaluation career or that the practice is much more problematic than anticipated. Faculty and students in a seminar class have refined and applied the roles approach to evaluation as a framework for initial practice that helps students ground and manage each aspect of the evaluation in terms of the whole evaluation process, and develop a common language for communicating and reflecting upon initial professional experiences.

Session Title: Redesign: What Happens When Old Approaches Fail
Multipaper Session 739 to be held in Salinas on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Stanley Varnhagen, University of Alberta, stanley.varnhagen@ualberta.ca
Course Evaluation Redesign Process, Outcomes, & Consequences: How One School Simplified Its Tool to Find Out It Wasn't That Simple
Presenter(s):
Tanya Ostrogorsky, Oregon Health & Science University, ostrogor@ohsu.edu
Abstract: This presentation focuses on outcomes experienced during the redesign of the Oregon Health & Science University School of Nursing (OHSU SON) end-of-term course and teaching effectiveness evaluations. The presentation will cover fostering stakeholder engagement in the redesign process, increasing the utility of results, reducing respondent burden, incorporating evaluations into our online learning management system (LMS), and determining the criteria by which courses are referred for curriculum committee review. Additionally, the presenter will discuss how the concept of 'simplify' was used to guide the redesign process, how that touchstone word helped maintain focus on the goals of the redesign, and how that guiding principle unintentionally made other aspects of the evaluation more complex. Other issues to be discussed include how an increasing reliance on team teaching and/or heavy guest lecturer participation in courses has affected the evaluation process, and the challenges of integrating the evaluations into an open-source LMS.
The Value of Employing Descriptive Performance Levels in the Learning Assessment of Army Commanders
Presenter(s):
Linda Lynch, Instructional Systems Specialist, Quality Assurance Office, linda.l.lynch@us.army.mil
Abstract: The purpose of the Army School for Command Preparation (SCP) Tactical Commander Development Program is to instruct new Brigade and Battalion Commanders in leadership and tactical skills prior to taking over a new command. In the past, a pre-/post-course survey using nominal 1-5 item responses demonstrated that significant learning had taken place for each learning objective. Changing the item responses to progressive performance descriptions has presented new opportunities and valuable outcomes for SCP, faculty, and students. For SCP, significant learning is now tied specifically to the performances associated with each learning objective. Faculty can assess learning nuances within each class, and students are clear about the progressive performance levels associated with their new role in command.
Qualitative Feedback in Online Course Evaluations: A standardized analysis of written comments
Presenter(s):
David Nelson, Purdue University, nelson8@purdue.edu
Abstract: Despite significant research on the role of gender in student course evaluations, analyses of qualitative student feedback occupy a minor place in the literature. This paper explores the influence of gender on the type of qualitative feedback that instructors receive. Using a rubric, written comments on over 15,000 student course evaluations, conducted exclusively online at a large research university, were analyzed to determine if male or female instructors were more likely to receive comments that were personalized or unrelated to course instruction. Comments were also examined to determine if the gender of the student influenced the likelihood of such feedback.
Problems with Multi-purpose Postsecondary Course Evaluations
Presenter(s):
Stanley Varnhagen, University of Alberta, stanley.varnhagen@ualberta.ca
Jason Daniels, University of Alberta, jason.daniels@ualberta.ca
Brad Arkison, University of Alberta, brad.arkison@ualberta.ca
Abstract: For course evaluations in postsecondary education to be appropriately valued, the process and instruments need to effectively support different evaluation goals. At our post-secondary institution, a single end-of-course evaluation is conducted, with the results serving at least three distinct functions: assisting with promotion and tenure decisions, providing the instructor with formative feedback, and providing information to students to assist in course and section selection. Of these three uses, the most prevalent is informing promotion and tenure decisions. This summative emphasis, including instituting evaluation procedures (i.e., timing, questions, etc.) best suited to that purpose, has implications for the evaluations' broader usefulness. Specifically, the summative focus negatively affects the usefulness of the formative feedback to the course instructor. In this paper we present a case study of how summative and formative uses of course evaluation data are affected by a one-size-fits-all approach, and we suggest an alternative approach.

Session Title: Expanding Organizational Advocacy Capacity and Creating a Legacy for Change
Panel Session 740 to be held in San Clemente on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Claire Brindis, University of California, San Francisco, claire.brindis@ucsf.edu
Discussant(s):
Astrid Hendricks, Hendricks Consulting, ahendricks2@gmail.com
Abstract: In 2001, The California Endowment funded 19 statewide clinic associations to strengthen their capacity to support the policy and operational needs of community clinics in California. The Endowment partnered with the Philip R. Lee Institute for Health Policy Studies, UCSF, to implement a three-part, participatory evaluation approach: 1) a mixed-method evaluation toolkit that used an outcomes framework; 2) creation of a learning environment to strengthen grantee evaluation activities; and 3) broader communication of the findings. The evaluation findings suggest that nearly all grantees made significant progress in achieving the Program outcomes, resulting in a strengthened health care safety net and access to care for millions of Californians. This panel is an opportunity to step back and reflect on the evaluation findings and their applicability to the field. Additionally, we will explore the evaluation experience from the funder, grantee, and evaluator perspectives and discuss what's useful to whom.
Evaluating the California Endowment Clinic Consortia Policy and Advocacy Program Evaluation: Lessons for Evaluators of Advocacy and Policy Change
Annette Gardner, University of California, San Francisco, annette.gardner@ucsf.edu
Sarah Geierstanger, University of California, San Francisco, sara.geierstanger@ucsf.edu
Claire Brindis, University of California, San Francisco, claire.brindis@ucsf.edu
Annette Gardner was the Principal Investigator of the 8-year evaluation of The Clinic Consortia Policy and Advocacy Program. In addition to designing and executing the mixed-method evaluation, she and her UCSF colleagues have linked the design and findings to the broader discourse on expanding advocacy capacity. She will provide an overview of the evaluation findings, as well as the methods and tools that evaluators and advocates can readily use, including monitoring advocacy strategies, assessing impact, and providing real-time feedback on policy change strategies. Last, Annette will discuss the relevance of the evaluation findings for researchers and evaluators of advocacy and policy change.
The Grantee Perspective: The Evaluation Experience and Leveraging the Findings
Louise McCarthy, Community Clinic Association of Los Angeles County, lmccarthy@ccalac.org
Louise McCarthy is the Vice President of Governmental Affairs, and her organization was funded under The Endowment's Clinic Consortia Policy Program. The Community Clinic Association of Los Angeles County (CCALAC) represents 45 non-profit community and free clinics that operate 132 primary care sites throughout Los Angeles County. In addition to participating in the longitudinal data collection activities, CCALAC's grant-funded activities were described in a detailed case study, Expanding the Public-Private Partnership Program (PPP) to Meet the Needs of the Medically Underserved. Louise will speak to her role as a partner in the evaluation as well as the gains to her organization and member clinics from the evaluation findings and from participating in the evaluation.
The Funder Perspective: Supporting and Learning From a Multi-year Advocacy Capacity and Policy Change Initiative
Lori Nascimento, The California Endowment, lnascimento@calendow.org
Lori Nascimento, Evaluation Manager, will describe The California Endowment's overall theory of change and programmatic goals for its Clinic Consortia Policy and Advocacy Program. She will explain The Endowment's interest in supporting clinic consortia as part of a larger initiative, the Community Clinics Initiative. Lori will summarize the ways that the UCSF evaluation has helped TCE advance its thinking on health policy change and organizational advocacy capacity. She will also provide practical suggestions for others interested in developing and sustaining advocacy capacity. Last, she will describe the shifting context for foundation support in this arena and how TCE has evolved its theory of change under the Patient Protection and Affordable Care Act (PPACA).

Session Title: Valuing Voice: Considerations for the Use of Concept Mapping in Communities to Express and Activate Diverse Perspectives in Planning and Evaluation
Panel Session 741 to be held in San Simeon A on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Sarita Davis, Georgia State University, saritadavis@gsu.edu
Discussant(s):
Sarita Davis, Georgia State University, saritadavis@gsu.edu
Abstract: This panel examines the use of concept mapping for surfacing and incorporating the values and perspectives of diverse stakeholders in planning and evaluation. Three participative inquiry and practice exemplars are presented, each illuminating how concept mapping aided in addressing the ontological, methodological, and participatory issues associated with articulating stakeholder concerns and values. In the first exemplar, concept mapping was used to articulate what affects health disparities from the perspective of those living in distressed communities in ways that provided new understanding for researchers and providers. The second exemplar integrated and sequenced concept mapping with another participatory research method to expand the dialogue, collaboration, and action of immigrants from diverse cultural and linguistic backgrounds. The third exemplar involved the use of concept mapping for seeking, facilitating and organizing the participation of program beneficiaries to make informed decisions about assessing the outcomes of early childhood interventions.
Whose Voices Describe Reality in a Community's Effort to Reduce Health Disparities?
Mary Kane, Concept Systems Inc, mkane@conceptsystems.com
Beneta Burt, Jackson Roadmap to Health Equity Project, benetaburt@bellsouth.net
African American communities have traditionally experienced greater health risks than other groups. In 2003, a community authorship process using concept mapping sought the voices of the community of Jackson, Mississippi, to build a Health Disparities Elimination Roadmap. Over 100 people answered this question: "A specific thing that causes African Americans to get sick more and die sooner than people in other groups is ...." Ensuring that community members trusted that the project was designed for them to be the project's "authors" opened the door to the development of over 400 responses. The resulting concept map showed the final 132 statements in seven conceptual areas. Priorities focused on what the community could do immediately. The resulting neighborhood models are integrative, creative, and effective in changing the community's health status. This presentation will describe how the community's voices combined with those of agencies and medical service providers to create policy, practice, and community health changes.
The Methodological Integration of Participatory Methods to Amplify Immigrant Resident Perspectives
Scott Rosas, Concept Systems Inc, srosas@conceptsystems.com
Nasim Haque, Wellesley Institute, nasim@wellesleyinstitute.com
This presentation focuses on the methodological and technical adaptations of participatory methods that surfaced diverse stakeholder interests and enhanced residents' capacity to make a difference in their neighborhood. This inquiry successfully sequenced and integrated photovoice and concept mapping, resulting in a conceptual framework of factors influencing immigrants' health and well-being, supported by images with captions describing their experiences. This emergent, stakeholder-produced model fostered new opportunities for immigrant residents from very diverse cultural and linguistic backgrounds to convey a sophisticated and nuanced view of how their neighborhood influenced health and well-being. The model enabled in-depth dialogue regarding factors affecting health in a densely populated, low-income, urban neighborhood and means for initiating action for change. Immigrant residents were able to clearly and accurately identify areas where values and potential for action intersected, yielding a consensus perspective that maximized buy-in and support and thereby increased the likelihood of success.
Engaging Program Beneficiaries to Ensure Voice Equity in Program Evaluation
Katy Hall, Concept Systems Inc, khall@conceptsystems.com
Donna Noyes, New York State Department of Health, dmn02@health.state.ny.us
The NY State Department of Health's Early Intervention Program (EIP) for infants and toddlers with disabilities serves 70,000 children and families. Given the program's scope, demand for outcome data is high. Beginning in 2005, a three-phase project was conducted to develop an outcome measurement system. In Phase I, concept mapping was used with stakeholders, including beneficiaries, to develop an outcome framework from which measures to assess outcomes could be designed and tested. In Phase II, two impact scales (child and family) were developed and tested based on these outcomes. In Phase III, data collected from stakeholders indicated that the project was successful in identifying meaningful outcomes and developing an effective measurement approach for statewide implementation. This presentation will describe how the values and decisions of parents of children in the early intervention program were given credibility and combined with those of service providers, state and county EIP staff, and experts throughout the project.

Session Title: Using an Evaluation Logic Model to Drive Integration of Statewide Initiatives
Demonstration Session 742 to be held in San Simeon B on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Karen Childs, University of South Florida, kchilds2@usf.edu
Jose Castillo, University of South Florida, jmcastil@usf.edu
Abstract: Calls for integrating academic, behavioral, and social-emotional services to meet the needs of students have increased in recent years. One approach gaining momentum involves integrated multi-tiered systems of student supports (MTSSS) that match the focus and intensity of instruction to student needs based on data across all three domains (i.e., academic, behavioral, social-emotional). However, little information is available on how to integrate existing initiatives across these domains into one unified system of service delivery. This demonstration will describe how one state is using an evaluation approach referred to as logic modeling to inform the development, implementation, and evaluation of an integrated MTSSS. The process used to involve key stakeholders in the development and use of the evaluation model, the components of the model (including evaluation questions, tools, and procedures), and next steps will be shared.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Clashing Hats: Moving From External to Internal Evaluation Roles
Roundtable Presentation 743 to be held in Santa Barbara on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Corrie Whitmore, Southcentral Foundation, cwhitmore@scf.cc
Wendi Kannenberg, Southcentral Foundation, wkannenberg@scf.cc
Abstract: The presenters of this roundtable have served as internal evaluators, external evaluators, and researchers across a variety of health and education projects. While these diverse roles require similar skills, they are 'clashing hats,' each reflecting different expectations and needs. We will share our perspectives on the similarities and differences between life as internal and external evaluators with those new to the field and invite participants to reflect on the importance of context in their own work. This roundtable will contribute to the body of knowledge in the field of evaluation by offering participants the chance to explore the hats they wear and increase our understanding of the factors affecting evaluation across contexts.
Roundtable Rotation II: Straddling the Internal and External Evaluator Role: Issues of Independence
Roundtable Presentation 743 to be held in Santa Barbara on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Norma Martinez-Rubin, Evaluation Focused Consulting, norma@evaluationfocused.com
Abstract: Evaluators with the option to select the projects or programs they evaluate value the flexibility afforded them by seniority, expertise, or economic stability. The economic recession has created an atmosphere with fewer evaluation positions in which the independence valued by most evaluators truly exists. Internal evaluators in particular straddle the role of an external, expert consultant on evaluation with a concurrent role as an internal project implementer. This undoubtedly influences the direction that a project or program design might take, as well as the findings reported from evaluation studies. This roundtable session will explore how best to uphold values of independence, truth and disclosure, and collegiality when the evaluator has multiple roles within an organization.

Session Title: Exploring Data Visualization for Evaluation
Multipaper Session 744 to be held in Santa Monica on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Data Visualization and Reporting TIG
Chair(s):
Amy A Germuth, EvalWorks, LLC, amygermuth@evalworks.com
Discussant(s):
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
Abstract: Data visualization is the process of graphically or visually representing data or concepts. Media, web design, and marketing have all created an environment where evaluation users - clients, program participants, funders - expect high-quality, memorable graphics when evaluation results are communicated. Thus, as an evaluator it is useful to have data visualization skills (e.g., an understanding of the variety of ways to visually represent data and the knowledge of visualization software) as part of one's evaluation repertoire. In this session the three presentations will provide an overview of data visualization tools, demonstrate a specific use of data visualization, and explore ways to visualize qualitative data. In addition to highlighting the value of data visualizations, these presentations will explore the potential challenges and concerns of using visualization tools in evaluation. The session will interest evaluators who want an introduction to the area as well as those who are more advanced and want inspiration.
What to Consider When Choosing a Data Visualization Application or Approach
Christopher Lysy, Westat, cplysy@gmail.com
Do you have an interest in data visualization but do not know where to start? Applications like Tableau, ManyEyes, Geocommons, Google Fusion Tables, Excel, Illustrator, SPSS, R, Flare, Processing, Protovis, and countless others can be used in the process of creating data visualizations. Which application, or applications, should you choose for your own work? Project needs, technical expertise, and budget requirements will all play significant roles in the decision. During this presentation I will showcase a variety of data visualization tools classified by most common user (Analyst, Designer, Programmer, and Practitioner), highlighting the benefits and shortcomings of each class. The goal is to provide you with an awareness of the vast array of tools at your disposal and give you enough information to help you start your search and make an informed decision.
Using Interactive Data Visualization to Increase Transparency, Engagement, and Understanding: Doing the Data Dance with Tableau
Susan Kistler, American Evaluation Association, susan@eval.org
Tableau Software creates data visualizations and promises "Data in. Brilliance out." You know what? They're pretty on target. I used to teach statistics. We'd spend hours doing what Tableau can do better in a matter of minutes. Tableau's tools have allowed AEA to share interactive real-time data visualizations with stakeholders. Our hope is to lead the way in terms of transparency, information access, and demonstration of ways to present and engage with data. During this presentation I will demonstrate the basic functions of Tableau, discuss its strengths and limitations, and show before and after examples of Tableau data visualizations. Perhaps more importantly, I'll use the demonstration of Tableau to raise larger questions such as, What are the implications for data privacy? What guidance do stakeholders need to understand a data set? And, how can using a tool that facilitates data visualization improve or hinder analyses?
Visualizing Qualitative Data
Stuart Henderson, University of California, Davis, stuart.henderson@ucdmc.ucdavis.edu
Much data visualization software and attention is centered on visually representing quantitative data. Less attention has been paid to how to display qualitative or textual data in a visual form. In this presentation, I will introduce current ways that qualitative data are being visualized, for example through word clouds, tree representations, and mind maps. In addition, I will discuss ideas on and possibilities for moving qualitative data visualization forward. The presentation will focus mainly on the benefits of visually representing qualitative data, but I will also consider critical questions that may arise with its use. For instance, how does an evaluator maintain the integrity, tone, depth and essence of textual data in visual representations? And how might visual representations strengthen or weaken qualitative data?
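As one minimal, hypothetical illustration of the word-cloud idea mentioned above (not the presenter's method; the comments and stop-word list are invented), the frequencies behind a word cloud can be derived from open-ended responses and then passed to any charting or word-cloud tool.

    # Minimal sketch, assuming invented open-ended survey comments:
    # count word frequencies to feed a word cloud.
    import re
    from collections import Counter

    comments = [
        "Staff were welcoming and the program felt safe",
        "The program helped my family feel welcome and supported",
        "Supportive staff, but transportation to the program was hard",
    ]

    stopwords = {"the", "and", "was", "were", "to", "my", "a", "but", "felt", "feel"}
    words = []
    for text in comments:
        words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]

    for word, count in Counter(words).most_common(10):
        print(word, count)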

Session Title: Understanding Visitors Through Institution-wide Evaluation Studies
Panel Session 745 to be held in Sunset on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
Abstract: This panel of four internal evaluators will present various institution-wide studies from science museums. The studies seek to answer questions that span the breadth and depth of the experiences the museums offer their visitors. Internal evaluation departments seek to balance both the interests of various stakeholder groups as well as attending to the needs of project funders. The information gained through the exit survey has been used to demonstrate the perceived value of the institution, allowing the organizations to become stronger institutions within their communities. The panelists will discuss how the institution-wide studies have generated respect and space for the evaluation of outputs and have supported system-wide culture shifts in the value of the visitor's voice in making decisions.
Understanding and Using Visitors' Values of the Science Museum of Minnesota
Sarah Cohn, Science Museum of Minnesota, scohn@smm.org
The Science Museum of Minnesota has undertaken its second audience study. The study was initially driven by the marketing and development departments, but many departments within the organization wanted to ask visitors questions. The study focused on understanding visitor motivations and drivers for attending the museum, membership decisions, how to attract more adult-only groups, and the other leisure and entertainment destinations with which the museum competes. Though not everything about the museum is included in the interview, many departments appreciate the study because it increases their understanding of the museum's visitors. The value of different questions and information varies across museum departments, but everyone sees potential in the data for making their programs better for the organization's audience. This presentation will discuss how various departments have adopted the data for their own purposes and how the study has generated cross-institutional discussion around what the institution considers valuable.
Tracking the Visitor Experience at the Museum of Science, Boston
Anna Lindgren-Streicher, Museum of Science, Boston, alstreicher@mos.org
The Visitor Experience Monitoring (VXM) project at the Museum of Science in Boston monitors the quality of the visitor experience and stewards the Museum's relationships with visitors. VXM is intended to enhance the capabilities of the Museum's leadership team responsible for the visitor experience to make decisions regarding institutional priorities and opportunities for improvement. Email addresses are collected from a random sample of visitors, who then receive an online survey following their visit. This survey includes items on the visitor's perception of the museum's quality and value, specific aspects of the visit, activities the visitor experienced, and demographics. Yearly reports provide in-depth analysis and discussion of data, while quarterly reports allow for tracking of visitor perception of value and quality on a regular basis. The project also provides a rich database for further analyses, such as an audience segmentation study undertaken earlier this year.
Getting a Picture of the Visiting Public at the Saint Louis Science Center
Elisa Israel, St Louis Science Center, eisrael@slsc.org
Since 1992, the Saint Louis Science Center has been collecting data about the demographics and visitation trends of its general public visitors. This presentation will describe the Science Center's Topical Visitor Survey (TVS), a seasonal exit interview, and provide examples of the ways in which the Science Center uses this data to better understand its audience and make decisions. The data collected via the TVS serve as a cornerstone for describing Science Center visitors - who they are and what they do during a typical visit. The TVS is also used to collect data on specific topics relevant to pending decisions, such as the selection of films for the OMNIMAX(tm) theater. This tool, one component of a large array of evaluation activities, is key in helping Science Center staff to better understand the visitors they serve.
$2 Days: Attracting Under-represented Communities to the Oregon Museum of Science and Industry
Hever Velazquez, Oregon Museum of Science and Industry, hvelazquez@omsi.edu
The Oregon Museum of Science and Industry provides a reduced admission price on the first Sunday of each month ($2 Days) in order to reach under-represented audiences (e.g., Hispanics) who typically do not have access to informal learning institutions such as science museums. The OMSI Evaluation and Visitor Studies division wanted to conduct formal evaluation of this program in an effort to measure its success and establish the groundwork for conducting future visitor research using more effective sampling techniques and to apply cultural competence guidelines for staff recruitment, data collection and data analysis. This presentation shows how targeted efforts can be effectively used to address larger institutional goals with respect to understanding the diversity of its visitors and having a culturally competent evaluation staff to address those visitors.

Session Title: The Role of Evaluation in Informing Program Functioning and Public Perception
Panel Session 746 to be held in Ventura on Friday, Nov 4, 2:50 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Leigh D'Amico, University of South Carolina, damico@mailbox.sc.edu
Abstract: The beliefs, values, and expectations of multiple stakeholders drive an initiative's policies and practices and impact the evaluation process and perceptions of success. The South Carolina Writing Improvement Network, which receives state funding and submits an annual report documenting impact, provides professional development and technical assistance to improve students' language arts and writing proficiency. Professional development is customized to the identified needs of school districts, schools, and teachers across South Carolina. In an accountability climate with limited funding, it is critical to effectively and appropriately understand and report program impact. Evaluations of the professional development focus on multiple levels of impact including student achievement. Panelists will explain how the values of those associated with the program including the program director, evaluator, teachers, administrators, and state-based entities have shaped the provision of services, implementation and use of evaluation plan, and measures of success.
Evaluation to Inform Program Planning and Delivery and Provide Evidence of Impact
Leigh D'Amico, University of South Carolina, damico@mailbox.sc.edu
As the lead evaluator on this project through the Office of Program Evaluation at the University of South Carolina, Leigh will share information about the evaluation design and implementation as well as the process of working with the South Carolina Writing Improvement Network staff through a utilization-focused evaluation approach. She will discuss the complexities of understanding and defining progress, challenges in analyzing student impact, and implications of communicating results to stakeholders.
The Impact of Evaluation on the Operation of a Statewide Initiative
Ellen James, South Carolina Writing Improvement Network, jamesw2@mailbox.sc.edu
As the director of the South Carolina Writing Improvement Network, Ellen will provide insight into the provision of high quality professional development that is focused on improving student achievement in writing and language arts. She will share her experiences in working in a utilization-focused evaluation model, detail how she has used evaluation information to shape organizational priorities, and discuss opportunities and challenges in sharing program performance and impact results with state officials.
The Impact of Evaluation in Program Planning and Delivery
Hannah Baker, South Carolina Writing Improvement Network, hbaker@mailbox.sc.edu
As a full-time consultant of the South Carolina Writing Improvement Network, Hannah plans and facilitates professional development designed to improve student achievement in writing and language arts. She has been extensively involved in the evaluation process. She will share her experiences in implementing the evaluation plan and understanding the impacts of the program.
