Session Title: Tools for Improving the Quality of Evaluations: Four Examples From the Field
Panel Session 862 to be held in Lone Star A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Presidential Strand and the Environmental Program Evaluation TIG
Chair(s):
Britta Johnson, United States Environmental Protection Agency, johnson.britta@epa.gov
Abstract: Like evaluators at many federal agencies, those at the U.S. Environmental Protection Agency (EPA) face a number of constraints that can hamper the quality of our evaluation efforts, including poor data quality, missing data, and resource and time constraints. The U.S. EPA’s Evaluation Support Division has used several tools to help mitigate the impact of these constraints. Through the examination of four case studies, this panel session will provide practical examples of how evaluability assessment, expert/peer review, and integrating evaluation into the design of a program are valuable tools for improving: 1) the quality of measures, 2) data collection strategies and outcome data, 3) evaluation design, and 4) our understanding of the quality and availability of data for evaluation. The panel will describe how each tool was applied during the conduct of an evaluation and identify which aspects of evaluation quality were improved.
Integrating Evaluation Into Program Design
Matt Keene, United States Environmental Protection Agency, keene.matt@epa.gov
Building evaluation into the design of programs presents the U.S. Environmental Protection Agency (EPA) with opportunities to improve the quality of its evaluations. In cooperation with the Paint Product Stewardship Initiative (PPSI), the U.S. EPA established an evaluation committee to systematically integrate participatory evaluation into the design of the Oregon Paint Stewardship Pilot Program. In this presentation we review the process that the evaluation committee used to integrate evaluation into the program’s design; summarize the positive and negative effects on the development of questions and measures, evaluation design, and data collection; and assess the challenges and benefits of working collaboratively to investigate the effectiveness and impact of management strategies. Finally, we draw a relationship between the history and status of this evaluation and some criteria that determine which programs warrant the resources necessary to build evaluation into their design, and which ones do not.
Using Evaluability Assessment to Understand Data Limitations and Help Design an Appropriate Evaluation
Michelle Mandolia, United States Environmental Protection Agency, mandolia.michelle@epa.gov
In order to increase awareness of the rules and regulations governing the construction and operation of ethanol plants in its Region, EPA Region 7 staff published a compliance assistance manual for these facilities. Region 7 was interested in evaluating the success of the manual in improving industry compliance with relevant rules and regulations. To begin the evaluation, Region 7 requested that an evaluability assessment (EA) be conducted to determine if there was enough information available to answer the desired evaluation questions. The EA helped shape the evaluation’s information collection plan in light of what data were available and which collection approaches were allowable and feasible. The results of this assessment were used to inform the more detailed evaluation methodology.
Using Expert/Peer Review to Improve the Quality of an Evaluation Methodology: Tribal General Assistance Program Case Study
Yvonne Watson, United States Environmental Protection Agency, watson.yvonne@epa.gov
Tracy Dyke Redmond, Industrial Economics Inc, tdr@indecon.com
The primary purpose of the EPA’s Tribal General Assistance Program (GAP) is to help federally recognized tribes and intertribal consortia build the basic components of a tribal environmental program, which may include planning, developing, and establishing the administrative, technical, legal, enforcement, communication, and outreach infrastructure. An evaluation was conducted to determine how effective GAP has been in building tribal environmental capacity. To improve the rigor and quality of the evaluation, two expert review panels were convened, the first academic and the second Tribal, to identify concerns with the methodology. As the evaluation advisor, Yvonne Watson will provide an overview of the peer review process and the results of the academic and Tribal peer reviews, highlighting similarities and differences between the reviews and the importance of using the peer review process to ensure cultural sensitivity in addressing concerns related to the quality of the evaluation.
Using Expert/Peer Review to Improve the Quality of an Evaluation Methodology: Compliance Assistance Outcomes Case Study
Terell Lasane, United States Environmental Protection Agency, lasane.terell@epa.gov
EPA provides compliance assistance to the regulated community, including local governments and tribes, to help them understand their regulatory obligations and to prevent violations. Some of these assistance activities lead to behavior changes that result in compliance improvements and environmental benefits. The Office of Compliance (OC) within EPA’s Office of Enforcement and Compliance Assurance (OECA) is implementing a pilot project to evaluate compliance assistance (CA) outcomes, using a quasi-experimental design to determine whether there is a statistically significant correlation between CA and behavior change for auto-body repair shops in Massachusetts that are offered CA for the new Clean Air Act Area Source Rule related to paints and coatings (Subpart 6H), and limited assistance on Resource Conservation and Recovery Act (RCRA) regulations. As the advisor for the evaluation, Terell Lasane will share the results of the expert review of the statistically valid pilot, noting how the reviewers’ feedback was instrumental in addressing data quality and design issues.

Session Title: Integrating Evaluation Into Everyday Organizational Practice: A Complex Systems Perspective
Think Tank Session 863 to be held in Lone Star B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Srik Gopalakrishnan, New Teacher Center, srik2004@gmail.com
Discussant(s):
Royce Holladay, Human Systems Dynamics Institute, rholladay@hsdinstitute.org
Abstract: Making evaluation an integral part of an organization’s everyday operations has long been held as an ideal in the field. This would mean that evaluation is integrated into organizational norms and culture and becomes a part of the organization’s work ethic. However, organizations are complex systems, and embedding evaluation in organizational culture requires a deep understanding of how complex systems, especially complex human systems, function. This session will explore complexity from a human systems dynamics perspective and engage participants in the question, “What would it take to integrate evaluation into ongoing practice in a complex human system?” Break-out groups will address various facets of complexity, such as self-organization, simple rules, and dynamical change, and the implications of those facets for specific approaches to integrating evaluation into organizational practice. The session will be structured in a modified “world café” format so that participants have a chance to rotate through several discussions.

Session Title: Advances in Stakeholder Consultation for Evaluation Quality
Panel Session 864 to be held in Lone Star C on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Discussant(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Abstract: Evaluation quality depends in part on consultation with the many stakeholder groups that have an interest in a program or its evaluation. Yet more concrete guidance is needed for identifying stakeholders, engaging them in meaningful ways, efficiently incorporating their suggestions into evaluation planning, and reengaging them to make sense of findings. The three presentations represent many years of practitioner experience in doing so. Hallie Preskill will present experience to date in using a concrete, step-by-step process to engage stakeholders. Bill Trochim will reflect on years of experience in using concept mapping for this purpose. Amelie Ramirez will describe the latest in a long-established series of Delphi surveys with Latino community leaders and researchers, setting priorities for research and evaluation on childhood obesity prevention in Latino children. Discussion will focus on the cross-cutting principles and practices that affect evaluation quality.
Strategies for Involving Stakeholders in Developing Evaluation Questions
Hallie Preskill, FSG Social Impact Advisors, hallie.preskill@fsg-impact.org
When the opinions, concerns, and priorities of stakeholders are solicited early in the evaluation process, the results are more likely to address stakeholders’ information needs and be useful for a range of purposes, among them improving program effectiveness, affecting policy decisions, and/or instigating behavioral change. Engaging a wide range of stakeholders in the question development process also provides opportunities to question assumptions, explore competing explanations, and develop consensus around what the evaluation should address. Finally, recommendations that result from an evaluation in which stakeholders have been involved are more likely to be accepted by a broader constituency and implemented more fully and with less resistance. This presentation will describe the following five-step process for involving stakeholders in developing evaluation questions: 1) preparing for stakeholder engagement, 2) identifying potential stakeholders, 3) prioritizing stakeholders, 4) considering potential stakeholders’ motivations for participating, and 5) selecting a stakeholder engagement strategy.
Mapping Stakeholder Views of Evaluation Questions and Plans
William M Trochim, Cornell University, wmt1@cornell.edu
There is consensus in evaluation that stakeholders need to be integrally involved in the development of evaluation questions and plans, but it is not clear how we might do this most effectively. One strategy is to use structured methodologies that enable stakeholders to create visual representations such as maps or models that reflect their thinking. Two such modeling approaches are considered. The first, structured concept mapping, involves stakeholders in brainstorming a set of ideas (such as the questions that might be addressed in an evaluation), individually sorting and rating those ideas, and then developing maps using multivariate statistical methods (multidimensional scaling and hierarchical cluster analysis). The second uses a structured “protocol” to generate a traditional “columnar” logic model and causal pathway model of the program and its relationship to outputs and outcomes. The challenges to accomplishing these kinds of structured methods as a foundation for stakeholder-driven evaluation are discussed.
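For readers curious about the multivariate core of structured concept mapping, the short Python sketch below is a hedged illustration, not the presenter’s software: the card-sort data, pile assignments, and cluster count are hypothetical, and only the multidimensional scaling and hierarchical clustering steps are shown.

# Hypothetical sketch: stakeholders sort brainstormed statements into piles; the
# co-sort matrix is projected with multidimensional scaling and partitioned with
# hierarchical clustering.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

# Each row is one stakeholder's sort: statement i goes into pile sorts[k][i].
sorts = np.array([
    [0, 0, 1, 1, 2, 2],   # stakeholder 1 sorted six statements into three piles
    [0, 1, 1, 2, 2, 2],   # stakeholder 2
    [0, 0, 0, 1, 1, 2],   # stakeholder 3
])

# Similarity = number of stakeholders who placed two statements in the same pile.
similarity = sum((s[:, None] == s[None, :]).astype(float) for s in sorts)
dissimilarity = len(sorts) - similarity

# Two-dimensional point map of the statements (multidimensional scaling).
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical (Ward) clustering of the mapped points, cut into three clusters.
clusters = fcluster(linkage(coords, method="ward"), t=3, criterion="maxclust")
print(np.round(coords, 2))
print(clusters)

In published descriptions of concept mapping the clustering is typically run on the scaled coordinates, as here; the full procedure also layers stakeholder importance ratings onto the resulting clusters, which this sketch omits.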
A Nationwide Delphi Process to Set Priorities for Research and Evaluation to Prevent Obesity Among Latino Children
Amelie Ramirez, University of Texas Health Science Center at San Antonio, gallion@uthscsa.edu
Kipling Gallion, University of Texas Health Science Center at San Antonio, gallion@uthscsa.edu
Latino children have high obesity rates, and there is an urgent need for research and evaluation to address the epidemic. Salud America! is the Robert Wood Johnson Foundation Research Network to Prevent Obesity Among Latino Children, focusing on policy and environmental solutions to Latino childhood obesity. To identify priority research and evaluation on effective strategies, Salud America! undertook a Delphi survey with over 1,000 Latino community leaders and researchers interested in the issue. The Delphi survey, a widely used method for consensus-building, went through five main steps: identifying main research areas to be assessed; selecting participants; designing and pilot-testing the questionnaire; administering the three-round survey between May 1 and July 30, 2008 (monitoring participation, analyzing data, and providing feedback); and reporting results (http://www.salud-america.org/Files/Delphi_Executive_Summary.pdf). The Delphi survey results helped establish a research and evaluation agenda that guided a competitive grant process and funded 20 U.S. scientists to conduct pilot research projects.

Session Title: Striving for Quality During Organizational Change: Three Aspects of Responsible Evaluation
Multipaper Session 865 to be held in Lone Star D on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Judy Lee,  Independent Consultant, judymlee@msn.com
Discussant(s):
Christian Connell,  Yale University, christian.connell@yale.edu
Ensuring Quality: Learning While Doing
Presenter(s):
Benjamin Kerman, Annie E Casey Foundation, bkerman@aecf.org
Abstract: Learning while doing, an approach to ongoing reflective organizational development, involves the proverbial ‘plane building aloft.’ In 2004, Casey Family Services set out ambitiously to transform itself midstream from a successful provider of long-term foster care to an agency that made sure every child would have a permanent family and exit foster care as soon as safely possible. With new organizational goals and a commitment to pilot and refine a set of leading edge permanency promoting practices, the agency adopted the ‘learn while doing’ approach. A critical component of this approach was the linking of management and evaluators in a close collaboration that would help direct the organizational change plan and refine the emergent practice model. This paper summarizes insights gained during the initial five year implementation evaluation, discussing evaluation ‘quality assurance’ when practice models, organizational structures, service populations and information needs were all subject to change.

Session Title: What Am I Supposed to Do With Three-Way Crosstabs? An Introduction to Log Linear Models
Demonstration Session 866 to be held in Lone Star E on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Eric Canen, University of Wyoming, ecanen@uwyo.edu
Nanette Nelson, University of Wyoming, nnelso13@uwyo.edu
Abstract: A common analysis method for cross-tabulated data is a Chi-Square test for independence. When considering only two factors, the choice of Chi-Square is straightforward; however, with three or more factors, the analysis of independence is more complicated. For instance, you can test whether all the factors are independent or whether one factor is independent of the two others. Chi-Square methods can be used in these analyses; however, such analyses are not easily completed using common statistical software. Log-linear models offer an easily understood and easily implemented alternative. This demonstration will provide step-by-step guidance on how to implement this analysis technique. The presenters will teach by example using data from an investigation of differences in smoking-related behaviors and attitudes pre- and post-implementation of smoke-free ordinances. Participants will have time to ask questions and will receive a handout to guide them through their own analyses.
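As a concrete illustration of the session’s premise (not drawn from the presenters’ materials), the Python sketch below fits two log-linear models to a hypothetical 2x2x2 table of smoking-related counts. Log-linear models are Poisson regressions on the cell counts, and the drop in deviance between nested models is a likelihood-ratio chi-square test.

# Hypothetical counts for a 2x2x2 crosstab: ordinance period x smoking status x attitude.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from itertools import product

rows = list(product(["pre", "post"], ["smoker", "nonsmoker"], ["favorable", "unfavorable"]))
df = pd.DataFrame(rows, columns=["period", "smoker", "attitude"])
df["count"] = [32, 48, 55, 25, 20, 60, 70, 18]   # made-up cell counts

# Model 1: complete (mutual) independence of the three factors.
indep = smf.glm("count ~ period + smoker + attitude",
                data=df, family=sm.families.Poisson()).fit()

# Model 2: attitude jointly independent of the period-by-smoker association.
joint = smf.glm("count ~ period * smoker + attitude",
                data=df, family=sm.families.Poisson()).fit()

# Smaller deviance means better fit; the drop in deviance (1 df here, for the
# period:smoker term) is a likelihood-ratio chi-square test of that association.
print("mutual independence deviance:", round(indep.deviance, 2))
print("joint independence deviance: ", round(joint.deviance, 2))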

Session Title: Meeting Needs of Multiple Stakeholders in a High-Scrutiny Multi-site Evaluation: Evaluation of the Communities Putting Prevention to Work (CPPW) Initiative
Panel Session 867 to be held in Lone Star F on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Abstract: The Centers for Disease Control and Prevention (CDC) has dedicated $650 million in American Recovery and Reinvestment Act (ARRA) funds to a large-scale initiative, Communities Putting Prevention to Work (CPPW). The goal is to implement supportive policies, systems, and environments in states and communities that will drive changes in behavior to reduce risk factors and prevent or delay chronic disease. Funded recipients—all states and 44 selected communities—have 24 months to implement these strategies and to accomplish the intended policy and environmental change outcomes related to each strategy; in addition, states received funds to improve their tobacco quitlines and enhance media related to tobacco cessation. In this session, presenters will discuss the multi-faceted evaluation approach, the stakeholders, and how different stakeholder needs have been reconciled in the design. Other presentations will discuss the challenges and implementation of two facets of the CPPW effort—the policy/environmental change component and the quitline component.
On Rowing in the Right Direction: Creating an Evaluation Design for the CPPW Initiative
Tom Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov
The evaluation of CPPW has many components, employing both quantitative and qualitative evaluation methods and recipient-, as well as aggregate-level, analysis. Core to the evaluation are performance measures to monitor state and community success in implementing strategies, the status of their intended policy or environmental changes, and, in the case of the quitline, increases in calls and callers. In addition, selected case studies will be conducted to determine the context of successful implementation and the factors that affect differential outcomes. Like all ARRA efforts, CPPW is a high-scrutiny initiative, with multiple government and public stakeholders. The many stakeholders bring diverse opinions about the intent of CPPW efforts, the outcomes that constitute “success”, and the timeframe within which success should be achieved. This session will discuss the challenges in developing an evaluation approach, and the ways in which the different needs of these stakeholders have been reconciled in the evaluation design.
Sense and Chaos in Multi-site Evaluations
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Programs are often simultaneously encouraged to develop innovative, context-specific strategies while providing information that can be synthesized across sites. With this juxtaposition in mind, how do we evaluate program implementation and results toward improved practice while simultaneously meeting the needs of stakeholders and funded sites? A methodology that focuses on generating information for accountability and program improvement in widely varied settings is an ideal approach because variation is not only permitted but celebrated. In this approach, the focus is on documenting and assessing a continuum of change and the linkage of program implementation to desired outcomes. It is through collaboration that successful designs can be co-created, allowing for the generation of information that supports cross-site analysis and furthers exploration. This approach is ideal for identifying promising practices and encouraging adaptation to program context, and it helps make sense of what can often be chaos.
Navigating the Complexities of Evaluating Quitlines: Design and Methodology
Lei Zhang, Centers for Disease Control and Prevention, fpv4@cdc.gov
Because tobacco cessation quitlines vary in capacity, services, and contexts, evaluating quitlines in multiple states can be challenging. CDC’s Office on Smoking and Health is in the process of developing a National Quitline Data Warehouse (NQDW) that provides data for ongoing quitline evaluation and the evaluation of CDC’s ARRA expenditure on quitlines. The NQDW will standardize data collected by state quitlines using three questionnaires (based on the North American Quitline Consortium’s Minimum Data Set): an intake questionnaire, a 7-month follow-up questionnaire, and a quitline services questionnaire. The Intake and Follow-up questionnaires are administered to quitline callers and collect information on their tobacco use behavior and quit status. The Services questionnaire provides contextual information about the quitlines (e.g., type and availability of services, eligibility criteria) that is critical to the proper understanding and interpretation of Intake and Follow-up data. This presentation focuses on the design and methodology of the NQDW, and how it addresses the challenges in multi-site evaluations.
The Role of Technical Assistance and Training in Ensuring Evaluation Quality in a Multi-site, Multi-level Evaluation
Marti Engstrom, Centers for Disease Control and Prevention, cpu5@cdc.gov
The evaluation of CDC’s CPPW initiative is complex and challenging. Some challenges include the following: a) it is a large multi-site, multi-level initiative: all states and 44 selected communities were funded, b) funded sites are implementing selected policy and environmental change strategies that vary across sites, c) evaluation of policy and environmental change is inherently complex due to differences in context, d) an expectation that important outcomes will be observed within a very short amount of time, and e) a need for integration of utilization-focused (and bottom-up) evaluation into an accountability (and top-down) evaluation framework. In this challenging environment, technical assistance and training related to programmatic reporting, monitoring, and evaluation is critical to ensuring a high-quality and useful evaluation. This presentation will use the CPPW to provide an example of how technical assistance and training can be used to promote a high quality evaluation of a diverse, multi-site, multi-level initiative.

Session Title: Building Evaluation Capacity in Nonprofit Organizations Serving Lesbian, Gay, Bisexual and Transgender (LGBT) and HIV+ Clients
Panel Session 868 to be held in MISSION A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
Anita Baker, Anita Baker Consulting, abaker8722@aol.com
Discussant(s):
Anita Baker, Anita Baker Consulting, abaker8722@aol.com
Abstract: The Building Evaluation Capacity (BEC) program was initiated in the fall of 2006 by the Hartford Foundation for Public Giving’s Nonprofit Support Program (NSP). BEC is a multi-year program operating in two-year cycles. Each cycle is designed to provide comprehensive, long-term training and coaching to increase both evaluation capacity and organization-wide use of evaluative thinking for participating organizations. Through this session, the Evaluation Trainer and representatives from three trainee organizations will present details about what they learned through the actual evaluations they conducted while in training, how they have used their experiences to enhance evaluative thinking in their organizations, and why this is important for organizations serving LGBT and HIV+ clients.
Evaluative Thinking and Organizational Change at Latino Community Services
Anita Baker, Anita Baker Consulting, abaker8722@aol.com
Yvette Bello, Latino Community Services, ybello@lcs-ct.org
Erica Roggeveen, Latino Community Services, eroggeveen@lcs-ct.org
Latino Community Services (LCS) provides HIV/STD testing, prevention services, and services for people with HIV/AIDS. LCS has participated in the Building Evaluation Capacity (BEC) Initiative for four years, including initial training and two years as members of the BEC Alumni study group. LCS was able to leverage initial work in BEC into support from a grantmaker for a Program and Institutional Advancement Director position, filled by a key BEC team member. This allowed LCS both to use its enhanced internal evaluation capacity on multiple programs (including two that are publicly funded) and to continue efforts to integrate evaluative thinking into agency planning processes. The processes and decision-making related to maintaining and extending evaluation quality at LCS, as well as examples of how evaluation findings have been used, will be described and discussed in this session.
Learning From and About Evaluation at the Hartford Gay and Lesbian Health Collective
Jamie Bassell, Hartford Gay and Lesbian Health Collective, jamieb@hglhc.org
Linda Estabrook, Hartford Gay and Lesbian Health Collective, lindae@hglhc.org
Since its inception, the hallmark of the Hartford Gay and Lesbian Health Collective (HGLHC) has been the provision of quality services by professional and highly skilled staff and volunteers in a safe and welcoming environment, free of judgmental attitudes or prejudice. In 2009, HGLHC joined the BEC class of 2010 and recently completed its study of Safety Net, a program in which volunteers receive training to electronically send safer sex messages and HIV prevention education messages to their peers and social networks. The presenter will share details on how the program and evaluation were designed and undertaken, what HGLHC learned, and how it applied the findings. Ongoing benefits and challenges related to enhancing evaluation capacity will also be presented and discussed.
Learning From and About Evaluation at AIDS Project Hartford
Ed Paquette, AIDS Project Hartford, edp@aphct.org
AIDS Project Hartford (APH) is a private, non-profit organization dedicated to improving the quality of life of all people in Connecticut who are impacted by HIV/AIDS. In 2009, APH joined the BEC class of 2010 and recently completed its study of its Medical Case Management services. APH provides medical case management services to more than 600 HIV-positive clients each year in an effort to maximize clients’ health outcomes, as determined by stable or improved CD4 counts and viral loads, and to connect clients to and maintain them in medical care. The presenter will share details on how the evaluation was designed and undertaken, what APH learned, and how it applied the findings. Ongoing benefits and challenges related to enhancing evaluation capacity will also be presented and discussed.

Session Title: Evaluating Special Education Personnel Development Initiatives in Three Predominately Rural States: Emphasis on Fidelity of Implementation Measures
Multipaper Session 869 to be held in MISSION B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Special Needs Populations TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
David Merves,  Evergreen Educational Consulting LLC, david.merves@gmail.com
Discussant(s):
Patricia Gonzalez,  United States Department of Education, patricia.gonzalez@ed.gov
Measuring Fidelity of Implementation for Wisconsin’s State Personnel Development Grant Activities
Presenter(s):
James Frasier, University of Wisconsin, Madison, jfrasier@education.wisc.edu
Abstract: Session attendees will learn how Wisconsin is using online enumeration surveys to collect data from individuals within school-level work environments to inform fidelity of implementation within and across special education personnel development initiatives in over 150 locations statewide. Dr. Frasier is a Senior Researcher at the Center on Education and Work at the University of Wisconsin-Madison and is the Lead Evaluator of Wisconsin’s $1.4 million annual Office of Special Education Programs training grant (2002-present).

Session Title: Ensuring Quality in Our Work: Techniques Used by Independent Consultants
Multipaper Session 870 to be held in BOWIE A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Susan M Wolfe,  Susan Wolfe and Associates LLC, susan.wolfe@susanwolfeandassociates.net
Discussant(s):
Michelle Baron,  The Evaluation Baron LLC, michelle@evaluationbaron.com
Avoiding Getting in Over Your Head
Presenter(s):
Frederic Glantz, Kokopelli Associates LLC, fred@kokopelliassociates.com
Abstract: Nobody likes to turn down potential contracts. However, doing so may sometimes be necessary to avoid getting in over your head. Most independent consultants are probably pretty good at assessing their pre-existing time commitments before taking on new work. However, experience has shown that independent consultants are sometimes ill-equipped to conduct certain aspects of a potential evaluation contract. While most evaluators tend to be quite comfortable with qualitative data collection and analysis, this is often not the case with survey research and quantitative analysis. Unfortunately, many independent evaluators don’t know what they don’t know. This session is designed to help independent evaluators do a self-assessment of their skill set in survey research and quantitative analysis, especially as it pertains to impact evaluation. Topics covered include selecting the appropriate design, determining appropriate sample sizes, and determining the appropriate analysis. Real-world examples will be used as illustrations.

Session Title: Avoiding Evaluator Role Confusion: The Case of Evaluating a Complex National and Multi-state, Multi-partner Policy-Change Effort to Improve The Productivity of Higher Education
Think Tank Session 871 to be held in BOWIE C on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Melanie Hwalek, SPEC Associates, mhwalek@specassociates.org
Mary Grcich Williams, Lumina Foundation, 
Discussant(s):
Julia Coffman, Center for Evaluation Innovation, jcoffman@evaluationexchange.org
Gerri Spilka, OMG Center for Collaborative Learning, gerri@omgcenter.org
Patricia Patrizi, Evaluation Roundtable, patti@patriziassociates.com
Abstract: Recently, Lumina Foundation for Education began a large, multi-state and national effort to support policy change to improve productivity in higher education at several different levels and in many different ways. The foundation engaged many national partners to provide fiscal and programmatic oversight, technical assistance, a communications campaign, topic-relevant research, a Web portal for information sharing, and a national evaluation. Each of seven funded implementation states also has its own evaluator. Conducting a national evaluation of this work, with its many levels, layers, complexities, potential redundancies, and tensions, can challenge even the experts. This think tank will engage the audience in discussions with three seasoned policy change evaluators about what would be considered “best practice” for how the evaluation of this complex endeavor could be designed and managed so that it truly adds value and distinguishes the role of the evaluator from other data providers.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: The Mentor/Mentee Relationship: Perspectives and Suggestions for Maintaining Successful Relationships
Roundtable Presentation 872 to be held in GOLIAD on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Jennifer Morrow, University of Tennessee, Knoxville, jamorrow@utk.edu
Margot Ackermann, Homeward, margot.ackermann@gmail.com
Erin Burr, Oak Ridge Institute for Science and Education, erin.burr@orau.org
Krystall Dunaway, Eastern Virginia Medical School, dunawake@evms.edu
Abstract: In the proposed roundtable, an evaluation faculty member (mentor) and three of her former students who earned their Ph.D.s under her direction (mentees) will lead a discussion on the importance of mentorship in graduate school. Both the mentor and the mentees will offer suggestions to the audience on how to form and maintain a good mentor/mentee relationship, as well as discuss the benefits and possible pitfalls of this relationship. We will also engage the audience in an active discussion about what works and doesn’t work for them when it comes to mentoring or being mentored. Lastly, we will work together to come up with strategies for maintaining a successful mentor/mentee relationship.
Roundtable Rotation II: Evaluating A Rite of Passage Program as a Vehicle for Systemic Change in At-Risk Female Youths’ Attitudes and Beliefs
Roundtable Presentation 872 to be held in GOLIAD on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Kathryn Wilson, Western Michigan University, kathryn.a.wilson@wmich.edu
Mark Kirkpatrick, Western Michigan University, mark.c.kirkpatrick@wmich.edu
Abstract: This roundtable discussion will provide an overview of the Rite of Passage Program. Discussion will center on the advantages and disadvantages of the Rite of Passage program as an effective method for changing attitudes and beliefs in at-risk female youth. The discussion will present the theoretical framework for the evaluation model, as well as the design and methods of the new data sets used by the evaluation team. The various challenges faced by the evaluation team will be described, along with the strategies used to overcome them while assisting clients in reaching clarity about stated program objectives in order to determine program effectiveness, merit, and worth. In conclusion, the methodological and operational issues regarding the evaluation process will be examined and discussed.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: The Unfocused Focus Group: Evaluation Benefit or Bane?
Roundtable Presentation 873 to be held in SAN JACINTO on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Nancy Franz, Virginia Tech, nfranz@vt.edu
Abstract: Many program evaluators use focus groups as a method to collect evidence of process or impact data on programs. However, successful focus groups often rely on the skills of the facilitator. Some facilitators find working with focus groups to be a combination of science and art. This can be especially true when focus group conversation wanders from the interview protocol. This session will explore the benefit and bane of allowing focus groups to wander away from the protocol into uncharted territory. Personal experiences will be shared to spur discussion of potential best practices for this aspect of working with focus groups.
Roundtable Rotation II: eXtension Evaluation Community of Practice (CoP) Grows Up
Roundtable Presentation 873 to be held in SAN JACINTO on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Michael Lambur, Virginia Tech, lamburmt@vt.edu
Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
Abstract: The eXtension Evaluation Community of Practice, established early in 2009 by EE-TIG members, created a variety of online educational resources and established several streams of dialogue for community-based Extension educators and evaluation specialists. Formative evaluations of the CoP monthly webinar indicate that it fosters evaluation knowledge that is disseminated and applied with diverse populations. Simultaneously, informal feedback from diverse users of CoP resources (webinar, web sites, consultations) and reflection by CoP leaders on non-participation among the broader Extension community are yielding insights on how the CoP can build individual and organizational evaluation capacity. In response, CoP leaders are collaborating with eXtension and university distance education specialists and expanding the use of diverse technologies to educate and engage clients via self-directed and collaborative learning. This roundtable will update participants on CoP activities and responses and engage them in discussion on building evaluation capacity through online networks and resources.

Session Title: The Importance of Critical Thinking in Assessment in Higher Education
Multipaper Session 874 to be held in TRAVIS A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Leigh D'Amico,  University of South Carolina, kale_leigh@yahoo.com
Understanding Student Mastery of Higher Education Curriculum Standards
Presenter(s):
Leigh D'Amico, University of South Carolina, kale_leigh@yahoo.com
Grant Morgan, University of South Carolina, praxisgm@aol.com
Tammy Pawloski, Francis Marion University, tpawloski@fmarion.edu
Janis McWayne, Francis Marion University, jmcwayne@fmarion.edu
Abstract: Francis Marion University developed six Teaching Children of Poverty Standards to better align the teacher preparation curriculum with needs expressed by surrounding schools and to promote teacher retention and student achievement. These standards are embedded within more than 20 courses across the curriculum. To understand student mastery of these standards, a 48-item assessment is administered to students shortly before graduation. A bookmarking process was completed by experts in the field and allows faculty members to gauge proficiency in the area of Teaching Children of Poverty. Through this presentation, evaluators and faculty members will detail the process used to develop and score the assessment and describe how assessment results are used to inform curriculum and instruction. In addition, understanding of student mastery better enables evaluators to explore the relationship between proficiency in teaching children of poverty and factors such as teacher retention and student achievement.
Is Critical Thinking the All-Purpose Outcome in Higher Education?
Presenter(s):
John Stevenson, University of Rhode Island, jsteve@uri.edu
Abstract: How can evaluators in higher education work with administrators and faculty to select, implement, and learn from institution-level measures of crucial learning outcomes? Critical thinking is a pervasive choice, and this paper explores issues in its definition and measurement, drawing on experiences of one university along with the published literature.
Science, Technology, Engineering, And Mathematics (STEM) Evaluations: Best Practices from a Multi-site, Multi-national Research Program
Presenter(s):
Courtney Brown, Indiana University, coubrown@indiana.edu
Christina Russell, Indiana University, chriruss@indiana.edu
Abstract: This paper provides best practices for evaluating undergraduate STEM programs. A current literature review of STEM evaluations provides the context; however, the paper focuses on practice, using a long-term, multi-site, multi-national summer undergraduate STEM research initiative. Its purpose is to provide a reflective case narrative of how program evaluation was implemented, evolved, and improved over a four-year time span. Over this period a wide variety of evaluation techniques were utilized and refined in order to provide the individual programs, as well as the funder, with accurate, useful, and timely evaluative information. Best practices and lessons learned are offered for evaluating the entire program. The evaluation practices begin in the application process and continue through acceptance and participation and, subsequently, short-term and long-term follow-up.
Equity, Social Justice, and Quality in School Leadership Preparation: A Critical Self-Assessment to Build Criteria for Candidate Selection
Presenter(s):
Aarti P Bellara, University of South Florida, abellara@mail.usf.edu
Zorka Karanxha, University of South Florida, karanxha@usf.edu
Vonzell Agosto, University of South Florida, vagosto@usf.edu
Abstract: The policies and procedures for recruiting and selecting candidates for a Master’s Degree in Educational Leadership program are vital aspects that influence the diversity and subsequent quality of preparation of future administrators. This paper outlines a leadership department faculty’s current applicant selection process and the need that arose to assess it and create a systematic approach to the application and selection of master’s degree students. This self-assessment for equity incorporated a transformative lens to analyze the department’s program theory and to restructure the application and selection process to increase the critical mass of diverse candidates. Its aim was to critically examine processes and policies that may appear fair on the surface but in reality are unsystematic and driven more by demands for efficiency than by equity.

Session Title: Positionality Matters: Understanding Culture and Context From the Perspective of Key Stakeholders
Think Tank Session 875 to be held in TRAVIS B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Alyssa Na'im, Education Development Center, anaim@edc.org
Discussant(s):
Araceli Martinez Ortiz, Sustainable Future Inc, araceli@sustainablefuturenow.com
Shannan McNair, Oakland University, mcnair@oakland.edu
Carol Nixon, Edvantia Inc, carol.nixon@edvantia.org
David Reider, Education Design LLC, david@educationdesign.biz
Angelique Tucker Blackmon, Innovative Learning Concepts LLC, ablackmon@ilearningconcepts.com
Pam Van Dyk, Evaluation Resources LLC, evaluationresources@msn.com
Karen L Yanowitz, Arkansas State University, kyanowit@astate.edu
Abstract: This think tank will explore the challenges and best practices of working with and relating to stakeholders in program evaluation. Presenters will share their experiences with engaging stakeholders from various fields, disciplines, or populations. Central to this discussion will be the notion that various stakeholder groups bring a unique and often integrated culture (or way of doing things) and perspectives that should inform the evaluation process; evaluators must be equipped with certain knowledge and skills to navigate and facilitate understanding within and across stakeholder groups while ensuring and balancing standards of quality. This session poses two questions: 1) How do evaluators begin to understand the stakeholders’ perspectives involved in the program evaluation?; and 2) What role do or should stakeholders play in the design of the evaluation? Participants will be assigned to one of three stakeholder groups – decision makers, implementers, or recipients – to explore these questions in small group discussions.

Session Title: Practicing Culturally Responsive Evaluation: Graduate Education Diversity Internship (GEDI) Program Intern Reflections on the Role of Competence, Context, and Cultural Perceptions - Part I
Multipaper Session 876 to be held in TRAVIS C on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Michelle Jay,  University of South Carolina, mjay@sc.edu
Discussant(s):
Rita O'Sullivan,  University of North Carolina at Chapel-Hill, ritao@email.unc.edu
Designing a Culturally Responsive Evaluation Plan for the Race Matters Toolkit
Presenter(s):
Frances Carter, University of Maryland, Baltimore County, frances2@umbc.edu
Abstract: The Race Matters Toolkit (RMT) was designed by the Annie E. Casey Foundation to help organizations analyze and address racial inequities and produce racially equitable results and opportunities for disadvantaged children, families, and communities. By using the RMT, organizations and public systems are better prepared to make the case, shape the message, and do their work from the perspective that race matters. However, little is known about the impact of the RMT. This paper includes an evaluation plan based on Coffman’s framework for evaluating systems initiatives and results of pilot data from one component of the plan. With an evaluation plan designed for a culturally responsive initiative and by a culturally responsive evaluator of color, the goal of this study is to illustrate how a culturally responsive lens can be applied to systems initiative frameworks to assess the impact of the RMT.

Session Title: Translating Visitors' Experiences Through Evaluation
Multipaper Session 877 to be held in TRAVIS D on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Tara Pearsall,  Savannah College of Art and Design, tpearsal@scad.edu
Discussant(s):
Kirsten Ellenbogen,  Science Museum of Minnesota, kellenbogen@smm.org
Visitors Within and Across Art Museums: Creating a Baseline for Comparison While Building Capacity
Presenter(s):
Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
Abstract: Destination museums have a different relationship with visitors/members than do regional and local museums. Three regional art museums collaborated to evaluate and segment their audiences. The dominant questions asked related to visit expectations, comfort in a museum, engagement with this and other museums, engagement with other cultural and scientific institutions, a segmentation built around experience with museums, perceptions of this museum, and comparisons of this museum to all other art museums. The goals for the evaluator were 1) to help build capacity within the museums for future evaluation work; and 2) to provide baseline data against which each museum could evaluate change efforts. The presentation will examine the process of capacity building used in this project and then shift toward the findings of interest within each of the three museums as identified by each of the institutions. Finally, the paper will present representative data across the institutions.
Establishing a Framework for Evaluating Public Value at the Smithsonian's National Museum of Natural History
Presenter(s):
Bill Watson, Smithsonian's National Museum of Natural History, watsonb@si.edu
Mary Ellen Munley, MEM and Associates, maryellen@mem-and-associates.com
Shari Werb, Smithsonian's National Museum of Natural History, werbs@si.edu
Abstract: The Smithsonian’s National Museum of Natural History (NMNH) is the most visited museum in the United States and the largest natural history museum in the world. However, evaluation efforts at NMNH had been unsystematic until a recent initiative to develop an integrated, museum-wide approach to measuring public value. The museum’s leadership recognized that evaluation serving as a common thread through all stages of program development and implementation also provides the widest lens for understanding the impact of programs, exhibits, and websites, but it found implementation of that view to be challenging. This paper describes the process through which we developed a set of metrics and protocols for evaluating public value, presents samples of the protocols, and describes the framework being used.
Using the Depth of Learning Framework in Exhibit Evaluation at a New Science Center
Presenter(s):
Heather Harkins, Connecticut Science Center, hharkins@ctsciencecenter.org
Abstract: Three months after opening, staff at a new science center in the Northeast United States identified exhibits for systematic evaluation using Barriault’s Depth of Learning Framework. The framework is based on Falk and Dierking’s Contextual Model of Learning in informal settings. Twelve staff members from across the institution (educational programs, visitor services, exhibits) were trained by an in-house evaluator to conduct data collection, entry, and analysis. This paper reports on the process used to train this diverse group and lessons learned regarding the application of the Depth of Learning Framework. The presentation will highlight how the findings of the team benefit the work of exhibit evaluation by demonstrating how to effectively apply a “noninterventionist observational framework” (Rennie, 2007). Reference: Rennie, L. J. (2007). Learning science outside of school. In S. K. Abell & N. G. Lederman (Eds.), Handbook of research on science education (pp. 125-167). Mahwah, NJ: Erlbaum.

Session Title: Community-Derived Research Partnerships: Working Together to Improve Human Services
Panel Session 878 to be held in INDEPENDENCE on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Cynthia Flynn, University of South Carolina, cynthiaf@mailbox.sc.edu
Discussant(s):
Diana Tester, South Carolina Department of Social Services, diana.tester@dss.sc.gov
Abstract: Using a new model, Community-Derived Research and Evaluation, the Center for Child and Family Studies in the College of Social Work at the University of South Carolina has re-envisioned the traditional process in which universities identify the evaluation and research questions. With the Community-Derived Research model, the research and evaluation questions come directly from our partner, the SC Department of Social Services. Working side by side, the partners develop and implement the evaluation study, and findings are immediately applied to practice and policy. Three papers describing evaluation projects conducted as part of this partnership are discussed by research faculty at the Center. Each presentation highlights how the Center and the state agency work together. The research director at our partner agency will serve as discussant, describing the partnership from the agency perspective for each of the projects presented.
Establishing a Professional Development Consortium of Social Work Programs in South Carolina: Needs Assessment for Stakeholder Engagement
Dana DeHart, University of South Carolina, danad@mailbox.sc.edu
The South Carolina Department of Social Services (DSS) has partnered with seven colleges and universities throughout the state to create a Professional Development Consortium (SCPDC). The purpose of the consortium is to provide training and professional development to new and current DSS employees to ensure that best practices are taught and followed. The Center for Child and Family Studies at the University of South Carolina conducted a needs assessment consisting of 50 interviews with key stakeholders at DSS and with those involved in the social work programs at the colleges and universities. This part of the panel will focus on the results of the needs assessment, specifically how to establish a collaborative partnership and how to engage those stakeholders to promote the quality of future education.
Connecting for Kids: Navigating Community Partnerships for Family-finding Services
Suzanne Sutphin, University of South Carolina, sutphist@mailbox.sc.edu
South Carolina Connecting for Kids is a three-year federal grant awarded to the South Carolina Department of Social Services (DSS). The Center for Child and Family Studies at the University of South Carolina wrote the evaluation plan and is now conducting the evaluation. The grant consists of two parts: (1) kinship navigator services, which give relatives caring for children in open treatment cases access to needed community resources, and (2) a family finding service to aid foster youth ages 16-18 in establishing permanent family connections before they leave the foster care system. Many other community partners, such as the SC Association of Children’s Homes and Family Services and the SC Guardian ad Litem program, are working with DSS to provide the described services. This part of the panel will focus on conducting an evaluation while navigating relationships among multiple community partners and will present preliminary findings from both the navigator and family finding services.
Family Group Conferencing for Children in Foster Care: Assessing the Effectiveness of a New Model for Engaging Families
Cynthia Flynn, University of South Carolina, cynthiaf@mailbox.sc.edu
The South Carolina Department of Social Services (DSS), in collaboration with Casey Family Programs, is implementing Families First, a family group decision making model that allows families to plan for their children’s care and protection. The goal of this approach is to develop a plan that addresses safety, permanence, and well-being for children in foster care. Individuals from the South Carolina Association of Children’s Homes and Family Services and other organizations contract with DSS to facilitate these family group conferences. The Center for Child and Family Studies in partnership with DSS is conducting an evaluation to assess the effectiveness of the South Carolina Family Group Conference model. This part of the panel will focus on using evaluation to make informed decisions regarding the implementation of new programs. Preliminary findings from the evaluation will also be reported.

Session Title: Online Learning in Adult and Postsecondary Education: Theory and Practice
Multipaper Session 879 to be held in PRESIDIO A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Talbot Bielefeldt,  International Society for Technology in Education, talbot@iste.org
Pushing Buttons and Breaking Code: Evaluating Instructional Technology Pilots in Higher Education
Presenter(s):
Joel Heikes, University of Texas, Austin, joel.heikes@austin.utexas.edu
Stephanie Corliss, University of Texas, Austin, stephanie.corliss@austin.utexas.edu
Erin Reilly, University of Texas, Austin, erin.reilly@austin.utexas.edu
Abstract: Introducing a new instructional technology into any learning environment is a complex process involving many stakeholders who often have highly divergent interests—technicians wanting technology to work perfectly, administrators seeking cost savings, instructors demanding efficiency, and students expecting easy use. Add into this mix the evaluator’s desire to produce a quality evaluation, and the potential for stressful outcomes is heightened. Based on our experiences evaluating instructional technology pilots over the past six years, we identify some common challenges, stunning successes, and lessons learned to develop a model for conducting quality instructional technology evaluation in a higher education context.
Real World, Real Use: The Impact of Integrating Student-Centered Learning in Adult Online Instruction in Mathematics and Science
Presenter(s):
Jane A Rodd, State University of New York at Albany, jr937855@albany.edu
Dianna L Newman, State University of New York at Albany, dnewman@uamail.albany.edu
Patricia J Lefor, Empire State College, pat.lefor@esc.edu
Abstract: This paper documents evaluation findings on the integration of constructivist methods into technology-supported distance learning with adult learners in the fields of mathematics and science. The overarching goal of the project was to promote content relevancy by creating authentic learning experiences for students. To achieve this goal, selected existing online courses were modified, and new courses were developed, to reflect a more problem-based approach to learning relevant to students’ lives and careers. A multi-phased, mixed-methodology evaluation design was developed and utilized to support the objectives, which were to evaluate the ability to serve diverse students, changes in course-related affect, and changes in course-related content knowledge. Participants were 1,458 (Math, n=938; Science, n=520) adult learners enrolled in mathematics and science courses offered online to undergraduate students at a 4-year public college. Results indicated that students made significant gains in content, transfer of content, content-specific affect, and generalized learning affect.
Evaluation of an Online Training Program for Informal Science Educators: The FETCH! with Ruff Ruffman Program
Presenter(s):
Christine Paulsen, Concord Evaluation Group LLC, cpaulsen@concordevaluation.com
Christopher Bransfield, Concord Evaluation Group LLC, cbransfield@concordevaluation.com
Abstract: This paper describes an evaluation of an online training program developed with National Science Foundation funding (WGBH’s FETCH! with Ruff Ruffman). The program was designed for informal educators leading science activities with elementary-age kids (including after-school providers, teachers, camp counselors, librarians, and museum staff). We used a treatment-control group, pre- and post-test design with random assignment. Fifty-four programs from across the country participated in the study. Findings show that the program helped leaders to be more prepared and more comfortable leading hands-on science activities with kids; it enhanced leaders’ ability to convey science concepts and processes; and it enhanced leaders’ ability to engage kids and get them excited about doing science activities. This paper will present a summary of key findings regarding program impact, as well as a summary of methodological lessons learned, with a discussion of the implications for other evaluators performing evaluations in informal learning environments.

Session Title: Increasing Evaluation Capacity Through Different Levels of Training and Support
Multipaper Session 880 to be held in PRESIDIO B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Stephanie Evergreen,  Western Michigan University, stephanie.evergreen@wmich.edu
Discussant(s):
Stanley Capela,  HeartShare Human Services of New York, stan.capela@heartshare.org
Improving the Quality of Program Evaluation Reporting: A Capacity Building Event in California's Tobacco Control Programs
Presenter(s):
Jeanette Treiber, University of California, Davis, jtreiber@ucdavis.edu
Abstract: The Tobacco Control Evaluation Center (TCEC) at UC Davis supports more than 100 grantees, comprising county health departments and local organizations throughout California involved in local tobacco policy work. TCEC uses training and on-demand technical assistance to build the evaluation capacity of these grantees, which have varying levels of evaluation expertise. It also scores and provides feedback on grantees’ three-year final evaluation reports. The evaluation reports of the funding cycle that ended in 2007 showed large gaps in proper analysis and reporting. For this reason, TCEC increased its analysis and report-writing training effort in 2010, before the final evaluation reports of the next funding cycle were due. Capacity building occurred through a threefold effort: regional one-day trainings at various sites in California, a live webinar that was recorded and posted on the TCEC website, and on-demand one-on-one technical assistance via phone. This paper relates the capacity building activities and their results by tracking the extent of grantee participation in these capacity building events, by analyzing training satisfaction surveys, and by comparing final evaluation report scores of 2007 and 2010.
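As a purely illustrative aside (not TCEC’s data or analysis code), the comparison of 2007 and 2010 final evaluation report scores described above could be summarized with a simple two-group test; all scores below are invented for the sketch.

```python
import numpy as np
from scipy import stats

# Hypothetical final-report quality scores (0-100) for two funding cycles.
scores_2007 = np.array([62, 55, 70, 48, 66, 59, 73, 51])
scores_2010 = np.array([71, 64, 78, 60, 75, 69, 80, 63])

# Compare mean report quality before and after the expanded training effort.
t_stat, p_value = stats.ttest_ind(scores_2010, scores_2007, equal_var=False)
print(f"Mean 2007 = {scores_2007.mean():.1f}, mean 2010 = {scores_2010.mean():.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```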
Am I My Brother's Keeper? Coaching Communities: An Evaluation of Process and Outcome
Presenter(s):
Gina Weisblat, Cleveland State University, g.weisblat@csuohio.edu
Abstract: Coaching communities is a practice in which a single coach (for organizations, peer groups, or students) creates communal goals and achieves them through the sharing of ideas and practices. This paper will use the Success Case Method (SCM) to evaluate differences in training models, from traditional professional development to coaching communities created within one organization across multiple sites. SCM is designed to help organizations leverage training for learning and performance improvement. This study will investigate organizational change and culture over a three-year period, using data from prior staff experiences as a comparison to the coaching community model. Specific outcomes will be addressed: perception change, use of existing assets, development of social capital between staff and program, dependence on an outside coach versus an internal coaching community, and development of a pedagogical approach.
Evaluation Capacity Building in the Corporate Sector: Using eLearning and Traditional Methods to Increase Organizational Capacity
Presenter(s):
Michele Graham, KPMG LLP, magraham@kpmg.com
John Mattox, Knowledge Advisors, jmattox@knowledgeadvisors.com
Peter Sanacore, KPMG LLP, psanacore@kpmg.com
Abstract: This paper demonstrates how eLearning programs can enhance evaluation capacity building (ECB) efforts. It describes how the learning measurement group within a large private corporation used an ECB model to facilitate organizational learning related to training evaluation. Cost-effectiveness was important given the resource constraints of the measurement group, a flagging economy, and diverse audience needs; thus, traditional ECB methods alone could not be used. Therefore, the ECB model focused on a blended learning curriculum designed to increase knowledge and skills for performing training evaluation with a focus on quality methods. The blended solution included eLearning, instructor-led training, technical assistance, use of technology, and written materials. Measures of success included knowledge gain, performance of evaluation tasks, and development of an evaluation culture. Findings indicate gains in capacity were achieved largely because eLearning provided a foundation of knowledge transfer; results revealed important successes with regard to organizational learning by using the blended model.

Session Title: Evaluating a National Medicaid Children's Mental Health Demonstration Grant Program
Multipaper Session 881 to be held in PRESIDIO C on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Garrett Moran,  Westat, garrettmoran@westat.com
Overview of the Home and Community-based Services Demonstration Waiver Grant Program
Presenter(s):
Oswaldo Urdapilleta, IMPAQ International, ourdapilleta@impaqint.com
Effie George, United States Department of Health and Human Services, effie.george@cms.hhs.gov
Ron Hendler, United States Department of Health and Human Services, ronald.hendler@cms.hhs.gov
Abstract: With authorization from the Deficit Reduction Act of 2005, CMS awarded 5-year grants totaling $217 million to 10 states (MS, VA, KS, MT, SC, IN, AK, MD, FL, and GA). The Medicaid Community-based Alternatives to Psychiatric Residential Treatment Facilities Demonstration Grant Program is currently being implemented in nine states. The goal is to test the efficacy and cost neutrality of this home and community-based waiver program. Client enrollment started in 2008, but some states were slow to make the program operational. Following a wraparound services model, states are including an array of services such as respite care, peer supports, psychosocial rehabilitation, family therapy, mental health services, and crisis intervention. Client age groups vary across states, as does the emphasis on diversion from, or enrollment following, institutional care. All states are collecting a common minimum data set, and five states have included comparison groups in their local evaluation designs.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Adaptations to Evaluation Design: Two Examples of Ensuring Quality in Practice
Roundtable Presentation 882 to be held in BONHAM A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Stephanie Feger, Brown University, stephanie_feger@brown.edu
Elise Laorenza, Brown University, elise_laorenza@brown.edu
Abstract: Often in external evaluation, quality is initially established through development of an appropriate design, selection of instruments, and planning for data collection and analysis. However, evaluation plans often require adaptations based on program modifications. Evaluation adaptations are frequently needed to clarify key program components discovered over the course of implementation and also to determine benchmarks of program impact. Evaluation adaptations can improve evaluation relevance, an indicator of overall evaluation quality, and provide new tools and/or data to identify program activities that effectively contribute to program goals and outcomes. Through discussion of two evaluation studies of statewide science programs, this roundtable will explore: (1) the development of a student reflection instrument as an evaluation adaptation in the context of program benchmarks; (2) the process for aligning evaluation adaptations with original methods and the integration of results; and (3) the utilization of evaluation adaptations to support program goals and improve program impact.
Roundtable Rotation II: Assessing Program Implementation in Multi-site Educational Evaluations: The Development, Alignment, And Incorporation of Evidence-based Rubrics in Rigorous Evaluation Design
Roundtable Presentation 882 to be held in BONHAM A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Amy Burns, Brown University, amy_burns@brown.edu
Tara Smith, Brown University, tara_smith@brown.edu
Abstract: This roundtable will share a multi-step approach to program implementation assessment developed by the Education Alliance at Brown University. Presenters will provide examples from rigorous evaluations of four districts which received federal Magnet School Assistance Program funding to describe: the development of implementation rubrics that align with districts’ logic models; data sources for evidence-based measures that are used to identify compliance with logic models; rubric scoring processes; and the incorporation of these rubric data into multivariate statistical models. The presenters will promote discussion with the roundtable group on ways to address challenges in “quantifying” implementation data.

Session Title: 2010 Report: State of Evaluation Practice and Capacity in the Nonprofit Sector
Demonstration Session 883 to be held in BONHAM B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Johanna Morariu, Innovation Network, jmorariu@innonet.org
Myia Welsh, Innovation Network, mwelsh@innonet.org
Lily Zandniapour, Innovation Network, lzandniapour@innonet.org
Abstract: Nonprofits hear a lot of talk about evaluation, and everyone seems to want evaluation results. But there’s a gap in the conversation: What are nonprofits really doing to evaluate their work? How are they using evaluation results? Innovation Network seeks to answer these questions through the first project that seeks to systematically and repeatedly collect data from nonprofits about their evaluation practices: the State of Evaluation project. In this session Innovation Network will present findings from the 2010 State of Evaluation project. Findings will be drawn from national survey data, and will discuss nonprofit evaluation practices, evaluation capacity, challenges to evaluation, and recommendations for strengthening evaluation practices. Results shared in this session will build understanding for nonprofits (e.g. comparison with peers), for donors and funders (e.g. understand capacity of grantees to collect and use data), and for evaluators (e.g. inform design of evaluation projects re: existing evaluation practices and capacities).

Session Title: Evaluating the State of Charter Schools and Public Schools of Choice
Multipaper Session 884 to be held in BONHAM C on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Juanita Lucas-McLean,  Westat, juanitalucas-mclean@westat.com
Discussant(s):
Antionette Stroter,  University of Iowa, a-stroter@uiowa.edu
School Choice and its Impacts on Student Well-being and Academic Achievement in Post-Katrina New Orleans
Presenter(s):
Paul Hutchinson, Tulane University, phutchin@tulane.edu
Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
Nathalie Ferrell, Tulane University, natferrell@gmail.com
Marsha Broussard, Louisiana Public Health Institute, mbroussard@lphi.org
Sarah Kohler, Louisiana Public Health Institute, skohler@lphi.org
Abstract: The New Orleans school system post-Katrina has followed the national trend of adopting a “school choice” policy, permitting students to apply to any Orleans Parish public school. While many students attend their closest public school, others travel out of their neighborhood in hopes of better educational opportunities. This study examines data from the 2009 School Health Connection Survey, which focused on experiences with violence, drug and alcohol use, sexual behaviors, and scholastic achievement. We estimate the effects of school choice on academic performance and student mental well-being, controlling for potential confounding from non-random school choice, i.e., students opting to bypass their closest school may differ in important unmeasurable characteristics (e.g., motivation, supportive environments) that may bias estimates of school choice impacts. The results show that “traveling” students perform better scholastically and are less prone to fears about their safety or to suicidal feelings than students attending their “home neighborhood” school.
National Evaluation of Knowledge Is Power Program (KIPP) Middle Schools: Impacts on Student Achievement
Presenter(s):
Brian Gill, Mathematica Policy Research, bgill@mathematica-mpr.com
Philip Gleason, Mathematica Policy Research, pgleason@mathematica-mpr.com
Ira Nichols-Barrer, Mathematica Policy Research, inichols-barrer@mathematica-mpr.com
Bing-ru Teh, Mathematica Policy Research, bteh@mathematica-mpr.com
Christina Tuttle, Mathematica Policy Research, ctuttle@mathematica-mpr.com
Abstract: What is the effect of the Knowledge is Power Program (KIPP) network of charter schools on student achievement? In this study, the first rigorous evaluation of a national sample of KIPP middle schools, we employ two analytic methods to address this question. First, we perform ordinary least squares regressions on students in districts in which the KIPP schools are located, controlling for prior test scores and demographic characteristics. Second, we use propensity score matching techniques to identify a particular comparison group of non-KIPP students within those districts to compare to the sample of students who attended KIPP. Incorporating up to seven years of longitudinal state assessment data, we estimate the effect of 20 different KIPP schools on reading and math scores one, two, three, and four years after students first attended KIPP.
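Since the abstract above pairs regression adjustment with propensity score matching, a minimal illustrative sketch of nearest-neighbor matching on an estimated propensity score is shown below. It is not the presenters’ code or data; the data frame, column names, and model are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical student-level data: KIPP attendance flag, prior score, a covariate, and outcome.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "kipp":        rng.binomial(1, 0.3, 1000),
    "prior_score": rng.normal(0, 1, 1000),
    "low_income":  rng.binomial(1, 0.6, 1000),
    "post_score":  rng.normal(0, 1, 1000),
})
covariates = ["prior_score", "low_income"]

# Step 1: estimate each student's propensity to attend KIPP.
ps_model = LogisticRegression().fit(df[covariates], df["kipp"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# Step 2: match each KIPP student to the nearest non-KIPP student on the score.
treated = df[df["kipp"] == 1]
control = df[df["kipp"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# Step 3: estimate the effect as the mean outcome difference in the matched sample.
att = treated["post_score"].mean() - matched_control["post_score"].mean()
print(f"Matched estimate of the KIPP effect: {att:.3f}")
```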
Lottery-based Estimates of Charter School Impacts and Factors Related to Impacts
Presenter(s):
Christina Clark Tuttle, Mathematica Policy Research, ctuttle@mathematica-mpr.com
Philip Gleason, Mathematica Policy Research, pgleason@mathematica-mpr.com
Melissa Clark, Mathematica Policy Research, mclark@mathematica-mpr.com
Abstract: In this study, conducted in 36 charter middle schools across 15 states, we compare outcomes of students who applied and were admitted to these schools through randomized admissions lotteries with the outcomes of students who also applied to these schools and participated in the lotteries but were not admitted. This analytic approach produces the most reliable impact estimates. But because the study could only include charter middle schools that held lotteries, the results do not necessarily apply to the full set of charter schools in the United States. Key findings from the evaluation include (1) the impact of charter schools on student achievement, behavior, and student and parent satisfaction; (2) the variation in those impacts across schools; and (3) factors related to impacts.
The Art and Science of Picking Comparison Schools
Presenter(s):
Agata Jose-Ivanina, ICF Macro, agata.jose-ivanina@macrointernational.com
Helene Jennings, ICF Macro, helene.p.jennings@macrointernational.com
Abstract: In quasi-experimental studies of educational interventions, one of the greatest challenges facing evaluators is the selection of a closely matched comparison group. Evaluators often have to balance the use of rigorous selection methodologies with additional available information and feedback from clients and school-based personnel about other factors to take into consideration. In this paper, we discuss our experience selecting comparison schools for a study of charter schools for the Maryland State Department of Education. To identify comparison schools, we first applied the quantitative needs statistic method used by the New York State Education Department (NYSED). After obtaining a number of possible matches, we made an informed choice that took into account other relevant contextual factors. As a result, we were able to develop a methodology that was part science, in that it was based on a rigorous and consistent approach, and part art, in that it reflected the school environment.

Session Title: Cost Studies in Health Care
Multipaper Session 885 to be held in BONHAM D on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Mustafa Karakus,  Westat, mustafakarakus@westat.com
Nadini Persaud,  University of the West Indies, npersaud07@yahoo.com
Always Late: Why Some Multilateral Development Bank Projects Delay so Much While Others do Not?
Presenter(s):
Guy Blaise Nkamleu, African Development Bank, b.nkamleu@afdb.org
Abstract: The influence and importance of time delays in project performance emphasize the need for a systematic effort to understand why some projects are delayed so much. This study attempts to identify the characteristics of a project that affect its probability of experiencing start-up delays. Statistical and econometric analyses of a dataset of more than 500 projects reveal that delays at project start-up are prominent and a potential bottleneck. Half of the time delay is due to the delay between commitment and loan effectiveness. Multinational projects experienced shorter delays. The smaller the operation, the greater the probability of experiencing long delays. The longer the planned implementation period, the higher the start-up delay. Projects with many components have a lower probability of experiencing delays. The paper concludes by outlining a number of implications for effective strategies to mitigate the long delays encountered throughout the project cycle of international development operations.
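Because the abstract describes econometric analysis of which project characteristics raise the probability of long start-up delays, a minimal illustrative logit specification is sketched below; the variables and data are invented for the example and are not the author’s dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical project-level data loosely mirroring the characteristics discussed.
rng = np.random.default_rng(0)
n = 500
projects = pd.DataFrame({
    "long_delay":    rng.binomial(1, 0.4, n),   # 1 = long start-up delay
    "size_musd":     rng.lognormal(3, 1, n),    # operation size, USD millions
    "planned_years": rng.integers(2, 8, n),     # planned implementation period
    "n_components":  rng.integers(1, 6, n),     # number of project components
    "multinational": rng.binomial(1, 0.2, n),   # 1 = multinational project
})

# Logit model of the probability of experiencing a long start-up delay.
model = smf.logit(
    "long_delay ~ np.log(size_musd) + planned_years + n_components + multinational",
    data=projects,
).fit()
print(model.summary())
```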
Comprehensive Economic Evaluation of a Colorectal Cancer Screening Demonstration Program: A Multi-site, Multi-level Cost Analysis
Presenter(s):
Maggie Cole Beebe, RTI International, mbeebe@rti.org
Sujha Subramanian, RTI International, ssubramanian@rti.org
Florence Tangka, Centers for Disease Control and Prevention, ftangka@cdc.gov
Sonja Hoover, RTI International, shoover@rti.org
Abstract: In 2005, the CDC started a 3-year colorectal cancer (CRC) screening demonstration project at five sites. As part of an overall program evaluation, we completed a comprehensive economic evaluation of the demonstration. We examined program-level cost data collected with a cost assessment tool (CAT) as well as annual program reimbursement data (PRD) on the clinical costs of CRC screening. The CAT was designed to collect data on the start-up period and the annual costs of maintaining a CRC screening program. It was tailored to the demonstration so that programs could assign costs to various program activities. In most cases, the PRD were extracted from a program’s billing database. These two types of cost data allow us to analyze the start-up costs for each program, the costs to each program of providing CRC screening and diagnosis, and the variation in the distribution of costs among the key program components. This analysis provides insight into how best to allocate future funds.
Exploring the Economics of Quality Improvement Education in Healthcare
Presenter(s):
Daniel McLinden, Cincinnati Children's Hospital Medical Center, daniel.mclinden@cchmc.org
Stacey Farber, Cincinnati Children's Hospital Medical Center, stacey.farber@cchmc.org
Abstract: What are the economics associated with a program intended to influence large-scale organizational change in a healthcare setting? This work reports on an exploration of the economic linkages between the resources invested and the benefits obtained. The target of the evaluation is a training program intended to develop quality improvement skills among participants in a medical center. The economic evaluation applies utility analysis to value the costs of the program and to estimate the benefit as the value of a trained individual. While this approach to evaluating the economics of training has not been widely used, it does offer a methodological approach that complements other economic methods. Of additional interest is the extension and validation of utility analyses by quantifying the linkage between interventions with learners and the impact of large-scale change.
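One common way to operationalize the utility analysis mentioned above is a Brogden-Cronbach-Gleser style calculation, utility = N x T x d x SDy - C. The sketch below is illustrative only; every figure is invented rather than drawn from the program being evaluated.

```python
# Illustrative utility analysis of a quality improvement training program.
def training_utility(n_trained, years, effect_size_d, sd_y_dollars, total_cost):
    """Brogden-Cronbach-Gleser style estimate: N * T * d * SDy - C."""
    return n_trained * years * effect_size_d * sd_y_dollars - total_cost

estimate = training_utility(
    n_trained=120,        # staff completing the course (invented)
    years=2.0,            # assumed duration of the performance effect
    effect_size_d=0.4,    # standardized gain on performance measures
    sd_y_dollars=15_000,  # dollar value of 1 SD of job performance
    total_cost=250_000,   # development + delivery + participant time
)
print(f"Estimated net utility of training: ${estimate:,.0f}")
```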
Evaluating Directions Home: A Cost-Benefit Study of Supportive Housing for People Who Are Homeless in Fort Worth, Texas
Presenter(s):
James Petrovich, University of Texas, Arlington, james.petrovich@mavs.uta.edu
Emily Spence-Almaguer, University of Texas, Arlington, spence@uta.edu
Abstract: This study evaluated the use of critical service systems by people who are homeless before and after they were placed in permanent supportive housing in Fort Worth, Texas. Participants were asked to allow researchers from the University of Texas at Arlington School of Social Work access to records from local medical, emergency medical, and mental health / substance abuse service providers. Criminal justice data was also obtained from the Fort Worth Police Department to determine the level of involvement with the criminal justice system before and after housing. While not an unprecedented study, this project sought to provide a localized assessment of utilization trends and the fiscal costs of critical service use before and after being placed in housing. It also sought to contribute to a developing national body of knowledge regarding the efficacy and efficiency of permanent supportive housing.

Session Title: Improving Proposals and Programs by Improving Peer and Stakeholder Review
Multipaper Session 886 to be held in BONHAM E on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
George Teather,  George Teather and Associates, gteather@sympatico.ca
Using Fishbone Analysis to Improve the Quality of Science and Technology Program Proposals
Presenter(s):
Shan-Shan Li, National Applied Research Laboratories, ssli@mail.stpi.org.tw
Ling-Chu Lee, National Applied Research Laboratories, lclee@mail.stpi.org.tw
Abstract: This paper discusses how the “fishbone diagram” can help S&T program managers and staff improve the evaluation quality of S&T program proposals. By using the tool, they can make sound connections between problems, goals, objectives, and measurable indicators. The paper has several parts: (1) establishing the importance of the fishbone diagram in program planning and in improving the quality of proposals; (2) explaining the essence of using the fishbone diagram by combining AusAID’s logical framework approach (LFA) with the MECE principle from the Minto Pyramid Principle; (3) developing seven steps from fishbone diagram to reverse fishbone diagram, and involving stakeholders in the process; and (4) using one of Taiwan’s S&T program proposals as an example to demonstrate the usage of the fishbone diagram. In this manner, the paper emphasizes that fishbone analysis is a necessary program planning tool for producing significant outputs, outcomes, and even impacts in the future.
Improving the Professionalism of Peer Review Panels in Research and Development (R&D) Evaluation: The Korean Case
Presenter(s):
Chan Goo Yi, Pukyong National University, changoo@pknu.ac.kr
Boojong Gill, Korea Institute of Science & Technology Evaluation and Planning (KISTEP), kgjok@kistep.re.kr
Abstract: The R&D evaluation system in Korea is regarded as well established, to the extent that Korea has provided consultation to Southeast Asian countries, including Vietnam. However, in contrast to the institutionalization of the evaluation system, stakeholders have not been satisfied with the R&D evaluation itself. The scientists being evaluated do not trust R&D evaluation results because of their perceived low professionalism, while the governments commissioning the evaluations are reluctant to actively use the findings due to that same low professionalism and the lack of detailed recommendations. Thus, the negative perception of professionalism in R&D evaluation may lower the credibility of evaluation and lead to its underutilization, which, in turn, can deepen distrust of R&D evaluation itself. For this reason, this work aims to identify the critical factors behind low professionalism in Korean R&D evaluation and to discuss how to solve these problems.
Interactive Heuristic Reviewing Mechanism: A New Method of Assessing Exploratory Pioneering Research Projects for National Nature Science Foundation of China (NSFC)
Presenter(s):
Yue Wang, Chinese Academy of Sciences, wy71800@yahoo.com.cn
Xiaoxuan Li, Chinese Academy of Sciences, xiaoxuan@casipm.ac.cn
Jianzhong Zhou, Chinese Academy of Sciences, jzzhou@casipm.ac.cn
Yonghe Zheng, National Nature Science Foundation of China, zhengyonghe@gmail.com
Guoxiang Xiong, Chinese Academy of Sciences, gxxiong@cashq.ac.cn
Abstract: In this proposal, an interactive heuristic reviewing mechanism is proposed for the National Nature Science Foundation of China (NSFC) to assess exploratory pioneering research projects. First, exploratory pioneering research projects are defined and their characteristics are summed up; the problems of existing reviewing mechanisms at NSFC are also analyzed. Second, several typical mechanisms used by science funding agencies in other countries to assess related research projects are analyzed and compared. Finally, based on this preliminary research and taking mechanism design theory as macro theoretical guidance, an interactive heuristic reviewing mechanism is presented, including the main principles of the assessment method, the model of the reviewing mechanism, and its specific process. This new mechanism, which provides an instant interactive environment for both reviewers and applicants, operates in a proactive mode to efficiently identify projects worthy of NSFC funding.

Session Title: Straw, Bricks, Construction: Improving Quality of Education Data, Performance Measures, and Evaluation to Enhance Student Achievement, Reduce Gaps and Increase College Access and Retention
Multipaper Session 887 to be held in Texas A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Government Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Stephanie Shipman,  United States Government Accountability Office, shipmans@gao.gov
EDFacts Data Quality, Usability, and Limitations
Presenter(s):
Marian Banfield, United States Department of Education, marian.banfield@ed.gov
Margaret Cahalan, United States Department of Education, margaret.cahalan@ed.gov
Abstract: EDFacts has been described as a “U.S. Department of Education initiative to put performance data at the center of policy, management and budget decisions for all K-12 educational programs.” EDFacts combines performance data supplied by K-12 state education agencies (SEAs) with additional data sources, including information on federal program funding. ED’s Performance Information Management Service (PIMS) and the Policy and Program Studies Service (PPSS) are collaborating to increase the usability of the data in analysis, performance measurement, and decision-making at the federal, state, and local levels. This paper describes the EDFacts system and data, and efforts to make greater analytical use of the data (e.g., in comparisons of high school cohort survival across states). It also addresses issues in assessing the limitations of the data and proposals to further augment the data.

Session Title: Web Dialogues: A New Tool for Evaluators
Demonstration Session 888 to be held in Texas B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Jerome Hipps, WestEd, jhipps@wested.org
Laurie Maak, WestEd, lmaak@wested.org
Abstract: Focus groups are a staple in the evaluator’s toolkit, valued for gathering stakeholder responses on key questions. Imagine a scenario where participating stakeholders are situated in multiple locations and engage in the discussion at a time convenient to them. Enter a new evaluation resource that harnesses the advantages of Web 2.0 technology: Web Dialogues, which facilitate virtual focus groups. The demonstration examines the rationale behind Web Dialogue-based focus groups and details the steps necessary to build a dialogue. These steps include framing the agenda, facilitating discussions, maintaining participant engagement, and summarizing key themes. We explore back-end features that help evaluators support communication and facilitate qualitative coding of discussions for later analysis. We review a three-day Web Dialogue used in a needs assessment process involving over 150 contributing participants. This review will show how the available tools were used as the dialogue progressed.

Session Title: Challenges and Promises for Using Mixed Methods: Lessons From Implementing Mixed Methods Evaluation
Multipaper Session 889 to be held in Texas C on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the
Chair(s):
James Riedel,  Girl Scouts of the United States of America, jriedel@girlscouts.org
Discussant(s):
Donna Mertens,  Gallaudet University, donna.mertens@gallaudet.edu
Mixed Method Evaluations Quality and Balance: A Case Study From New Zealand Public Sector
Presenter(s):
Yelena Thomas, Ministry of Research Science and Technology, yelena.thomas@morst.govt.nz
Abstract: Mixed methods and evaluation quality are topics of interest for many. There are many approaches and arguments in favour of one method or another. This presentation describes the mixed method approach and explains why it is the most successful approach for evaluating government policies in New Zealand. The mixed method approach uncovers the multifaceted interventions of any public policy and shows the impacts on different user groups. It also provides cost-effective and comprehensive impact evaluations. There are, of course, challenges with this approach. This presentation discusses the challenges the author has encountered when implementing the approach and the risk mitigation strategies employed. The presentation also describes how the New Zealand context compares to other countries and discusses whether or not the same approach would work elsewhere.
Evaluating Migrant Education Programs: Quality and Inquiry
Presenter(s):
Karen Vocke, Western Michigan University, karen.vocke@wmich.edu
Carl Westine, Western Michigan University, carl.d.westine@wmich.edu
Brooks Applegate, Western Michigan University, brooks.applegate@wmich.edu
Ilse Schweitzer, Western Michigan University, ilse.a.schweitzer@wmich.edu
Abstract: Addressing the needs of migrant children and their families at a variety of levels--social, linguistic, economic, and educational--is the focus of migrant education programs across the country. Migrant programming is varied in its focus and success. Even more noteworthy is the lack of consistent, cohesive evaluation. Our research initiative investigates the multiple stakeholder groups associated with migrant farm workers in Michigan and Texas and serves as the foundation for quality evaluation. This initiative employs a mixed-method, multi-tiered strategy to collect data on the existing migrant education programs in Michigan from multiple informant perspectives, thus providing a holistic and in-vivo understanding of the summer Michigan Migrant Education Programs. Findings from our migrant director survey, covering 79 percent of all migrant directors in Michigan, are presented. These data serve as the foundation for developing an effective, responsive evaluation to inform timely decision making by migrant education policy makers.
Evaluating a Narrative Intervention Model (NIM) based HIV/STI (Sexually Transmitted Infection) Prevention Intervention in Urban India
Presenter(s):
Minakshi Tikoo, University of Connecticut, tikoo@uchc.edu
Stephen Schensul, University of Connecticut, schensul@nso2.uchc.edu
Abstract: A great majority of women in the world are exposed to HIV/AIDS and sexually transmitted infections (HIV/STI) through the behavior of their husbands. However, few efforts have been made to directly address HIV/STI risk within the marital relationship. There is increasing recognition that to decrease married women’s HIV/STI risk it is necessary to address the couple’s interaction and social dynamics. This paper will present evaluation results from a culturally-based narrative intervention model (NIM), delivered using a randomized control design to assess the impact of couples counseling versus individual counseling versus a combination of couples and individual counseling, each alongside standard treatment. This project is a five-year NIMH-funded project initiated in May 2008 that focuses on developing a culturally-appropriate and relationship-centered intervention aimed at the marital dyad to promote primary prevention of HIV/STI among married women, ages 18-40, living in an urban poor community of over 400,000 in Mumbai, India.
Using a Mixed Methodology to Evaluate an Entertainment-Education Intervention Directed to the Spanish-Speaking Latino Community of Colorado
Presenter(s):
Mariana Enriquez-Olmos, Independent Consultant, marianaenriquez@hotmail.com
Cristina Bejarano, Independent Consultant, bejaranocl@gmail.com
Abstract: This presentation will describe the results of the evaluation of an entertainment-education intervention targeting the Spanish-speaking community of Colorado. Funded by the Colorado Health Foundation, “Encrucijada: Sin Salud No Hay Nada” (Crossroads: There Is Nothing Without Health) was a TV miniseries of twelve 30-minute episodes that aired on a Spanish-language TV network in Colorado from spring to summer of 2009. The evaluation of this intervention used a mixed methodology that included analysis of primary and secondary data. The evaluation found Encrucijada to be a highly successful intervention, with statistically significant increases in disease management behaviors, healthy living behaviors, and behaviors related to seeking enrollment in public health insurance. The results of this evaluation are being used for decision-making purposes to possibly continue funding and/or expansion of this project.

Session Title: Evaluation System of Research, Technology, and Development (RT&D) to Induce Innovation: Strategy, Process, and Reflection
Multipaper Session 890 to be held in Texas D on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Naoto Kobayashi,  Waseda University, naoto.kobayashi@waseda.jp
On the Feedback of Evaluation Results to the Management of Advanced Industrial Science and Technology (AIST)
Presenter(s):
Yoshiaki Tamanoue, National Institute of Advanced Industrial Science and Technology, y.tamanoue@aist.go.jp
Hidenori Endo, National Institute of Advanced Industrial Science and Technology, h.endo@aist.go.jp
Shigeko Togashi, National Institute of Advanced Industrial Science and Technology, s-togashi@aist.go.jp
Shuichi Oka, National Institute of Advanced Industrial Science and Technology, s.ako@aist.go.jp
Hiroyuki Suda, National Institute of Advanced Industrial Science and Technology, h.suda@aist.go.jp
Kenta Ooi, National Institute of Advanced Industrial Science and Technology, k-ooi@aist.go.jp
Kanji Ueda, National Institute of Advanced Industrial Science and Technology, k-ueda@aist.go.jp
Abstract: We carried out the evaluation of the research units during the 2nd mid-term (2005-2009) of AIST from the perspective of outcomes that promote innovation. We have summarized the effectiveness of the evaluation system as follows: it is effective in improving research unit performance and management, and in communicating the research management concept, which consists of "Full research", "Outcome of research", "Road map", etc. However, the evaluation has not contributed satisfactorily to AIST top management due to an insufficient feedback loop. For the 3rd mid-term evaluation, we propose an additional feedback process to strengthen the PDCA cycle for total AIST management, with a systematic analysis of reviewers' comments fed back to AIST management. We will present some results on the new feedback process and discuss its effectiveness.

Session Title: Thinking Outside the Evaluation Report Box: Transforming Evaluation Results Into a Structural Change Grantmaking Toolkit
Panel Session 891 to be held in Texas E on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Evaluation Use TIG and the Non-profit and Foundations Evaluation TIG
Chair(s):
Steven LaFrance, Learning for Action Group, steven@lfagroup.com
Abstract: We all hope that our evaluations will inform practice. But many evaluation reports are not used by the client, let alone a wider audience. This panel presentation will share how the project leadership, facilitator and evaluator worked together throughout the span of the initiative being evaluated, and how the resulting integration of evaluation frameworks and processes into the project led to the development of a final product, a toolkit that incorporated the lessons learned and will be distributed and hopefully used by a much larger audience than a conventional evaluation report might have reached. The project of focus is Common Vision, initiated by Funders for LGBTQ Issues, whose goal was to lead a group of grantmakers through a process to discover how they can support structural change in the service of widespread equity, and transfer those lessons learned to the field of philanthropy to advance similar efforts.
Introducing the Common Vision Process
Karen Zelermyer, Funders for LGBTQ Issues, karen@lgbtfunders.org
Ellen Gurzinsky, Funders for LGBTQ Issues, ellengurz@yahoo.com
Robert Espinoza, Funders for LGBTQ Issues, robespinoza@yahoo.com
Karen Zelermyer is the president and CEO of Funders for LGBTQ Issues (Funders), which conceptualized the Common Vision process and will be the lead disseminator of the toolkit. Funders has been actively working towards structural change funding in their own work, producing collaborative research, frameworks, and initiating an LGBTQ Racial Equity Campaign. Funders envisioned a uniquely collaborative process for Common Vision, within the work of the cohorts as well as by creating a deep partnership between the facilitation, leadership and evaluation teams. This presentation will include background and context on the process, descriptions of the project model and cohorts, as well as insight into how the evaluation model was chosen.
Sharing Best Practices for Facilitating a Formative Process
Jara Dean-Coffey, jdcPartnerships, jara@jdcpartnerships.com
Jara Dean-Coffey, Founder and Principal of jdcPartnerships, was the facilitator of the Common Vision process, designing and leading meetings to move the funder cohorts from learning about the building blocks of structural change to issuing RFPs and funding projects that address structural barriers to equity in two regions. As an experienced facilitator as well as an evaluator, this presenter provides a comparative perspective on the value of collaboration among program designers, implementers and evaluators, who are typically more removed from each other than in the Common Vision experience. This presentation will also provide lessons learned about facilitating a process that is formative in nature and informed by ongoing evaluation efforts, as well as suggestions for useful practices in similar processes.
Exploring an Integrated Evaluation Approach
Steven LaFrance, Learning for Action Group, steven@lfagroup.com
Steven LaFrance, principal and founder of LFA Group, was the director of the Common Vision evaluation team, tasked with assessing the pilot program’s best practices and replicability. By being present at each cohort meeting, collecting quantitative and qualitative data on an on-going basis, and engaging in a partnership with the facilitator and leadership, Steven was able to share lessons learned in real time. This presentation will provide a description of the evaluation team’s role and the benefit of having evaluation integrated into the project from the beginning. The presentation will also detail how evaluation frameworks were used throughout the process and how this model of evaluation elevated the level of rigor and discipline in the grantmaking process.
Transforming Evaluation Results Into a Structural Change Grantmaking Toolkit
JT Taylor, Learning for Action Group, jt@lfagroup.com
JT Taylor, senior consultant at LFA Group, was the manager of the Common Vision evaluation team, and her central role was to lead the development of a final Common Vision product. The constant feedback loop between evaluators, cohort members, leadership and facilitation allowed the evaluation team to be flexible, and adapt as needed. Towards the close of the process, the evaluation team recognized that a practical but malleable toolkit would be the most useful product, for both cohort members and the funding field at large. This presentation will introduce the toolkit, its components and purpose, and discuss how each piece evolved out of Common Vision’s engagement with the original tools and processes. The presentation will also share plans for use and dissemination of the toolkit going forward.

Session Title: How to Write an Evaluation Plan
Skill-Building Workshop 892 to be held in Texas F on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
LaTisha Marshall, Centers for Disease Control and Prevention, lmarshall@cdc.gov
Abstract: Learning Objectives: At the end of the presentation, participants will be able to create an evaluation plan outline and understand what elements make a plan useful. An evaluation plan is a document used to guide the planning of activities, processes, and outcomes of a program. It is a dynamic tool that can change over time, as needed, to complement program changes. It creates direction for accomplishing program goals and objectives by linking evaluation and program planning. Further, it facilitates getting stakeholder buy-in and commitment to program goals, documenting programmatic changes, and identifying and utilizing key resources to accomplish evaluation activities. This workshop will introduce new evaluators to a guidance document and tools created by the Centers for Disease Control and Prevention, Office on Smoking and Health, that provide practical guidance on writing an evaluation plan. Each participant will write an evaluation plan outline based on sample exercises.

Session Title: Say Goodbye to Power Point and Hello to Gallery Walks!
Skill-Building Workshop 893 to be held in CROCKETT A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the
Presenter(s):
Cassandra ONeill, Wholonomy Consulting, cassandraoneill@comcast.net
Allison Titcomb, ALTA Consulting, atitcomb@cox.net
Abstract: In our work we often need to present information to other people. A Gallery Walk is a fun and engaging way to present information in a meeting, a planning session, during a training, or when teaching. The skill we will be teaching is how to use a Gallery Walk to present information to groups. Posters are made with highlights of the information to be presented, either in advance or during a meeting. In this skills building session, participants will have the experience of making a poster to share highlights of an article they read and discuss in small groups. This will be followed by a Gallery Walk of all the posters created by the group. Resources will be given to learn more about using Gallery Walks after the session. Say Hello to high engagement learning with Gallery Walks.

Session Title: Expanding Our Knowledgebase: Current Research on Teaching Evaluation
Panel Session 894 to be held in CROCKETT B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Abstract: Recent calls for research on evaluation highlight the importance of exploring professional issues, including evaluation training (Mark, 2008). Although information has been published about teaching evaluation, existing studies tell us little about how individuals who are currently practicing evaluation were trained to do their jobs, the type of evaluation-related training individuals within specific substantive disciplines (e.g., public health and education) receive, or the promise of unique instructional approaches for acquiring competence in evaluation. Such information is valuable for individuals who design academic coursework and professional development trainings. Current research covering each of the aforementioned topics will be presented in an effort to begin filling gaps in the existing knowledgebase and to stimulate ideas for future research.
A Descriptive Study of Evaluator Course Taking Patterns and Practice
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Leslie Fierro, SciMetrika, let6@cdc.gov
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
Discussions about evaluation practice have mainly focused on descriptions of strategies and methods used. Less is known about peripheral determinants of practice, such as who is conducting evaluations and how they were trained to do so. Data regarding courses completed by practitioners who self-reported evaluation as their primary or secondary professional identity were analyzed to understand: What courses do practitioners report completing? Where do they complete this coursework? What constellations of coursework are practitioners most likely to complete? What factors are associated with course-taking patterns? Study findings are useful for practitioners and those en route to careers involving evaluation: persons seeking guidance about what courses might enhance their evaluation practice can identify the coursework and professional development that experienced practitioners completed. Findings are also useful for academic and professional development planning; coordinators can use our findings to shape curricula and offer courses for practitioners in areas not frequently taken.
Evaluation Coursework in Schools and Programs of Public Health
Leslie Fierro, SciMetrika, let6@cdc.gov
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Evaluation is one of ten essential services of public health, yet little is known about how public health academe prepares students to design and conduct evaluations. In this presentation, findings from a study that systematically examined the prevalence and content of required coursework in program evaluation for students acquiring a MPH in epidemiology or social and behavioral sciences will be shared. Schools and programs of public health accredited by the Council on Education in Public Health (CEPH) as of August 2008 which offered both a MPH in epidemiology and a MPH in an area of social and behavioral sciences (N = 51) were included in the study. Findings from a review of over 1,000 course descriptions associated with completing a MPH in epidemiology or social and behavioral sciences will be discussed, and details regarding the specific content of evaluation-related coursework will be shared.
Contexts and Teaching Evaluation: An Example From Educational Administration Programs
Tara Shepperson, Eastern Kentucky University, tara.shepperson@eku.edu
Findings from a recent study of evaluation courses in doctoral programs for educational administrators support the prevalence of evaluation courses in schools of education. Yet findings suggest that evaluation practice is understood within specific educational constructs that contextually focus coursework. Studying evaluation within disciplines may require careful consideration of multiple influences. In educational administration, these include academic reform initiatives, national policy requirements, and professional standards. These and other issues affect how evaluation is taught. Methods for collecting data and relevant background issues will be discussed as possible ways to frame future studies to develop better intra-disciplinary or cross-disciplinary approaches to investigating the teaching of evaluation.
Informal Discussion as Socialization and Teaching Tools in Program Evaluation
Anne Vo, University of California, Los Angeles, annevo@ucla.edu
What is involved in the process of mastering the art of program evaluation? How do program evaluators learn their craft? This study addresses these questions by describing how an evaluation instructor uses an informal learning space to apprentice student evaluators into their work. Data collection was completed through participant-observation, interviews, and collection of site documents. Analysis was grounded in conversation analytic and ethnographic methods. Results suggest practical strategies that evaluation instructors can use to facilitate learning through discussion in informal settings and how to create a safe and productive space in which such learning can occur. Overall, study findings provide a useful model for how to infuse a traditional evaluation training program, which usually consists of coursework and practica, with alternative learning opportunities.

Session Title: The Network of Network Funders: Evaluating Networks and Evaluating with a Network Lens
Think Tank Session 895 to be held in CROCKETT C on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the
Presenter(s):
Astrid Hendricks, California Endowment, ahendricks@calendow.org
Discussant(s):
Gale Berkowitz, David and Lucile Packard Foundation, gberkowitz@packard.org
Mayur Patel, John S and James L Knight Foundation, patel@knightfoundation.org
Gigi Barsoum, California Endowment, gbarsoum@calendow.org
Abstract: Over the past year, a community of practice of national and local funders has been collaboratively learning to address the challenges of defining, funding, assessing, and documenting networks as a philanthropic investment strategy. This think tank session will begin by introducing the core findings from the 2010 Grantmakers for Effective Organizations report on foundations and networks and engage participants in a discussion of the specific issues of evaluating networks and evaluation within networks. Participants will be asked to discuss the principles and capacity needed to evaluate networks effectively, best methods and frameworks for defining and assessing networks, examples of effective evaluations of networks, and recommendations for additional demonstration and documentation of network evaluations.

Session Title: Impact Evaluation of Approaches to Affect Local Resiliency to Disasters, Enhanced Public Health Emergency Peer Networks, and First Responder Psychological Recovery
Multipaper Session 896 to be held in CROCKETT D on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Karen Pendleton,  University of South Carolina, ktpendl@mailbox.sc.edu
Evaluating Peer Networks Among New Local Health Officials: Assessing Relationships in Responding to Local Health Crises Following Leadership and Managerial Training
Presenter(s):
Sue Ann Sarpy, Sarpy and Associates LLC, ssarpy@tulane.edu
Alicia Stachowski, George Mason University, astachow@gmu.edu
Seth Kaplan, George Mason University, skaplan1@gmu.edu
Abstract: Unlike evaluation efforts focusing only on proximal outcomes, the Survive and Thrive training program evaluation for new Local Health Officers (LHOs) assesses more distal measures of program impact. A component of this evaluation focuses on assessing the extent to which the program fosters relationship-building among LHOs. These social and informational relationships are critical to the LHO’s success, especially during public health crises. Thirty-eight LHOs participating in this nationwide program reported their use of fellow trainees and coaches as sources of knowledge, advice, and support six months following training. Participants also reported the extent to which these relationships were utilized in responding to the H1N1 crisis. Social network analyses were conducted to identify and examine these networks. Qualitative information also was gathered regarding how to foster these relationships over time. Implications of these findings will be discussed with respect to developing and evaluating social networks in workforce development and leadership training initiatives.
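For readers unfamiliar with the social network analyses referenced above, the sketch below shows how such an advice network is often summarized (density, most-consulted actors). The edge list is hypothetical, not the study’s data, and the node labels are invented.

```python
import networkx as nx

# Hypothetical directed advice network: an edge (A, B) means local health officer "A"
# reported seeking knowledge, advice, or support from fellow trainee or coach "B".
edges = [
    ("LHO_01", "Coach_A"), ("LHO_01", "LHO_04"),
    ("LHO_02", "Coach_A"), ("LHO_03", "LHO_04"),
    ("LHO_04", "Coach_B"), ("LHO_05", "LHO_04"),
]
g = nx.DiGraph(edges)

# Whole-network cohesion and the actors most frequently turned to for advice.
print("Network density:", round(nx.density(g), 3))
in_degree = sorted(g.in_degree(), key=lambda pair: pair[1], reverse=True)
print("Most-consulted actors:", in_degree[:3])
```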
Disaster Resilience Evaluation: An Example From Miami-Dade Faith and Community-based Organizations
Presenter(s):
Michael Burke, RTI International, mburke@rti.org
Joe Eyerman, RTI International, eyerman@rti.org
Brian Burke, RTI International, bjb@rti.org
Abstract: During Hurricanes Katrina and Rita, the role of local faith-based and community organizations received attention due to the scale, speed, and extent of their response efforts. As a result, local organizations are now recognized in federal policies for their potential role in helping communities prepare for and respond to disasters. However, because federal guidelines focus on flexibility and local planning, specific guidance about developing partnerships to structure planning and response activities is not provided. An example of a multifaceted approach to assessing and evaluating a Department of Homeland Security funded resilience building project in Miami will be presented. In addition to a project overview, issues related to the justification of the sample selected, the survey questions asked, and the organizations to be included in a social network map will be examined. A draft management model to promote resilience will be presented and issues related to evaluation quality will be discussed.
First Responder Immediate Psychological Trauma: How Are We Helping? A Meta-analysis
Presenter(s):
Lynne Wighton, Vanderbilt University, lynne.g.wighton@vanderbilt.edu
Abstract: This meta-analysis examines the effects of Psychological Debriefing on Post-Traumatic Stress Disorder (PTSD) and related symptoms reported by First Responders (e.g., firefighters, police, and emergency medical technicians). Psychological Debriefings are meetings held soon after potentially traumatic events to help ameliorate their emotional consequences. Psychological Debriefing is endemic in First Responder culture; however, research on its effects is mixed. Preliminary analysis shows no measurable effect of Psychological Debriefing on subsequent measures of PTSD and related symptoms. To date there is no other meta-analysis of Psychological Debriefing that limits the subject pool to First Responders. This meta-analysis highlights gaps in the literature about Psychological Debriefing and its effects on First Responders: lack of demographic data, missing debriefing protocols, and shortcomings in research design (no randomized trials and unreported data on the comparability of treatment and control groups). Further investigation regarding the efficacy of psychological debriefing for First Responders is warranted.
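As a purely illustrative aid to the meta-analytic approach described (not the author’s data or software), a minimal DerSimonian-Laird random-effects pooling of standardized mean differences might look like the following; the study effects and variances are invented.

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their variances.
effects = np.array([0.10, -0.05, 0.20, 0.02, -0.12])
variances = np.array([0.04, 0.03, 0.06, 0.02, 0.05])

# Fixed-effect weights and the Q heterogeneity statistic.
w = 1.0 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed_mean) ** 2)
df = len(effects) - 1

# DerSimonian-Laird estimate of between-study variance (tau^2).
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and its standard error.
w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"Pooled effect = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```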

Session Title: Feminist Evaluation and Gender-Specific Programs
Multipaper Session 897 to be held in SEGUIN B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Linda Thurston,  National Science Foundation, lthursto@nsf.gov
Discussant(s):
Jan Middendorf,  Kansas State University, jmiddend@ksu.edu
Feminist Evaluation: Practical Application in Nonpractical Situations
Presenter(s):
Abstract: The presentation shares a practical understanding of how feminist evaluation strengthened more traditional and mainstream evaluation approaches in a context where LGBT stakeholders were often overlooked. A case narrative is provided that demonstrates how using a feminist-driven approach allowed an evaluator to surmount the barriers and constraints she faced in the evaluation, and how this mixed approach resulted in useful and used evaluation findings.
Post Traumatic Stress Disorder (PTSD) and Co-occurring Mental Health and Substance Abuse Disorders: Does Gender Matter?
Presenter(s):
Kathryn A Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
Abstract: This paper discusses the evaluation of the Project for Recovery, Encouragement and Empowerment (Project FREE). Project FREE provides community-based treatment and recovery services for adults with substance abuse disorders or a co-occurring substance abuse/mental health disorder and involvement in justice systems. Treatment emphasizes client engagement through education and therapeutic alliance, complemented by recovery support services. Some clients are enrolled in Seeking Safety simultaneously. Seeking Safety helps clients understand both their mental illness and substance abuse and why they so frequently co-occur; teaches safe coping skills that apply to both; explores the relationship between the two disorders in the present (e.g., using substances to cope with flashbacks); and helps clients understand that healing from each disorder requires attention to both. The intent of this evaluation was to assess differences in outcomes between women with PTSD and co-occurring mental health/substance abuse disorders enrolled in treatment as usual and women enrolled in Project FREE treatment protocols and Seeking Safety.
Meta-evaluations of HIV/AIDS Prevention Intervention Evaluations in Sub Saharan Africa With Specific Emphasis on Implications for Women and Girls
Presenter(s):
Tererai Trent, Western Michigan University, tererai.trent@heifer.org
Abstract: Despite numerous attempts by international agencies to halt the spread of Human Immunodeficiency Virus (HIV) and Acquired Immunodeficiency Syndrome (AIDS), nowhere has the impact of HIV/AIDS been felt more acutely than among women and girls in Sub-Saharan Africa (SSA). SSA women account for 59% of adults over the age of 15 living with HIV/AIDS and 76% of those aged 15-24 who are infected (UNAIDS, 2007). The evidence on gender disparities in infection rates is indisputable, and there is an urgent need to identify what is missing in HIV/AIDS prevention interventions: What is the evidence base upon which programs are grounded? Program evaluations should influence and inform policy and funding and provide a critical feedback mechanism for the design of HIV interventions that work for girls and women. This presentation will explore the actual HIV/AIDS prevention evaluation practices of bilateral and multilateral agencies in Sub-Saharan Africa and the implications for women and girls.

Session Title: Case Studies in International M&E
Multipaper Session 898 to be held in REPUBLIC A on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Jim Rugh, Independent Consultant, jimrugh@mindspring.com
Using Collaborative and Systemic Approaches to Evaluate a Health Promotion Initiative Involving Canada and Brazil
Presenter(s):
Eduardo Marino, Independent Consultant, eduardo.marino@yahoo.com.br
Thomaz Chianca, COMEA Evaluation Ltd, thomaz.chianca@gmail.com
Abstract: In planning social projects, logic models can be very useful for defining priorities and focusing the work at hand. However, evaluations guided exclusively by this tool can become irrelevant, as they address only what was “planned” rather than what actually “happened,” thus disregarding the complex environments in which these social projects are implemented. In this session, the presenters will critically discuss the application of a mix of collaborative and systemic approaches to evaluate a health promotion program involving six Brazilian universities in partnership with the Canadian Public Health Association. The evaluation was facilitated by external evaluators and involved the following steps: i) identifying the target systems of the evaluation; ii) mapping and analyzing the local systems; iii) recognizing common patterns (results and critical aspects in the implementation of the intervention); and iv) mapping and analyzing the project coordinators’ network as a system.
Horizontal Evaluation: An Institutional Learning and Knowledge Building Case From Africa
Presenter(s):
Hubert Paulmer, Harry Cummings and Associates Inc, hubertpaulmer@gmail.com
Harry Cummings, University of Guelph, cummingsharry@hotmail.com
Abstract: The paper looks at the concept and components of Horizontal Evaluation and presents the recent experience and collaborative process used to design and conduct this evaluation in Africa. It shares the evaluation design, the methods and approaches used to collect data, and the evaluation criteria considered. The paper also presents how evaluation issues such as capacity building, partnerships, and sustainability were assessed during this evaluation, and reflects on how the evaluation enhanced organizational and stakeholder learning and facilitated knowledge building. It describes the groups of stakeholders and their levels of participation in this horizontal evaluation, and presents the advantages and critical success factors of horizontal evaluation over traditional evaluations. This was an end-of-project evaluation for a project funded by the Canadian International Development Agency and implemented in five countries across Africa.
Integration of a Safe Water and Hygiene Program With Routine Childhood Immunization Services: Design Strategies and Lessons Learned From a Mixed Method Evaluation of a One-Year Pilot Project at 18 Health Facilities in Nyanza Province, Kenya
Presenter(s):
Karen Schlanger, University of Georgia, kschlang@uga.edu
Tove Ryman, Centers for Disease Control and Prevention, cnu8@cdc.gov
Margaret Watkins, Centers for Disease Control and Prevention, maw8@cdc.gov
Brian Otieno, Safe Water and AIDS project, brayos5@yahoo.com
Cliff Ochieng, Safe Water and AIDS project, ochieng_cliff@yahoo.com
Patricia Richards, University of Georgia, plr333@uga.edu
Abstract: This paper presents results of a mixed-method evaluation of a pilot project in Kenya designed to increase on-time completion of childhood vaccinations and improve household water treatment and hygiene. Globally, there is increasing interest in integrating various health services into established immunization programs. While integration appears advantageous, its impact has not been systematically evaluated, and appropriate methods for evaluating service integration are needed. This evaluation tests the effectiveness and impacts of two integration strategies (interventions implemented by nurse vs. lay health worker) for providing household water treatment and hygiene products, and education at routine childhood vaccination visits. The quantitative evaluation included population-based household surveys of approximately 1200 children at baseline and follow-up in each of the intervention and comparison arms of the study. The qualitative component, 30 interviews and 9 focus groups, enhances quantitative findings by assessing health worker and parent perceptions regarding program acceptability and integration strategy preferences. Methods and results will be presented with an emphasis on unique and complementary aspects of this mixed method analysis.
The Influence of Cooperative Structure on Commitment and Member Satisfaction: A Case of the Murang’a Nutribusiness Cooperative in Kenya
Presenter(s):
Mary Marete, Pennsylvania State University, mmm455@psu.edu
Abstract: This study investigates two components of cooperative success, member commitment and member satisfaction, in a women-owned cooperative in Kenya. Both are critical to the success of organizations (Cotteril, 2002; Fullerton, 2005; Bhruynis, 2001). Member commitment and member satisfaction influence, and are in turn influenced by, the structure of an organization. Using Allen and Meyer’s (1996) measure of organizational commitment, the extent of Murang’a Nutribusiness cooperative members’ commitment to their organization is examined. Previous research (Goo & Huan, 2008; Fullerton, 2005; Harun and Hasrul, 2006; Morgan & Hunt, 1994) has documented that affective commitment contributes positively to the success of an organization while continuance commitment does not. The Schmid (2004) situation-structure-performance theoretical framework for the evaluation of organizations is used to identify key variables likely to increase member commitment. Alternative emerging cooperative structures proposed by Chaddad and Cook (2004) are used to identify an optimal cooperative structure for increased member commitment in the nutribusiness cooperative.

Session Title: Policy Evaluation and Public Health: Multifaceted Approaches and Examples From the Field
Multipaper Session 899 to be held in REPUBLIC B on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Karen Debrot, Centers for Disease Control and Prevention, kdebrot@cdc.gov
The Role of Rigorous Policy Evaluation in Violence Prevention
Presenter(s):
Jennifer Matjasko, Centers for Disease Control and Prevention, jmatjasko@cdc.gov
Abstract: Public policy analysis and evaluation can take many forms—from the analysis of a proposed policy to assessing the impact of an existing policy on various outcomes. This presentation will cover the different types of policy analysis and the various methods that can be employed in rigorous policy evaluations. In addition, the presentation will provide examples of policy evaluations currently funded by CDC’s Division of Violence Prevention. The purpose of these projects is to assess the effectiveness of policies designed to change the economic or environmental characteristics of a community to reduce youth violence. They include: (1) an evaluation of a housing voucher policy on youth violence; (2) an analysis of the relationship between school funding reform and school choice policies and community violence; and (3) two large-scale natural experiments of community economic development. The results from these projects will further our understanding of how policies can reduce youth violence.

Session Title: Strengthening Schools and Youth Through the Use of Evaluation: Issues and Perspectives
Multipaper Session 900 to be held in REPUBLIC C on Saturday, Nov 13, 2:50 PM to 4:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Teacher Benefits of Collaborative Action Research: Results of a Quantitative Inquiry
Presenter(s):
John A Ross, University of Toronto, jross@oise.utoronto.ca
Catherine D Bruce, Trent University, cathybruce@trentu.ca
Abstract: Studies of action research are overwhelmingly qualitative, in part because the essential characteristics of action research impede quantitative inquiry. The shortage of quantitative studies limits opportunities for methodological triangulation and inclusion of action research studies in meta-analysis. We report two linked studies (N=80 and 105) that found that Collaborative Action Research made a statistically significant contribution to two dimensions of teacher attitudes to educational research and three dimensions of teacher efficacy. The effects of action research were robust across conditions of teacher gender, career stage, and qualifications. Teachers benefited more from action research if they (i) recognized the importance of the data analysis and reflection stages of action research; (ii) participated in action research that was rigorous and/or led to changes in their conceptual understanding; (iii) worked in schools that fostered professional learning, and (iv) had participated in research activities prior to these action research studies.
Youngsters Marked by Involvement in Crime Acting as the Evaluation Team
Presenter(s):
Daniel Brandão, Fonte Institute, daniel@fonte.org.br
Abstract: This paper presents a participative evaluation conducted in Brazil in which youngsters whose life paths were marked by involvement in crime, and who were therefore included in social programs, were invited to act as the evaluation team of the program itself. The evaluators chose to include these youngsters in the decision-making spheres of the evaluation, thereby allowing them to influence a program that directly affected their lives. At the same time, the evaluation required offending adolescents and interviewers to meet. Having a youngster as interviewer made it possible to build a quasi-horizontal relationship between the cultural universes of interviewer and interviewee. Strongly marked by shared language and stories potentially developed in partnership, the dialogue in the interviews brought forth memories, information, and feelings with an authenticity not likely to exist otherwise.
Turning Points: Critical Incidents in the Formation of Academic Mentoring Relationships: An Evaluation of a University Learning Community
Presenter(s):
Nancy Rogers, University of Cincinnati, nancy.rogers@uc.edu
Tommy Chou, University of Cincinnati, anomih@gmail.com
Abstract: Student retention and graduation are important issues in higher education. One way to address these issues is through Peer Learning Communities, which provide academic and social support for student members. The formation and development of mentoring relationships has important benefits for both the mentor and the mentee, yet these relationships are difficult to cultivate in a structured environment. Fostering a better understanding of the mentoring process is critical to improving upon existing practices. The Critical Incident Technique (CIT) is the basis for this evaluation, which was designed to explore and understand successful peer mentoring relationships within academic peer learning communities. Through a series of participant-observer reflections, interviews, and guided discussions, critical factors for mentoring relationship development and potential best practices were identified for use in developing mentoring training workshops and activities. Further, the approach increased participants’ appreciation for the mentoring relationship.
Process Evaluation of An Immersion-Learning Experience for Grades K-4 Youth
Presenter(s):
Laurie Ruberg, Wheeling Jesuit University, lruberg@cet.edu
Jackie Shia, Challenger Learning Center, jshia@cet.edu
Cassie Lightfritz, Center for Educational Technologies, clightfritz@cet.edu
Annie Morgan, Challenger Learning Center, amorgan@cet.edu
Abstract: This study explores how a variety of technology tools and systems were employed to support the internal formative evaluation of the design, construction, and implementation of a grades K-4 space mission simulator. The new simulator for early elementary age youth builds on the success of the existing Challenger Learning Center simulator, which has served more than 100,000 students and teachers. The evaluation summarizes the results of a process approach that documented the planning, design, development, and implementation strategies, from the assessment of facilities for restructuring to the design of lighting, furniture, program activities, and the scenario context. The discussion includes a description of the media tools and collaborative systems used to collect and organize the documentation, decision-making, and outcomes of formative testing. The presentation will show how media tools were used to document the design, construction, implementation, and formative testing processes associated with the physical and programmatic components of the simulator development.
