Session Title: Equity and Quality in Evaluation: Ideas and Illustrations From the Field
Panel Session 502 to be held in Lone Star A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Presidential Strand and the Research on Evaluation TIG
Chair(s):
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Discussant(s):
Valerie Williams, The Globe Program, vwilliams@globe.gov
Abstract: As a judgmental practice, evaluation inherently advances particular values. The values may be those of methodological integrity and defensibility, political independence and credibility, usefulness, cultural responsiveness, democratization, or some combination thereof. These different value stances well reflect the theoretical pluralism of the evaluation field. The values of evaluation show up in evaluation practice through the evaluation’s purpose and audience, the key questions asked, and especially the criteria used to make judgments of program quality. This panel explores the justification for and characteristics of an evaluation practice that intentionally and explicitly advances the value of equity, and its contributions to evaluation quality. Equity refers to the explicit representation of the interests of stakeholders least well served in the context at hand toward greater fairness in opportunity and accomplishment for these stakeholders. The panel features the contexts of STEM education program evaluation.
What is Equity in Educational Evaluation and How Does It Matter in Evaluation Quality?
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Equity, in our educative and values-engaged evaluation approach (EVEN), pertains to the fairness with which all members of the context are treated. It rests on a contextualized understanding of the dimensions of diversity that matter in that context. To advance equity means to ask evaluation questions about how well all diverse subgroups in a context are afforded program access and meaningful program experiences, and have opportunities to attain outcomes of consequence. It also means to attend in particular to subgroups that are under-served and under-represented in the context being studied. Advancing equity in these ways also contributes to evaluation quality. Following Ernie House, evaluation can claim neither truth nor beauty unless it is also accompanied by justice. Our work on equity-oriented evaluation updates and extends House's argument about evaluation quality.
Educative and Values-Engaged Evaluation Approach (EVEN): We Can Use Values
Jeremiah Johnson, University of Illinois at Urbana-Champaign, jeremiahmatthewjohnson@yahoo.com
Maria Jimenez, University of Illinois at Urbana-Champaign, mjimene2@illinois.edu
This presentation will highlight the experiences and perspectives of a team of internal evaluators applying the EVEN approach to evaluate a Math Science Partnership (MSP) funded by the National Science Foundation (NSF). Evaluation team members will offer snapshots of the EVEN approach in action and reflect on the ways in which equity (as an intentional values positioning) has influenced their evaluation practice, and consequently, the program at hand. Evaluators will highlight particularly successful and unsuccessful efforts to engage stakeholders with issues related to equity and diversity. Evaluators will also discuss ways in which their efforts fall short of the "ideal" EVEN evaluation. The presentation will conclude with a brief list of lessons learned and recommendations for engaging issues of equity in STEM evaluation contexts, with particular attention to the quality of EVEN evaluations.
Forewarned Is Forearmed: A Tale of Two EVEN Evaluations
Jeehae Ahn, University of Illinois at Urbana-Champaign, jahn1@illinois.edu
Ayesha Boyce, University of Illinois at Urbana-Champaign, boyce3@illinois.edu
This presentation shares a tale of two equity-oriented evaluations culled from our recent fieldwork. One was an evaluator-initiated evaluation of a public high school mathematics program serving a diverse student body, including a considerable number of underrepresented and underserved students; the other was a school-requested evaluation of a private middle school science outreach project involving an equally diverse student body, but with highly involved parents and, on the whole, more affluent backgrounds. Set against the backdrop of these different contextual circumstances and constraints, both evaluations endeavored to centrally engage with values of equity and diversity in access to, and opportunities and experiences in, STEM education, without compromising important evaluation priorities of the given context. In this presentation, we revisit some of the key equity-oriented practice decisions and actions we carried out, reflecting on the meanings of equity in these contexts and their connections to evaluation quality.

Session Title: Model Forms, Program Theory, And Unexpected Behavior: What Are the Implications For Program Implementation and Evaluation?
Think Tank Session 503 to be held in Lone Star B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Jonathan Morell, Vector Research Center, jonny.morell@newvectors.net
Discussant(s):
Jonathan Morell, Vector Research Center, jonny.morell@newvectors.net
Sanjeev Sridharan, University of Toronto, sridharans@smh.toronto.on.ca
Abstract: Visual forms of logic models (e.g., flowchart, system, input/output) constrain and influence both beliefs about program theory and choices about methodologies and measures. All this has a powerful influence on expectations about what a program will do and what evaluation can reveal, and thus on the surprises and unexpected program behaviors that lie in wait for evaluators. This think tank will explore these relationships. Participants will be presented with different logic model forms for the same program, accompanied by a discussion of the presenters' beliefs about the implications of each form for theory, methodology, and measurement. Our focus will be on what kinds of representations are good enough to assist program implementation and evaluation, and the relative advantages of those different forms. Literature from the field of knowledge translation will inform the discussion. Breakouts will probe, critique, and modify the presenters' assertions. Two scenarios will be presented and discussed.

Session Title: Youth Led Evaluation in Action: Stomping Out the Stigma of Mental Illness
Demonstration Session 504 to be held in Lone Star C on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Cheri Hoffman, Centerstone Research Institute, cheri.hoffman@centerstoneresearch.org
James Martin, Mule Town Family Network, jmartin@tnvoices.org
Abstract: Youth from the Mule Town Family Network System of Care project in Maury County, TN are planning a week-long "research camp" for the summer of 2010. Professional evaluators will train 8-10 young people in a participatory program evaluation curriculum known as "Stepping Stones." Community experts in poetry/spoken word, music and dance will join with youth to create a performance about the stigma associated with mental illness. Youth will present a community performance of their work, and then lead focus groups exploring the topic of stigma and how the artistic representations of youths' experiences of mental illness have changed people's perceptions. In this session, youth will share their experiences, the evaluation results of their project, and how initiating and carrying out an evaluation project has impacted their personal development. Staff will share the process of engaging youth in evaluation and the successes and challenges in completing a youth-led evaluation project.

Session Title: Strategic Learning: An Embedded Approach for Evaluating Complex Change
Panel Session 505 to be held in Lone Star D on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Gale Berkowitz, David and Lucile Packard Foundation, gberkowitz@packard.org
Discussant(s):
Gale Berkowitz, David and Lucile Packard Foundation, gberkowitz@packard.org
Abstract: This panel will discuss strategic learning, an approach to evaluation that works well with complicated and complex strategies that evolve over time and have multiple causal paths or ways of achieving outcomes. These strategies present unique challenges to conventional program evaluation and require fresh thinking and new approaches to ensure that evaluation is relevant and useful. Strategic learning means using evaluation to help organizations or groups learn in real-time and adapt their strategies to the changing circumstances around them. It means making evaluation a part of the intervention—embedding it so that it influences the process. The panel will describe the concept and principles of strategic learning and how it differs from traditional evaluation approaches. Presenters will describe what strategic learning looks like in practice based on their experiences, and will discuss innovative tools and methods that can be used to promote strategic learning.
The Packard Foundation: Strategic Learning and Systems Change
Julia Coffman, Center for Evaluation Innovation, jcoffman@evaluationexchange.org
In recent years, the David and Lucile Packard Foundation has shifted its evaluation approach toward strategic learning. While previously the Foundation focused on summative evaluation that made retrospective judgments about grantmaking programs, the Foundation has moved toward more real-time assessment and learning. This movement fits with the Foundation’s funding for long-term comprehensive strategies designed to produce significant changes on its priority issues. An example of this type of grantmaking, and the use of strategic learning, can be seen with the Preschool for California’s Children grantmaking program and evaluation. Packard knew that the process for achieving this program’s 10-year goal would unfold without a clear script. That prediction has come true; the grantmaking strategy has evolved over time, adapting to changing conditions and opportunities. This presentation will describe the evaluation used to inform the strategy as it has evolved, along with the unique design and methods created in the process.
The Colorado Trust: Strategic Learning and Advocacy
Ehren Reed, Innovation Network, ereed@innonet.org
The Colorado Trust recently introduced advocacy and systems change in its health care and health coverage grantmaking. With this addition, The Trust adapted its thinking about evaluation to include approaches that would both generate knowledge that grantees could use in real-time and that would inform The Trust’s own strategic learning about effective grantmaking. Funding for a 10-year advocacy strategy to achieve access to health for all Coloradans by 2018 provided the opportunity to try this new approach. This presentation will describe the evaluation connected to this strategy, which is led by Innovation Network in collaboration with a team of local evaluators. The evaluation incorporates informed, evidence-based decision making and evaluative thinking into ongoing strategy development at the local and state levels, and as a result has become a key part of the intervention to build stronger health advocacy in the state.
The California Endowment: Strategic Learning and Multicultural Interventions
Hanh Cao Yu, Social Policy Research Associates, hanh_cao_yu@spra.com
Complex multi-level and multi-dimensional factors affect the health of people in underserved and culturally diverse communities. The multicultural health strategies required to address these factors can be equally complex and are often emergent. This leads to a disconnect with traditional evaluation approaches and demands a shift in how we think about and approach evaluation. The California Endowment has been a leader in pushing the evaluation field toward new thinking about evaluation in diverse communities. This thinking incorporates strategic learning and uses a multicultural lens. It emphasizes partnership with communities to ensure that their voices and learning needs are prioritized. This presentation will describe learning from a research project funded by The California Endowment, focused on building the capacity of advocates to work with communities of color. The research serves to reexamine conventional notions of (1) how advocacy is defined, (2) mainstream groups’ relationship to communities of color, and (3) what constitutes effective capacity building models and approaches in partnership with communities of color.

Session Title: Promoting Quality Impact Studies: Constructive, Context-Appropriate Policies for Strengthening Research Designs for Impact Evaluations
Panel Session 506 to be held in Lone Star E on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Abstract: The recent controversy over the role and value of random assignment experiments, particularly with regard to the U.S. Department of Education, has raised the issues of what constitutes a strong evaluation design (one more likely to yield valid impact estimates and judgments of quality) and when such designs should be employed. As the evaluation community has grappled further with these issues, some tentative resolution of the controversy has resulted. Across its four presentations, this session will provide a framework for assessing the context as it relates to the value of alternative evaluation designs, report on a government assessment of when different designs are most appropriate, provide an example of mixing methods to strengthen a particular research design, and suggest multiple dimensions to consider in evaluating the value of different designs.
What's in an Evaluation Design? Matching the Policy Questions to the Program and Evaluation Context Before Making Methodological Choices
Eleanor Chelimsky, Independent Consultant, eleanor.chelimsky@gmail.com
Policy questions posed to evaluators in government may not always be the right questions. I argue in this paper that when evaluators develop their evaluation design, they should delay making methodological choices until they have examined, among other factors: the historical and political context of the program; the quality of prior evaluations, their results, and the difficulties they encountered; controversy over goals, program design, and the like; the specific positions of sponsors and stakeholders; and a host of other issues, such as whether there is a need for participation, the existence of public data sets, and the time allotted versus evaluative requirements. Only then can the evaluators determine the degree of fit between the questions posed and potential methodologies, and whether in fact those questions should stand or be changed.
A Variety of Rigorous Methods Can Help Identify Effective Interventions
Stephanie Shipman, United States Government Accountability Office, shipmans@gao.gov
While program evaluations take various forms to address different questions, federal policymakers are most interested in impact evaluations that help managers adopt effective practices to address national concerns. Concern about the quality of federal social program evaluations has led to calls for greater use of randomized experiments in impact evaluations. The randomized experiment is considered a highly rigorous approach for isolating program effects from other non-program influences, but it is not the only rigorous research design available and is not always feasible. To help congressional staff assess efforts to identify effective interventions, GAO was asked to identify 1) the types of interventions for which randomized experiments are best suited to assess effectiveness, and 2) the alternative evaluation designs used to assess the effectiveness of other types of interventions. In this paper, we will report our answers, drawn from an analysis of the evaluation methodology literature and consultation with evaluation experts.
Mixed-methods Evaluation Design for a Complex, Evolving Systems Initiative
Debra Rog, Westat, debrarog@westat.com
To assist systems in moving from managing to ending homelessness for families, the Gates Foundation is funding an initiative with three counties in the Pacific Northwest. This presentation will describe a longitudinal mixed-methods design for evaluating the Initiative's implementation and effectiveness at the systems, organizational, and family levels. At the systems level, qualitative and quantitative data will be collected for the demonstration counties as well as two comparison counties. At the organizational level, selected case studies will assess the impacts on individual homeless-serving organizations over time. At the family level, effects of the system on families' experiences and outcomes will be assessed by comparing two cohorts of families – a "no intervention/early intervention" cohort of families identified in the first year and an "intervention" cohort of families identified in Year 3. Each cohort will be tracked for 18 months and compared to a comparison group of families constructed from state data.
Designing for Success With Impact Evaluations: Dimensions of Quality for Evidence to Be Actionable
George Julnes, University of Baltimore, gjulnes@ubalt.edu
There has been considerable controversy over efforts to promote "rigorous" evaluation methods that might yield evidence appropriate for guiding federal programs. While recent efforts at reconciliation among proponents of traditions such as random assignment experiments, qualitative evaluations, and performance management have been useful in reminding us of the contextual influences on appropriate designs, more work remains. This presentation offers a framework for evaluation design that balances a focus on the validity of impact estimates with a complementary focus on methods that support valid valuation of program impacts.

Session Title: The American Evaluation Association and Its Local Affiliates: Shaping Our Future Together
Think Tank Session 507 to be held in Lone Star F on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the AEA Conference Committee
Presenter(s):
Rachel Hickson, Montgomery County Public Schools, rachel_a_hickson@mcpsmd.org
Discussant(s):
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Beverly A Parsons, InSites, bparsons@insites.org
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
Abstract: Local affiliates continue their dynamic role in AEA. Local affiliate rosters closely reflect overall AEA membership as well as the evaluation profession. The AEA 2010 work plan addresses policy that will shape the future of the relationship between affiliates and AEA. Members of the Board's policy work group on affiliates will be invited to this session to discuss their work and its status to date. A World Café format will then be used to discuss local affiliates' strategies and goals for their work, within the context of AEA's broad policies. Representatives of AEA affiliates at different stages of development will be invited to comment on affiliates' needs.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Making Decisions About Program Continuation: A Step-by-Step Process
Roundtable Presentation 508 to be held in MISSION A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Carla Clasen, Wright State University, carla.clasen@wright.edu
Betty Yung, Wright State University, betty.yung@wright.edu
Carl Brun, Wright State University, carl.brun@wright.edu
Katherine Cauley, Wright State University, katherine.cauley@wright.edu
Cheryl Meyer, Wright State University, cheryl.meyer@wright.edu
Abstract: Frequently, decisions have to be made about program continuation when initial funding for the program ends or is reduced. When such decisions must be made about multiple programs competing for limited ongoing funding, decision makers must take program effectiveness, cost, and popularity into account. Evaluators can help stakeholders identify the relevant factors and provide data that will assist in making sometimes difficult choices among programs. This presentation will describe a process that helps stakeholders identify the specific factors that should be taken into account, the relative weight of each factor in contributing to decision making, and a method of measuring each factor.
Roundtable Rotation II: New Innovations in Understanding and Measuring Transfer of Learning in Human Services Skills-based Training
Roundtable Presentation 508 to be held in MISSION A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Robin Leake, University of Denver, rleake@du.edu
Cathryn Potter, University of Denver, cathryn.potter@du.edu
Kathryn Schroeder, University of Denver, kathryn.schroeder@du.edu
Abstract: This roundtable will address the topic of transfer of learning in child welfare training. Because training is considered one of the key drivers for implementing practice and policy changes in an organization, evaluators must have a good understanding of the individual and organizational climate factors that influence learning and the transfer of learning to the job. Facilitators will discuss how Holton's (1996) model of transfer and the Learning Transfer Systems Inventory are being used in a training evaluation of the National Child Welfare Workforce Institute's leadership academy for supervisors and managers, and will describe the design, methods, and preliminary results of this ongoing evaluation. Participants will be invited to share models for conceptually understanding transfer of learning and innovative strategies for measuring transfer of learning outcomes.

Session Title: How Evaluation Policies Affect Evaluation Quality in a Texas Public School District
Panel Session 509 to be held in MISSION B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the
Chair(s):
Karen Looby, Austin Independent School District, karen.looby@austinisd.org
Discussant(s):
Maria Whitsett, Moak, Casey & Associates, mwhitsett@moakcasey.com
Abstract: Federal and state departments of education and local school boards regularly institute policies requiring the evaluation of educational programs. Often, these policies support the development and implementation of high quality program evaluations. However, they may also have unintended consequences that undermine evaluation quality. Program evaluators from Austin Independent School District will illustrate how evaluation policies affect program evaluation work in the district. Panelists will describe policy influences on district program evaluations and highlight theoretical and practical issues that arise in the work. The establishment of district evaluation priorities and the evaluations of the district's teacher pay-for-performance program, programs funded by the American Recovery and Reinvestment Act of 2009 (ARRA), and externally provided programs operating within the district will be used as illustrations. The panelists' presentations will set the stage for a collegial discussion about conducting quality evaluation work that is characterized by integrity, accuracy, and usefulness within a policy-driven environment.
Setting Evaluation Priorities in a Public School District: Who Decides What Gets Evaluated?
Karen Looby, Austin Independent School District, karen.looby@austinisd.org
The Department of Program Evaluation (DPE) in Austin ISD established a process by which district stakeholders identify evaluation priorities to ensure that resources are available to conduct high quality program evaluations and to ensure that the evaluation work will be used to inform decision-making and improve educational practices. In this process, the DPE staff have increased the rigor of their work while responding to evaluation policy requirements. In this portion of the panel discussion, Dr. Karen Looby will describe the process of engaging decision-makers in setting evaluation priorities and discuss how the quality of evaluation work has evolved in the district in terms of employing increasingly rigorous evaluation methods and creating a variety of formats to better communicate evaluation results to an assortment of stakeholders.
Policy Influences on the Evaluation of a Teacher Incentive Pay Program and Program Decision Making
Lisa Schmitt, Austin Independent School District, lschmitt@austinisd.org
In 2006, the Austin ISD Board approved a revision of policy to include differentiated pay that considers performance. The AISD REACH program, currently funded with both local and grant allocations, focuses on developing high quality teachers and principals to reach high student achievement. Potential future funding includes a voter approved local tax rate increase, federal Teacher Incentive Funds, and state District Awards for Teacher Excellence (DATE). A Steering Committee guides and oversees the policies related to the pilot, and the Chamber of Commerce serves as a critical friend. The complex landscape of policy, stakeholders, funding sources, and reporting requirements creates a challenging environment for program evaluation. In this portion of the presentation, Dr. Lisa Schmitt will describe the ways district evaluators have navigated a variety of circumstances to provide accurate, usable, and appropriately targeted information to drive program decisions.
Addressing Challenges Associated With the Mandated Evaluation of Projects Funded by Federal Stimulus Dollars
Martha Doolittle, Austin Independent School District, marthad@austinisd.org
Austin ISD received federal stimulus funds (Title I, Part A and IDEA) for 2009-2011 that support multiple projects aimed primarily at improving student achievement and teacher quality. These funds are time-limited to support short-term or one-time projects to boost efforts that can be sustained when the funding runs out in 2011 (the "funding cliff"). While the state currently asks for quarterly updates on expenditures and on jobs created and saved by these grants, the district's staff, Superintendent, Board of Trustees, and the community want to know who is benefiting from the projects and how effectively these federal funds are being spent. From a program evaluation perspective, Dr. Martha Doolittle will describe the challenges of tailoring appropriate interim and long-term progress measures for each project, designing meaningful evaluation logic models that can be used across projects, and providing an efficient and effective means to communicate project results to key district decision makers.
Creating a Partnership: How a School District Helps Community Based Service Providers Evaluate Their Programs
Cinda Christian, Austin Independent School District, cchristian@austinisd.org
Austin ISD partners with a variety of community based service providers. These organizations conduct activities during and after school including tutoring, enrichment, individual and group counseling, case management, prevention programming, mental health services, etc. Many of these services are provided as components of grants received by the service agencies. As accountability policies established by grantors become more stringent, service providers are overwhelming the district’s capacity to complete ad-hoc data requests. Further, because service agencies define outcome variables differently, the aggregated data make it difficult for district stakeholders and grantors to compare various programs or service delivery methods across agencies. Consequently, DPE has engaged stakeholders, both in and out of the district, including granting agencies, program managers, and direct service providers in a dialog to develop standardized aggregate reports that can be used across purposes. Dr. Cinda Christian will discuss the pros and cons of using these reports for evaluation purposes.

Session Title: Enhancing the Quality of Evaluation Design, Data Collection, and Reports Through Peer Review
Think Tank Session 510 to be held in BOWIE A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Independent Consulting TIG
Presenter(s):
Sally Bond, The Program Evaluation Group LLC, usbond@mindspring.com
Discussant(s):
Sally Bond, The Program Evaluation Group LLC, usbond@mindspring.com
Courtney Malloy, Vital Research LLC, courtney@vitalresearch.com
Abstract: This think tank responds directly to two elements of this year’s conference theme of evaluation quality: (1) How is evaluation quality conceptualized and operationalized? (2) How do we ensure evaluation quality in our practice? Since AEA 2004, the Independent Consulting TIG has operated a Peer Review process for its members. Having established an effective process for reviewing evaluation reports, the current co-chairs of the IC TIG’s Peer Review propose to expand the service to include the review of evaluation designs and data collection tools. After a brief presentation about the existing framework for reviewing evaluation reports, the co-chairs will present two new draft frameworks for reviewing and providing feedback on evaluation designs and data collection tools. The purpose of the think tank is to invite comments on the new frameworks and refine them accordingly.

Session Title: Utilizing Evaluation Methods to Provide Quality Health Care Services to Underserved Populations
Multipaper Session 511 to be held in BOWIE B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Kevin E Favor,  Lincoln University, kfavor@lincoln.edu
Implementation Evaluation of HIV/AIDS Non-governmental Organizations' (NGOs) Legal Assistance Services in Brazil
Presenter(s):
Paula Vita Decotelli, National School of Public Health (ENSP/Fiocruz), paulavita@gmail.com
Marly Cruz, National School of Public Health (ENSP/Fiocruz), marly@ensp.fiocruz.br
Miriam Ventura, National School of Public Health (ENSP/Fiocruz), venturaadv@easyline.com.br
Abstract: There are currently 39 HIV/AIDS legal assistance services (LA) financially and technically supported by the Brazilian National STD/AIDS Program (NP). These services aim to reduce human rights (HR) violations and to promote HR for Brazilian people living with HIV/AIDS (PLWHA). The evaluation was conducted by analyzing official documents, reports, projects, and advertisements, and through interviews with NP representatives as well as coordinators of 5 selected LA. This made it possible to describe the state of HR promotion and protection for PLWHA in Brazil and to investigate to what extent this strategy has been implemented and is occurring as planned. Results show that lack of monitoring and reduced budget and personnel affect the continuity and expansion of this initiative. In addition, the LA are unable to reach PLWHA who live outside major cities or who do not belong to a strong network of connections. Therefore, only a small number of PLWHA in more vulnerable situations reach legal assistance services and have their rights realized.
Evaluation of an Entertainment-Education Intervention Targeting the Latino Spanish Speaking Community of Colorado: Challenges and Accomplishments
Presenter(s):
Mariana Enriquez-Olmos, Independent Consultant, marianaenriquez@hotmail.com
Cristina Bejarano, Independent Consultant, bejaranocl@gmail.com
Abstract: This presentation will describe the challenges associated with evaluating a Spanish-language soap opera aired in Colorado in 2009. Although the evaluation work started with the development of the intervention, the reality of television programming led to a very challenging evaluation. Despite changes in broadcast time because of sporting events, a shift in the schedule from monthly to weekly, and uncertainty about who the evaluation participants would be, we managed to complete a very successful evaluation. The presentation will describe how the evaluation addressed the uniqueness of the intervention, with the hope that other evaluators can learn from our experience and adapt it to similar projects in the entertainment world.
A Culturally Responsive Process: Using Sociocultural Theory as a Guide to Program Development and Evaluation
Presenter(s):
Dominica McBride, Center for African American Health, dmcbride@asu.edu
Abstract: Health disparities have plagued this nation for centuries. One reason for these lingering differences is the lack of cultural responsiveness in health programs. The present research study conceptualized a detailed process for developing a culturally responsive health program. Sociocultural Theory was used as a guide for the study, with a focus on an African American community. Sociocultural Theory requires the study of a community's culture, including its context, history, and its multiple facets. A review of the literature revealed three predominant cultural-historical factors in African American culture (religion/spirituality, racial socialization, and extended family). This research ascertained how a health program could respond to: 1) the culture, infusing said factors; 2) the history of the target community; 3) contextual pressures; and 4) members' micro-level ideas and needs. The proposed presentation will cover the process of culturally responsive program development and discuss how this process can be applied to program evaluation.
Out of the Crossfire: Evaluating Fundraising Materials for a Hospital-based Violence Intervention Program Serving Stigmatized Populations
Presenter(s):
Jennifer Williams, University of Cincinnati, jennifer.williams2@uc.edu
Nancy Rogers, University of Cincinnati, nancy.rogers@uc.edu
Brian Powell, University of Cincinnati, powellbb@mail.uc.edu
Abstract: During these challenging economic times, fundraising is difficult and time consuming. Donors are more careful about their contributions, and many variables influence their decisions to donate. One variable that donors consider is the "worthiness" of the recipient. When non-profit programs serve stigmatized populations, potential donors are reluctant to provide financial support. Consequently, understanding what motivates donors to give, in this case to Out of the Crossfire, a violence intervention program in Cincinnati, Ohio, that primarily serves African American male gunshot wound survivors, is critical to the development of program materials for fundraising appeals. Three fundraising appeals developed from marketing research, one altruistic, one egoistic, and one neutral, were evaluated in the field to determine which would result in the greatest donations. Presenters will explain the development and evaluation of the fundraising materials and how this information can be used to inform the fundraising efforts of programs serving underserved populations.

Session Title: Improving Quality of Programs and Evaluation: Examples From the Field
Multipaper Session 512 to be held in BOWIE C on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the
Chair(s):
Lennise Baptiste,  Kent State University, lbaptist@kent.edu
Discussant(s):
Wendy DuBow,  National Center for Women and Information Technology, wendy.dubow@colorado.edu
The Impact of Participant Feedback on Program Outcomes: A Program Evaluation Consideration
Presenter(s):
Candace Lacey, Nova Southeastern University, clacey@nova.edu
Barbara Packer-Mutil, Nova Southeastern University, packerb@nova.edu
Jennifer Reeves, Nova Southeastern University, jennreev@nova.edu
Abstract: Athletic coaches have long recognized the importance of feedback on performance. Translating this concept to the field of program evaluation, this presentation focuses on the role of feedback in survey participants' level of engagement/job satisfaction. Findings from a 3-year study conducted at Nova Southeastern University in collaboration with the Gallup Organization indicated that sharing feedback on the prior year's data played a significant role in increasing employee engagement scores. The presentation covers the method of data collection, findings, dissemination, and outcomes of this 3-year initiative to measure employee engagement.
Using Appreciative Inquiry Focus Groups to Engage Members in Planning for the National Network of Libraries of Medicine Middle Atlantic Region
Presenter(s):
Sue Hunter, New York University, sue.hunter@med.nyu.edu
Cynthia Olney, National Network of Libraries of Medicine, olneyc@coevaluation.com
Abstract: This paper will report on a focus group project conducted by a regional office of the National Network of Libraries of Medicine (NN/LM) that used an Appreciative Inquiry (AI) approach. Funded through the National Library of Medicine, NN/LM is a nationwide network of health sciences libraries and information centers (called “network members”) with the goal of advancing the progress of medicine and improving public health through equal access to health information. The focus groups were conducted to gather input from representatives of the network members supported through the NN/LM Middle Atlantic Region (MAR) to serve the health information needs of a four-state region (Delaware, New Jersey, New York, and Pennsylvania). This paper will highlight the process that was implemented and the advantages of the AI approach, including high levels of staff participation, efficient use of staff resources, and quality of the collected data.
Meta-evaluation Quality in Brazil: A Mamdani Hierarchical Fuzzy Inference System
Presenter(s):
Ana Carolina Letichevsky, Cesgranrio Foundation, anacarolina@cesgranrio.org.br
Thereza Penna-Firme, Cesgranrio Foundation, therezapf@uol.com.br
Abstract: This paper presents a meta-evaluation system that makes use of fuzzy sets and fuzzy logic concepts. It comprises a data collection instrument and a Mamdani (1974) hierarchical fuzzy inference system. The advantages of the proposed system are: the instrument, which allows intermediate answers; the inference process's ability to adapt to specific needs; and transparency, through the use of linguistic rules that help both the understanding and the discussion of the whole process. The rules are based on guidelines established by the Joint Committee on Standards for Educational Evaluation (1994) and also represent the views of Brazilian experts. In Brazil there is great concern about evaluation quality; however, the concept of meta-evaluation is new there. The system presented here can provide support to evaluators who may lack experience in meta-evaluation, which is the situation in some developing countries such as Brazil. A discussion of two case studies, both in the educational area, is included.
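To make the Mamdani terminology above concrete, the sketch below shows one minimal, single-level fuzzy inference step in Python. It is purely illustrative and is not the authors' hierarchical system: the 0-10 instrument scale, the triangular membership functions, and the example rules combining hypothetical "utility" and "accuracy" ratings into a "quality" judgment are assumptions invented for this example.

# Illustrative sketch of one Mamdani fuzzy inference step (hypothetical, not the authors' system).
# Inputs: two instrument ratings on an assumed 0-10 scale ("utility", "accuracy").

def tri(x, a, b, c):
    """Triangular membership function peaking at b; a == b or b == c gives a shoulder."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

TERMS = {"low": (0, 0, 5), "medium": (2, 5, 8), "high": (5, 10, 10)}

def fuzzify(score):
    """Map a 0-10 score to membership degrees in the linguistic terms."""
    return {term: tri(score, *abc) for term, abc in TERMS.items()}

# Hypothetical linguistic rules: IF utility is U AND accuracy is A THEN quality is Q.
# None means that antecedent is absent from the rule.
RULES = [
    (("high", "high"), "high"),
    (("high", "medium"), "medium"),
    (("medium", "high"), "medium"),
    (("low", None), "low"),
    ((None, "low"), "low"),
]

def infer_quality(utility, accuracy):
    u, a = fuzzify(utility), fuzzify(accuracy)
    # Mamdani firing strength: AND = min; rules with the same consequent aggregate via max.
    strength = {term: 0.0 for term in TERMS}
    for (u_term, a_term), out_term in RULES:
        w = min(u[u_term] if u_term else 1.0, a[a_term] if a_term else 1.0)
        strength[out_term] = max(strength[out_term], w)
    # Defuzzify with a centroid over the clipped output sets, sampled on [0, 10].
    num = den = 0.0
    for i in range(101):
        x = i / 10.0
        mu = max(min(strength[t], tri(x, *TERMS[t])) for t in TERMS)
        num += x * mu
        den += mu
    return num / den if den else 0.0

print(round(infer_quality(utility=8.0, accuracy=6.5), 2))  # crisp 0-10 "quality" score

A hierarchical system such as the one described above would chain several of these steps, feeding intermediate crisp or fuzzy outputs into higher-level rule bases.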

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Big Money, More Scrutiny: How to Forge Evaluator-Early Childhood Education Program Partnerships in Order to Produce Clear, Relevant, and Useful Data to Inform Policy and Practice
Roundtable Presentation 513 to be held in GOLIAD on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Advocacy and Policy Change TIG and the Research on Evaluation TIG
Presenter(s):
Marijata Daniel-Echols, HighScope Educational Research Foundation, mdaniel-echols@highscope.org
Abstract: As public attention to the importance of early childhood education has risen, so has the pressure for preschool programs to show measurable results. This focus on accountability translates into greater demand for research and evaluation projects. This larger context has led to more opportunities for evaluators and programs to partner in ways they may not have in the past. These partnerships can be both a point of strength and a challenge. Having clear expectations of what each partner stands to gain, stands to lose, and must contribute to the evaluation process is essential. This session will use examples from Head Start and state-funded preschool evaluation projects to explore lessons learned on how to forge successful evaluator-program partnerships that produce clear, relevant, and useful data to inform both policy and practice.
Roundtable Rotation II: A Study on the Indicator of High Quality Papers: The Case of the Chinese Academy of Sciences (CAS)
Roundtable Presentation 513 to be held in GOLIAD on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Advocacy and Policy Change TIG and the Research on Evaluation TIG
Presenter(s):
Haijun Zheng, Chinese Academy of Sciences, haijzheng@casipm.ac.cn
Zhongcheng Guan, Chinese Academy of Sciences, guan@casipm.ac.cn
Haiyang Hu, Chinese Academy of Sciences, hyhu@cashq.ac.cn
Bing Shi, Chinese Academy of Sciences, bshi@cashq.ac.sn
Abstract: The number of SCI papers is one of the most commonly used indicators in R&D evaluation. Theoretically, papers published in journals with high impact factors (according to JCR statistics) are of high quality. In the evaluation practice of CAS, papers in the top 15% of SCI journals as ranked by JCR are called "high quality papers". In this study, we first examine the consistency between highly cited papers and "high quality papers" in CAS, and the consistency between work with important social impact (e.g., rewards) and "high quality papers". Second, we describe the distribution of CAS papers among SCI journals by JCR rank and study how the distribution pattern changed before and after the indicator was adopted. Furthermore, we compare the pattern with that of other national research institutes. In this way, we can examine how adopting this indicator has affected the publishing behavior of CAS researchers.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Practices for Working With and Building Capacity of Local Evaluation Consultants in International Development
Roundtable Presentation 514 to be held in SAN JACINTO on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Elizabeth Hutchinson, Land O'Lakes International Development, erhutchinson@landolakes.com
Meredith Blair, Humanity United, mblair@humanityunited.org
Abstract: Many international development programs work with host-country external consultants who bring valuable localized knowledge and expertise in evaluation. Ensuring the quality of these evaluations and the mutual satisfaction of the partnership rests on thoughtful and thorough preparation. Successfully working with local evaluators encompasses two main approaches: 1) strong start-up systems and strategies and 2) a commitment to strengthening the capacity of local consultants as needed. Managing this process pays off in robust data collection and analysis, and it further strengthens local capacity, fosters sustainability, and ensures quality evaluations. This roundtable aims to provide an opportunity for participants to share valuable insights on the different challenges, limitations, practices, and opportunities that have emerged in their own work in international contexts. The discussion, facilitated by Land O'Lakes International Development and Humanity United, will include recommendations, practices, and lessons learned to improve the practice of working with local evaluators in international settings.
Roundtable Rotation II: Exploring Evaluation Quality in International Development Evaluation: An Examination of How International Development Organizations Issue and Contract Evaluations
Roundtable Presentation 514 to be held in SAN JACINTO on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Anne Cullen, Western Michigan University, anne.cullen@wmich.edu
Daniela Schroeter, Western Michigan University, daniela.schroeter@wmich.edu
Michele Tarsilla, Western Michigan University, michele.tarsilla@wmich.edu
Jim Rugh, Independent Consultant, jimrugh@mindspring.com
Abstract: Recent studies have shown that donor dominance of the international development evaluation process can pose serious limitations to the independence of evaluators. Specifically, rigid evaluation terms of reference (TOR) and requests for proposals (RFP) limit evaluators' ability to determine independently (a) how programs should be evaluated, (b) which evaluation methods are most appropriate for use, (c) how to sample stakeholders for interviews or consultations, and (d) how the evaluation is to be conducted. Moreover, in many cases, access to TORs/RFPs is limited to a select number of vendors/consultants. This session explores the implications of the issuing and contracting processes of international development evaluations for evaluation quality. As an example, we present the results of a 2010 study of TORs and RFPs issued by international development organizations. Presenters will pose a number of questions to roundtable participants to highlight strengths, weaknesses, and areas of improvement for international development evaluation contracting.

Session Title: Beyond the Classroom: Assessment in Non-Traditional Settings
Multipaper Session 515 to be held in TRAVIS A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Howard Mzumara,  Indiana University-Purdue University Indianapolis, hmzuymara@iupui.edu
A Look at the Efficacy of Guided Self-Placement for First-Year Writing Courses
Presenter(s):
Howard Mzumara, Indiana University-Purdue University Indianapolis, hmzuymara@iupui.edu
Abstract: To what extent are Guided Self-Placement (GSP) methods useful for placing incoming students in first-year writing courses? This presentation will describe and assess the efficacy of using a three-step GSP process for placing students in appropriate levels of first-year writing courses at a large Midwestern University. The presentation will include survey results based on data obtained from a total pool of approximately 14,000 respondents who completed the GSP Survey administered to cohorts of incoming students at a large Midwestern university. To delimit the scope of the presentation, however, detailed survey results including placement distributions for AY2009 student cohorts (N = 3,390) will provide a basis for group discussion. To facilitate an interactive discussion, participants will be asked to reflect on their knowledge or experiences and share their perspectives about the appropriateness and usefulness of GSP or versions of “Directed Self-Placement (DSP)” approaches for writing assessment and placement purposes.
Including Community: Student Development Through Civic Engagement
Presenter(s):
Amy Koritz, Drew University, akoritz@drew.edu
Melissa Sloan, Drew University, msloan@drew.edu
Jonathan Reader, Drew University, jreader@drew.edu
Abstract: Drew University is working to strengthen and redefine the liberal arts tradition by connecting classroom-based learning in the disciplines with knowledge-based action in the world. Our goal is to create civic engagement courses and activities that increase student civic development and learning, while also providing clear benefit to community partners and the larger society. Achieving this goal requires assessment strategies that encompass both student civic development and community benefit. Specifically, we examine the ability of program logic model approaches to community-university partnership assessment to add value to the Bringing Theory to Practice Toolkit of the Association of American Colleges and Universities and other instruments that focus exclusively on student development. Our approach focuses on linkages among student learning, community impact, and campus capacity-building for assessment. We examine the extent to which students' well-being is correlated with their participation in campus-community partnerships that also demonstrate positive impact on community issues and goals.
Creating the Global Student: Increasing Competency, Preparation, and Personal Growth of Students in a University International Certificate Program
Presenter(s):
Yuanyuan Wang, University of Pittsburgh, yuw21@pitt.edu
Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
Cara Ciminillo, University of Pittsburgh, ciminill@pitt.edu
Abstract: The University Center for International Studies (UCIS) is the framework supporting the University of Pittsburgh’s multidisciplinary international programs. UCIS attempts to supplement students’ intellectual, professional and personal development by instilling the values of international experience and understanding. Our evaluation efforts focus on alignment of UCIS activities with stated goals of the organization, by measuring the effectiveness of international certificate programs offered through six area-studies centers. Our on-line survey of students pursuing international certificates revealed that participation in international certificate programs had a positive impact on students’ international competency, professional preparation, and personal growth. Notably, our evaluation findings provide other international education programs with a model that might increase their capacity and success in nurturing global citizens.
Creating Valued Field Placement Feedback: Making the Forms Meaningful and Useful for Evaluators and Educators
Presenter(s):
Julia Williams, University of Minnesota, Duluth, jwillia1@d.umn.edu
Abstract: Every year, colleges and universities place many thousands of pre-service students in job sites as apprentices, to observe and to practice. Placements often require supervision from the university, and from cooperating professionals, and the feedback generated from constituents can be utilized in program evaluation. This project, initiated in 2007, attempted to address ineffective practices in an education department’s utilization of feedback and observation forms by creating a common instrument, aligned with the progress of teacher candidates, and specifically reflecting real expectations of cooperating classroom teachers. The project produced developmentally appropriate rubrics, platforms for substantive discourse, and increased inter-rater reliability. Increasingly helpful, specific, and valid inferences regarding program strengths and limitations were the result of assessment created collaboratively by practicing professionals and university faculty. The process and the product may have promising implications for program evaluation across many disciplines that include field placements as part of professional preparation.

Session Title: The Integration of Video and In Situ Simulation Practices to Evaluate Organizational Processes
Skill-Building Workshop 516 to be held in TRAVIS B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
William Hamman, William Beaumont Hospital, william.hamman@beaumonthospitals.com
Jill Stefaniak, William Beaumont Hospital, j_stefaniak@hotmail.com
Abstract: Continuing education is a mandatory requirement for many positions across a variety of industries. A key challenge in training and development is delivering that training through non-traditional means. As technological advances are made, simulation is becoming a recurring teaching method used for assessment. Utilizing different assessment tools to link training initiatives with appropriate goals and objectives, we have detailed a process for defining curricula targeted to trainees' developmental levels. These instructional strategies integrate innovative instructional design processes, delivered through low- and high-fidelity simulations, with traditional learning methodologies. These innovative methods will allow participants to analyze current training and debriefing practices, re-evaluate learning outcomes using in situ simulation to identify risk and process issues, and develop new plans to integrate activities and validate assessment metrics, enhancing learning and the transfer of learning and ultimately decreasing error to improve performance.

Session Title: Evaluation of Efforts to Create Safer Environments for Lesbian, Gay, Bisexual, and Transgender (LGBT) Youth
Multipaper Session 517 to be held in TRAVIS C on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
Joseph Kosciw,  Gay, Lesbian & Straight Education Network, jkosciw@glsen.org
An Evaluation of a National Leadership Development Program for LGBT Youth and Their Allies
Presenter(s):
Elizabeth Diaz, Gay, Lesbian & Straight Education Network, ediaz@glsen.org
Abstract: This presentation examines the effects of a leadership development program for LGBT youth and their non-LGBT allies. The year-long program was designed to equip secondary students to act as leaders in their communities with regard to challenging heterosexism, homophobia and transphobia. The evaluation explores whether the program supported sociopolitical development, civic engagement and socio-emotional well-being over time. The research design was quasi-experimental, utilizing pre- and post-program surveys, which were administered to program participants and a comparison group. Program participation was related to increases in civic engagement and community organizing over time, suggesting that the program provided opportunities and skills to engage in these activities. Program participation was not directly related to socio-emotional well-being, although the increase in community organizing was related to lower psychological distress and greater coping skills. In responses to open-ended items, participants reported increased leadership skill development, specifically in the areas of collaboration, project management, and communication.

Session Title: Ecologies of Collaboration in the Arts
Multipaper Session 518 to be held in TRAVIS D on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Min Zhu,  University of South Carolina, helen970114@gmail.com
Discussant(s):
Katie Steedly,  Steedly Consulting, k.steedly@rcn.com
Documenting Collaboration as a Process in an Art and Design Context
Presenter(s):
Ching Ching Yap, Savannah College of Art and Design, cyap@scad.edu
Tara Pearsall, Savannah College of Art and Design, tpearsal@scad.edu
Mary Taylor, Savannah College of Art and Design, mtaylor@scad.edu
Abstract: The Savannah College of Art and Design (SCAD) focuses on promoting an innovative and collaborative culture in all teaching and learning activities. To further enhance the faculty and students’ collaboration capacity, SCAD institutionalized its collaboration efforts by establishing a collaborative learning center to facilitate, sponsor, and support the implementation of collaborative initiatives by engaging faculty, students, and external partners in an art and design context. SCAD developed an evaluation plan with multiple forms of assessments to provide a comprehensive understanding of the relationship between the collaborative learning environment and students’ collaboration expertise development. To examine the usefulness of those tools, SCAD pilot-tested a set of three assessment tools designed to document the effectiveness of the collaborative initiatives and collaboration expertise development process. The purpose of this presentation is to share the information garnered from SCAD’s pilot-testing effort.
Evaluating Collaboration in Design Teams: A Multi-Method Approach
Presenter(s):
Robyn Richardson, Savannah College of Art and Design, rorich@mac.com
Dustin Larimer, Savannah College of Art and Design, dustinlarimer@gmail.com
Yushi Wang, Savannah College of Art and Design, strongwong17@hotmail.com
Abstract: At a time when collaboration is both a buzzword and a booming organizational initiative, we see it thriving in businesses, schools, and boardrooms, but is there a way to measure its quality and to quantify the success of collaborative efforts? Heightened interest in the nature of collaboration and its multifaceted components proved critical in driving this study of collaboration in design teams. The purpose of this paper is to report findings regarding key indicators of successful collaboration within design teams. The findings will be discussed in detail, as will the process of designing the study, data collection methods, visualization of data, and directions for further study.
Youth Education in the Arts: New Collaboration Model for Funders, Arts Organizations, and Evaluators
Presenter(s):
Lora Warner, University of Wisconsin, Green Bay, warnerl@uwgb.edu
Abstract: Evaluation played a central role in a multi-faceted new funding initiative of the Community Foundation for the Fox Valley Region, a "community catalyst grant" entitled "Youth Education in the Arts." Utilizing an empowerment evaluation model and an assessment of needs, the Foundation sought to build evaluation capacity in arts organizations so that they could articulate their benefit to kids (and to the community) and garner community support. A needs assessment raised awareness of the area's existing arts opportunities for youth and highlighted trends. Findings were released to the public in fall 2009. Final qualitative and quantitative assessment continues as to whether this unique collaboration between funder, nonprofit agencies, and evaluator led to increased awareness and, ultimately, stronger support for arts organizations. Results will be available in fall 2010.

Session Title: Challenges and Solutions in Implementing and Conducting Quality Evaluations on Children, Youth, and Families
Multipaper Session 519 to be held in INDEPENDENCE on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Margaret Polinsky,  Parents Anonymous Inc, ppolinsky@parentsanonymous.org
Field Study Evaluation of a Comprehensive Sexual Abstinence Program: Methodology Challenges, Approaches, and Implications
Presenter(s):
Virginia Dick, University of Georgia, vdick@cviog.uga.edu
Ann Peisher, University of Georgia, apeisher@uga.edu
Amy Laura Arnold, University of Georgia, alarnold@uga.edu
Robetta McKenzie, Augusta Partnership for Children Inc, rmckenzie@arccp.org
Katrina Davidson, Augusta Partnership for Children Inc, kaaron@arccp.org
Don Bower, University of Georgia, dbower@uga.edu
Abstract: This paper will examine the field study evaluation of a comprehensive abstinence education effort. A group-randomized cluster design was utilized to provide the most rigorous design possible in a real-world environment. The paper will focus on (1) the challenges of securing adequate power in a group-randomized design where the pool of potential clusters is limited (Murray, Varnell & Blitstein, 2003), (2) issues raised when randomization yielded significant variation between groups, and (3) how to statistically examine results indicating that intervention youth showed greater gains in key outcomes than control group youth. Methodological challenges encountered due to baseline variations between groups will also be examined using advanced statistical analysis. The authors will propose hypotheses to explain why the differences may exist and how those differences will be tracked over time.
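As background to challenge (1), the sketch below illustrates the standard design-effect logic by which even a modest intraclass correlation inflates the number of clusters needed; it is offered only as a hypothetical illustration of the issue Murray, Varnell, and Blitstein (2003) discuss, not the authors' actual power analysis, and all numbers are invented.

# Illustrative only: the variance-inflation ("design effect") logic that makes
# power hard to secure when the pool of potential clusters is limited.
import math
from scipy import stats

def required_clusters_per_arm(effect_size, icc, members_per_cluster,
                              alpha=0.05, power=0.80):
    """Approximate clusters needed per arm for a two-arm group-randomized trial."""
    deff = 1 + (members_per_cluster - 1) * icc          # design effect
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    n_individuals_per_arm = 2 * (z / effect_size) ** 2 * deff
    return math.ceil(n_individuals_per_arm / members_per_cluster)

# A small standardized effect with a modest ICC quickly exceeds a limited cluster pool.
print(required_clusters_per_arm(effect_size=0.25, icc=0.02, members_per_cluster=60))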
From Champions to Change: The Role of Quality Evaluation in Implementing a Child Welfare System Change Initiative
Presenter(s):
Margaret Richardson, Western Michigan University, margaret.m.richardson@wmich.edu
Jim Henry, Western Michigan University, james.henry@wmich.edu
Abstract: Broad community change often begins with the inspiration of community champions. A Michigan initiative to help communities understand the impact of trauma on children in the child welfare system is underway in six counties and a tribal community. This initiative has taken a grass roots participatory evaluation approach to system change with the aim of building capacity, changing practice, and influencing policy to include the impact of trauma on children in the child welfare system. Evaluation strategies have been developed in response to the resources and capacity of local communities. The role of the champion in each locale as central to implementation is discussed, as are evaluation methods to support and guide implementation. How to find champions and partner with them to implement and later to sustain system change is presented. Strategies learned through this initiative are summarized and presented for applicability to other related areas within children’s services.
Developing a Shared Understanding of Evaluation Quality: The Case of a Sexual Abuse Prevention Program
Presenter(s):
Beth Johnson, EMJ Associates, bjohnson@emjassociates.com
Abstract: A small, community-based organization is working to prevent sexual abuse through participatory theatrical performances in elementary schools, along with teacher training and crisis intervention services. From an initial focus on evaluating outcomes, the organization moved to a longer-term discovery process, involving multiple stakeholders in developing a theory of change and valid and reliable data collection instruments. This paper describes the collaborative process between the external evaluator and the organization that supported a shared, and evolving, understanding of the purposes of evaluation. The tension between truth and justice is manifest in the commitment to both a rigorous examination of program impact and to the essential mission of preventing sexual abuse. Examples of tools and the analyses of pilot data will be shared. Obstacles to quality will be discussed, including small budgets and a perceived demand to deliver numbers that prove program effectiveness.

Session Title: Does Sensemaker Make Sense? Evaluating Development Initiatives Through Narrative Capture and Tagging in Kenya and Latin America
Demonstration Session 520 to be held in PRESIDIO A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Systems in Evaluation TIG and the Qualitative Methods TIG
Presenter(s):
John Hecklinger, GlobalGiving Foundation, jhecklinger@globalgiving.org
Irene Guijt, Learning by Design, iguijt@learningbydesign
Abstract: John Hecklinger and Irene Guijt will demonstrate how GlobalGiving in one experiment, and the Centro Latinoamericano para el Desarrollo Rural (RIMISP) in another, explored the possibility of engaging community members, implementing organizations, researchers, donors, and grantmakers in a cost-effective, real-time evaluation effort of small-scale projects and policy-influencing research processes in the developing world. We will demonstrate how we used Cognitive Edge’s SenseMaker software on the ground in Africa and Latin America to capture and tag stories gathered from community members, researchers, and social change agents, enabling us to visually depict patterns of impact and change. Subsequent analysis led to strategic questions. Rooted in complexity theory, which looks at systems that are inherently unpredictable, and cognitive science, which considers how people make sense, this experiment explores how multiple perspectives illuminate underlying patterns when more traditional means of evaluation are not workable.

Session Title: Evaluation Capacity Building (ECB) Models, Measures, And Outcomes: Taking Stock to Forge Ahead
Panel Session 521 to be held in PRESIDIO B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Tina Taylor-Ritzler, Dominican University, tina.ritzler@gmail.com
Discussant(s):
Hallie Preskill, FSG Social Impact Advisors, hallie.preskill@fsg-impact.org
Abstract: Evaluation capacity building (ECB) has become an important process for organizations to respond to a myriad of accountability demands. As such, the ECB literature has grown to include models, measures and reported outcomes. This panel provides an analysis of what we know about current ECB efforts and identifies future directions for ECB research. The first presentation by Labin et al. reports the results of a research synthesis of the ECB literature. The second by Taylor-Ritzler et al. reports the results of a mixed-methods ECB model validation study and discusses a validated survey. The third presentation by Suarez-Balcazar et al. reports the results of a qualitative study that was conducted to further specify elements of the model presented in the second presentation and discusses implications of the study findings for future model validation efforts. Finally, discussant Hallie Preskill identifies implications of the presentations for current and future ECB research and practice.
Research Synthesis of ECB Literature: An Evidence-based Review
Susan Labin, Independent Consultant, susan@susanlabin.com
Jen Duffy, University of South Carolina, jenniferlouiseduffy@gmail.com
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
The pressure for evidence-based practice continues to burgeon and create expectations for organizations to engage in evaluation. This leads to a greater demand for evaluation capacity building (ECB) and for evaluating ECB. This study uses research synthesis, a core methodology used for current evidence-based reviews, to systematically code and assess the ECB literature. Existing ECB theory and frameworks provide the conceptual basis, and a logic model integrates the concepts and provides the causal structure for the study questions. Eighty empirical examples of ECB in the literature were reviewed and coded in an effort to systematically describe ECB strategies, how these strategies were evaluated, what outcomes were reported, and how contextual and implementation factors may have affected these strategies or their outcomes. The presentation builds on our previous research by focusing on synthesis findings related to the evaluation methods used and the variables associated with ECB outcomes.
Results and Implications of a Mixed-methods ECB Model Validation Study
Tina Taylor-Ritzler, Dominican University, tina.ritzler@gmail.com
Yolanda Suarez-Balcazar, University of Illinois at Chicago, ysuarez@uic.edu
Edurne García-Iriarte, Trinity College Dublin, edurne21@yahoo.com
The purpose of this presentation is to describe the development, validation results and implications of a mixed-methods ECB model. The presenters will summarize current ECB models and measures and show how their model and measures represent a synthesis of the literature. Specifically, synthesis model and measurement components include individual factors (awareness of the benefits of evaluation, motivation, and competence), organizational factors (leadership, learning climate and resources), cultural and contextual factors, and evaluation capacity outcomes (use of evaluation processes and findings). Measures include organizational document reviews and staff interviews and surveys. Model validation results differed for quantitative and qualitative methods. Specifically, quantitative results revealed a model that included individual, organizational and outcome factors, whereas qualitative results showed cultural and contextual factors to also be important. The implications of the study findings for further developing a synthesis model and measure of ECB will be discussed.
Using Qualitative Methods to Further Specify Contextual and Cultural Elements of ECB Processes
Yolanda Suarez-Balcazar, University of Illinois at Chicago, ysuarez@uic.edu
Tina Taylor-Ritzler, Dominican University, tina.ritzler@gmail.com
Several evaluation scholars (e.g., Cousins et al., 2007; Preskill & Boyle, 2009), along with the results of the second presentation in this panel, have noted that cultural and contextual elements affect ECB processes. However, the specific elements of culture and context, and how these affect ECB processes, have not yet been adequately specified. As a result, the presenters conducted a review of the evaluation literature to assess how context and culture have been described and assessed. They will describe the literature in this area and the results of a qualitative study in which staff from community-based organizations who were responsible for evaluation activities were asked to discuss contextual and cultural factors that impact their capacity to document their programs. The presenters will also describe how the study results will be incorporated into their ECB synthesis model and measure, and the next steps for validating the revised model.

Session Title: Evaluating National Substance Abuse Prevention Programs
Multipaper Session 522 to be held in PRESIDIO C on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Trena Anastasia,  University of Wyoming, tanastas@uwyo.edu
Effects of the Strategic Prevention Framework State Incentives Grant (SPFSIG) on State Prevention Infrastructure in Twenty-Six States
Presenter(s):
Robert Orwin, Westat, robertorwin@westat.com
Alan Stein-Seroussi, Pacific Institute for Research and Evaluation (PIRE), stein@pire.org
Jessica Edwards, Pacific Institute for Research and Evaluation (PIRE), jedwards@pire.org
Ann Landy, Westat, annlandy@westat.com
Abstract: The U.S. Center for Substance Abuse Prevention's Strategic Prevention Framework State Incentives Grant (SPFSIG) is a national public health initiative to prevent substance abuse and its consequences. Twenty-six participating states used a data-driven planning model to allocate resources to 450 communities, which in turn launched over 2,000 intervention strategies to target prevention priorities in their populations. An additional goal was to build states' prevention capacity and infrastructure to facilitate communities' selection and implementation of intervention strategies. This paper addresses the state infrastructure goal: (1) was it achieved, and (2) what contextual and implementation factors were associated with success? Results showed significant improvement in most infrastructure domains. Preliminary multivariate analyses showed baseline infrastructure levels to be highly predictive of final levels, but mediating effects of implementation were more ambiguous. Analyses of the reasons for change across domains and, more broadly, the contextual and implementation factors associated with success are also discussed.
Evaluating Large Scale Technical Assistance Centers: The Case of the Center for Substance Abuse Prevention's Centers for the Application of Prevention Technologies
Presenter(s):
Tom James, University of Oklahoma, tjames@ou.edu
Wayne Harding, Social Science Research and Evaluation, wharding@ssre.com
Abstract: Drug, alcohol, and tobacco use among youth continue to be a major public health concern in most states and communities. In 1997, five regional Centers for the Application of Prevention Technologies (CAPTs) were initiated by SAMHSA’s Center for Substance Abuse Prevention. These regional centers help bridge the gap between research and practice by assisting states and community organizations in applying the latest evidence-based knowledge to prevention policies, programs, and practices. The CAPTs are part of a large federal effort to provide support and assistance to the public and federal grantees through technical assistance centers. Based on the collective experience of the five CAPT evaluators, this paper reports on lessons learned and provides guidelines for designing and implementing evaluations of large scale, federally-supported regional technical assistance centers.
Implementing An Evidence-based Prevention Program Nationally Using A Multi-tier Approach: Helping Youth Stay on Track
Presenter(s):
Melissa Rivera, National Center for Prevention and Research Solutions, mrivera@ncprs.org
Scott Steger, National Center for Prevention and Research Solutions, ssteger@ncprs.org
Abstract: In their quest for an effective drug prevention curriculum, schools and communities nationally have found that the Stay on Track program offers the ability to employ an evidence-based program that is not only effective, but also flexible and culturally appropriate. This session will address the Stay on Track program’s evolution, its adaptability, and the quality measures employed to ensure the efficacy of the curriculum on a national basis. Presenters will outline key components for improved program sustainability and will provide best practices and innovative approaches that have been implemented to enhance programmatic outcomes. Assessment of the impact of unique implementation characteristics, such as those of the students, implementers, and schools, will also be addressed. A snapshot of the multi-tier approach employed throughout the evaluation cycle will be provided.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Cracking Black Box Health System Performance Evaluations: Potential Practices From Field Applications of Management Oriented Evaluation Models
Roundtable Presentation 523 to be held in BONHAM A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Theories of Evaluation TIG and the Health Evaluation TIG
Presenter(s):
Jacob Kawonga, Management Science for Health, jkawonga@gmail.com
Abstract: Objective: The objective is to demonstrate field-level applications of management-oriented evaluation models that have the potential to improve health systems performance evaluation. Design: Exploratory, literature-based study. Results: Cases of field-level applications of management-oriented evaluation approaches in Malawi, Uganda, Kenya, Tanzania, Rwanda, and Namibia point to potential approaches with the capacity to demonstrate evidence of effective health systems strengthening interventions, a challenge which has not been resolved by expenditure-based evaluation approaches.
Roundtable Rotation II: Towards Translational Process Evaluation: Implementation, Fidelity, Integration, and Sustainability – A Roundtable Discussion
Roundtable Presentation 523 to be held in BONHAM A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Theories of Evaluation TIG and the Health Evaluation TIG
Presenter(s):
Oliver Massey, University of South Florida, massey@fmhi.usf.edu
Abstract: In the last decade behavioral health researchers and practitioners have come to recognize the critical importance of the use of service interventions that have established evidence of their efficacy. Unfortunately, it is now recognized that effective programs are not always readily adopted, and that there are significant gaps in the translation of theoretically sound best practices into workable programs in the field. The translation of research into practice involves recognizing and solving complex problems that deal with the technology of implementation. For evaluators, there are significant leverage points for a renewal of the value of process evaluation interpreted through the lens of implementation science. In this roundtable I will briefly review issues in translational science and its relevance for process evaluation. The roundtable will then provide an opportunity to discuss and explore potential roles for evaluators in the explicit process related roles of implementation, fidelity, integration, and sustainability.

Session Title: Aligning Priorities of Diverse Stakeholders Using Collaborative Evaluation Planning
Think Tank Session 524 to be held in BONHAM B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Katie Zaback, JVA Consulting LLC, katie@jvaconsulting.com
Discussant(s):
Randi Nelson, JVA Consulting LLC, randi@jvaconsulting.com
Nancy Zuercher, JVA Consulting LLC, nancy@jvaconsulting.com
Julia Alvarez, JVA Consulting LLC, julia@jvaconsulting.com
Abstract: In this Think Tank session, participants will explore the challenges of planning a multipurpose evaluation that meets the needs of diverse stakeholders, specifically the needs of multiple funders. The chair will present a case study and ask participants to engage in collaboration and consensus building to develop an evaluation plan that is responsive to the needs of all parties and succeeds in measuring the outcomes of the initiative. Session participants will join facilitated breakout groups that represent individual stakeholders—a nonprofit parent organization, statewide affiliates and various funders—and will be asked to design the beginning phases of an evaluation plan. Breakout groups will present their plans to the larger audience, and then collaborate to incorporate all of the small group evaluation plans into one. The session concludes with attendees sharing their own experiences designing high quality multipurpose evaluations that meet the needs of multiple audiences.

Session Title: Evaluating Science, Technology, Engineering, and Mathematics (STEM) Initiatives in K-12 Education
Multipaper Session 525 to be held in BONHAM C on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
James P Van Haneghan,  University of South Alabama, jvanhane@usouthal.edu
Discussant(s):
Tom McKlin,  The Findings Group LLC, tom.mcklin@gmail.com
Planning Evaluations for Discovery Research K-12 (DR K-12) Design Projects
Presenter(s):
Kathleen Haynie, Haynie Research and Evaluation, kchaynie@stanfordalumni.org
Abstract: The Discovery Research K-12 (DR K-12) program seeks to enable significant advances in preK-12 learning of the STEM disciplines through the study of new resources, models, and technologies. This initiative seeks to meet immediate challenges and anticipate future educational opportunities. This paper will lay out a general approach to the evaluation of DR K-12 projects. A comprehensive approach to evaluation design can be guided by the expectations of the NSF and analysis of fundamental issues undergirding program evaluation. Based on best evaluative practices and project needs, DR K-12 evaluations can be carried out in five phases: logic modeling, definition of evaluative questions, design of the evaluation plan, data collection and analysis, and provision of evaluative information. DR K-12 projects have the potential to contribute significantly to the future of education; informed evaluations are of critical importance. This paper is intended to help prepare evaluators faced with this daunting task.
Evaluation Methodology and Results of the Building Science Teaching Capacity Project
Presenter(s):
Robert Owens, Washington State University, rwowens@wsu.edu
Mike Trevisan, Washington State University, trevisan@wsu.edu
Abstract: Considerable interest exists in STEM educational programs at local and national levels. Evaluation is central in these programs; educators and policy-makers focus on determining and documenting what works in STEM education. Evaluation operates at the national or program level and the local or project level. Reports of program-level evaluation can be broad in scope, containing little specific information regarding individual project evaluations. The Mathematics and Science Partnership (MSP) program is a STEM program funded by the Department of Education. Building Science Teaching Capacity (BSTC) is an MSP project in Washington State. Few published or publicly available evaluation reports exist for MSP projects nationally and locally. We present results of the evaluation of the BSTC project, including results from pretest-posttest and posttest-only measures of the impact of workshops and trainings. Teachers and administrators benefited from trainings. Evaluators of STEM projects should disseminate findings in various venues, including journals and conferences.
Longitudinal Evaluation of Project Lead The Way in Iowa: Using Interdisciplinary Collaboration as a Method of Quality
Presenter(s):
Melissa Chapman, University of Iowa, melissa-chapman@uiowa.edu
David Rethwisch, University of Iowa, david-rethwisch@uiowa.edu
Tom Schenk Jr, Iowa Department of Education, tom.schenk@iowa.gov
Frankie Laanan, Iowa State University, laanan@iastate.edu
Soko Starobin, Iowa State University, starobin@iastate.edu
Yi 'Leaf' Zhang, Iowa State University, lyzhang@iastate.edu
Abstract: We take an interdisciplinary approach to evaluate a state-wide implementation of a complex, multi-faceted Science Technology Engineering and Mathematics (STEM) program, Project Lead The Way (PLTW). Despite policy and state-funding to assist the growth of PLTW enrollment in Iowa’s secondary schools, there is little evaluation evidence available about this secondary engineering program. We use the Program Evaluation Standards to guide our work, particularly balancing the feasibility and accuracy standards. Our presentation will use the Program Evaluation Standards as a guide to discuss our current evaluation processes and our outcome evidence to-date, as well as lessons learned and directions for future improvements to our evaluation.
Measuring Change in Middle Schools Girls’ Knowledge and Perceptions of Science, Technology, Engineering and Math (STEM) Related to Intervention: Implications for School Reform
Presenter(s):
Carol Nixon, Edvantia Inc, carol.nixon@edvantia.org
Abstract: Art to STEM, funded by a National Science Foundation grant, aims to increase the rate of enrollment among girls in STEM high school academies in Metropolitan Nashville Public Schools. The project’s mixed-method evaluation includes several strategies to assess shifts in STEM- and career-related knowledge, attitudes, and perceptions of the future. This paper contrasts student assessment results over time, specifically highlighting the Draw An Engineer Test (DAET) and a life timeline exercise. While data were collected to evaluate the project, the baseline findings alone have implications when viewed within a broader, systemic context of school reform. More attention needs to be directed at the cross-cutting implications of single agency- or program-funded evaluations. The baseline data suggest possible negative implications for STEM education and career growth given school reform efforts, such as high school career academies, unless sufficient attention is given to extending school reform downwards into middle and elementary schools.

Session Title: Evaluating Literacy Curricula for Adolescents: Results From Three Years of Striving Readers
Panel Session 526 to be held in BONHAM D on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Stefanie Schmidt, United States Department of Education, stefanie.schmidt@ed.gov
Discussant(s):
David Francis, University of Houston, david.francis@times.uh.edu
Abstract: This panel will discuss the findings from three years of Striving Readers program evaluations from 5 sites across the country. Panel members will report findings based on experimental research designs that provide the most rigorous evaluations to date on a number of adolescent reading interventions. These independent evaluations provide a wealth of detailed information to policymakers and school administrators on the important, but under-researched area of adolescent literacy education. The papers being presented add significantly to our understanding of middle school literacy education and the potential for several intervention strategies to be effective. They also provide insights into the challenges of maintaining a high-quality experimental research design in the field. Despite substantial obstacles to conducting rigorous experiments in school settings, evaluators have been able to negotiate compromises that do not diminish the quality of their evaluation designs. The results advance our understanding of both adolescent literacy and practical research methodology.
Striving Readers: Results From Ohio
William Loadman, Ohio State University, loadman.1@osu.edu
One of the eight Striving Readers awards was made to the Ohio Department of Youth Services (DYS) to investigate enhancing the reading ability of incarcerated youth in the state of Ohio. DYS operates high schools in seven correctional facilities serving approximately 1,300 youth, with an average stay of 10.5 months. Approximately 50% of the youths are reading significantly below grade level at intake. Using a randomized controlled trial design, the evaluation has found that students in Read 180 for four consecutive terms gained approximately 75 Lexile points more than a randomly equivalent comparison group on the Scholastic Reading Inventory (significantly beyond the rate of growth historically achieved by these youths). The analysis used the CAT as a covariate and then an HLM longitudinal analysis to examine growth over the nine data points.
Striving Readers: Results From Newark
Jennifer Hamilton, Westat, jenniferhamilton@westat.com
Matthew Carr, Westat, matthewcarr@westat.com
Newark’s Striving Readers program aims to improve the reading skills of struggling students by implementing Scholastic’s Read 180 curriculum, which utilizes adaptive and instructional software, high-interest literature, and direct instruction. The evaluation used a cluster randomized design that assigned 19 middle schools either to the Read 180 curriculum or to the district’s standard literacy curriculum. Using student-level data, a two-level HLM regression analysis was employed to examine whether exposure to Read 180 had an impact on students’ vocabulary, comprehension, and language arts scores on the SAT10. Students were divided into six analytic groups based on their grade level and whether they had received one, two, or three years of treatment. Preliminary analyses show significant effects on all three literacy subtests after a minimum of two years of treatment. Significant effects were also found for certain subgroups of students, including special education students, males, and African-American students.
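For readers who want a concrete picture of the kind of two-level model described above, the following is a minimal sketch, not the evaluators' actual specification; the file and column names (newark_striving_readers.csv, sat10_comprehension, treated, pretest, school_id) are hypothetical.

# A minimal sketch of a two-level HLM for a cluster-randomized design:
# students nested in schools, with treatment assigned at the school level.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("newark_striving_readers.csv")   # hypothetical student-level file

model = smf.mixedlm(
    "sat10_comprehension ~ treated + pretest",    # fixed effects
    data=df,
    groups=df["school_id"],                       # random intercept for schools
)
result = model.fit()
print(result.summary())                           # the 'treated' coefficient estimates the program effect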
Striving Readers: Results From the Mid-South
Debra Coffey, Research for Better Schools, coffey@rbs.org
In 2005 the Mid-South City Schools system, which serves about 120,000 students, was awarded one of eight Striving Readers grants. Eight MCS middle schools were selected for participation in this Striving Readers project, and all students who scored in the bottom quartile on the state reading assessment were selected to participate and randomly assigned to the control or experimental group. All students who participated received regular English/language arts instruction, and students in the experimental (treatment) group also received READ 180. The purpose of this presentation will be to describe Year 3 impacts of READ 180 on struggling readers’ reading achievement, as measured by the Iowa Tests of Basic Skills, and on their achievement in four core content areas, as measured by the state NCLB-related assessment. In addition to describing the experimental impacts, presenters will describe the results of treatment-on-the-treated (TOT) analyses completed using Bloom’s adjustment.
Striving Readers: Results From Portland
Bonnie Faddis, RMC Research Corporation, bfaddis@rmccorp.com
Margaret Beam, RMC Research Corporation, mbeam@rmccorp.com
The Oregon Striving Readers grant serves 4 high schools and 6 middle schools using the Content Literacy Continuum developed by the University of Kansas. The targeted intervention is the Xtreme Reading curriculum and the whole school intervention uses Content Enhancement Routines designed to help all students understand key content. Students were randomly assigned to treatment and control conditions within each school and grade. This presentation focuses on the impact of the targeted and whole school interventions over the past 3 years. A multilevel model was used to estimate the impact of the targeted intervention on spring GRADE scores, revealing significant effects for middle school but not high school students. The presenters will discuss implementation and its relationship to student outcomes. An Interrupted Time Series analysis was used to evaluate the effect of the whole school intervention, beginning 3 years prior to the Striving Readers grant through Year 3.
Results from Springfield/Chicopee
Kimberly Sprague, Brown University, kimberly_sprague@brown.edu
The Education Alliance at Brown University is conducting a randomized control trial (RCT) evaluating the effectiveness of two adolescent literacy interventions on the reading achievement of low performing students, as implemented by two school districts in western Massachusetts. Both teachers and students in five participating high schools were randomly assigned to one of three conditions: Read 180, Xtreme Reading, or the control condition. Year 3 results indicated positive effects on reading achievement for one of the two targeted interventions as compared to the control group. Despite challenges in specifying and monitoring implementation, patterns emerged in the treatment-only, or treatment-on-the-treated (TOT), group across implementation levels. Results are presented in the context of the implementation study results. The final five-year study results will provide administrators and educators with the information necessary to make informed choices about which program to select and how best to implement it within their schools.

Session Title: Challenges and Best Practices in Benefit Cost Studies of Research and Technology Programs
Panel Session 527 to be held in BONHAM E on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Rosalie Ruegg, TIA Consulting Inc, ruegg@ec.rr.com
Abstract: Using a case study of a renewable energy technology program, this panel of experts and practitioners will discuss challenges in benefit-cost studies and how they can best be met. Specific challenges to be addressed include extending the scope beyond projects to programs and portfolios of projects, inclusion of multiple types of benefits, development of the next-best alternative from which to calculate benefits, strategies for tackling attribution, and data collection and assumptions for credible analysis. The U.S. Department of Energy (DOE) has drafted a Guide aimed at incorporating best practices in benefit-cost analysis that was followed in four recent retrospective studies. One of these studies--a benefit-cost analysis of DOE's investment in solar photovoltaic energy systems--will be presented as a case study. Panelists who have a broad view of best practice in RTD evaluation in the U.S. and Europe, including benefit-cost analysis, will provide their opinions in a lively discussion.
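As background to the panel, the following is a minimal arithmetic sketch of the discounting at the core of any retrospective benefit-cost analysis; it is not the procedure prescribed by the draft DOE Guide, and all cash flows are hypothetical.

# Illustrative benefit-cost arithmetic: discount hypothetical benefit and cost
# streams to present value, then report NPV and the benefit-cost ratio.
def npv(flows, rate):
    """Discount a list of (year_offset, amount) cash flows to present value."""
    return sum(amount / (1 + rate) ** t for t, amount in flows)

benefits = [(t, 40.0) for t in range(1, 21)]               # e.g., annual energy-cost savings, $M (invented)
costs = [(0, 120.0)] + [(t, 5.0) for t in range(1, 21)]    # program investment plus O&M, $M (invented)

rate = 0.07  # a commonly used real discount rate for federal benefit-cost analysis
pv_benefits, pv_costs = npv(benefits, rate), npv(costs, rate)
print(f"Net present value ($M): {pv_benefits - pv_costs:.1f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")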
A Case Study: A Benefit-cost Analysis of the Department of Energy (DOE)'s Investment in Solar Photovoltaic Energy Systems
Alan O'Connor, RTI International, oconnor@rti.org
Ross J Loomis, RTI International, rloomis@rti.org
Fern M Braun, RTI International, fbraun@rti.org
Alan O’Connor, MBA, is a senior economist in the Environmental, Technology, and Energy Economics program at RTI International. His research focuses on the contributions of technology and innovation programs to society’s economic, public health, and environmental well-being. He has directed more than 20 economic studies for federal and regional clients such as NIST, EPA, DOE, CDC, and the states of Oregon, Maryland, and North Carolina. Alan will present the case study, a retrospective benefit-cost analysis of the U.S. DOE investment in solar photovoltaic energy systems.
View From Studies Done by the Advanced Technology Program
Rosalie Ruegg, TIA Consulting Inc, ruegg@ec.rr.com
Rosalie Ruegg, MA, MBA, is a consultant in RTD evaluation and managing director of TIA Consulting, Inc. Former Director of the Economic Assessment Office of the Advanced Technology Program (ATP), she is experienced both in conducting benefit-cost studies and in planning and overseeing these studies done by others. She co-authored the draft DOE Guide for benefit-cost analysis that was used in the recent DOE retrospective studies, including the case study presented in the Panel.
Perspectives From Europe
Isabelle Collins, Technopolis, isabelle.collins@technopolis-group.com
Isabelle Collins has, with colleagues, been evaluating research programmes in Europe for many years. Technopolis is the largest RTD evaluation firm in Europe. She has experience with programmes at national and international level across Europe, and looks at developing practice in the context of declining resources coupled with heightened political importance of research and innovation policy. Technopolis has just finished a major study of the impact of the European investment in ICT research through the Fifth and Sixth European Framework Programmes on Research and Technological Development, covering the period 1999 to 2007. To this can be added her experience with a range of studies carried out in related but separate fields including higher education and business support programmes, from which approaches and methodologies can be drawn.
View of the Chairman of the Expert Review Panel
Irwin Feller, Pennsylvania State University, iqf@ems.psu.edu
Dr. Feller is Professor Emeritus of Economics at Penn State and a Senior Fellow of the American Association for the Advancement of Science. He is an internationally recognized R&D evaluation expert and consultant to multiple federal agencies. He served as Chairman of the expert panel reviewing the DOE/EERE benefit-cost methodology guide and as expert reviewer of all four draft EERE benefit-cost studies in 2009-2010.

Session Title: The Weakest Link: Does Good Evaluation Lead to Good Decisions? How to Assess Your Organization
Skill-Building Workshop 528 to be held in Texas A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Presenter(s):
Thea C Bruhn, United States Department of State, bruhntc@state.gov
Abstract: GIGO or Garbage In-Garbage Out is a familiar notion: bad data result in ineffective or inappropriate follow-on actions. Does the antithesis apply? If they have good evaluation data, will leaders make good decisions? The use of data for strategic decision making is Business Intelligence (BI). BI also includes providing decision makers with intuitive methods for monitoring and analyzing data on an ongoing basis. Studies of what managers actually do, as opposed to what they are supposed to do, or what they say they do, have shown that even successful managers rarely, if ever, employ rational approaches. How does your organization measure up? In this workshop, participants will apply tools to look at decision-making in their organizations and the role and impact of evaluation data.

Session Title: Taking Control of Your Evaluation Career
Skill-Building Workshop 529 to be held in Texas B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
George Grob, Center for Public Program Evaluation, georgefgrog@cs.com
Ann Maxwell, United States Department of Health and Human Services, ann.maxwell@oig.hhs.gov
Abstract: This session will engage its participants in a series of exercises designed to help them understand the many possibilities of a rewarding lifelong career in the field of evaluation; to identify both the broad and specific skills, knowledge, and experience conducive to achieving it; to evaluate where they currently stand; and to set goals for their own personal future career development. The tools aim to open each participating evaluator’s vision to his or her roles and potential as an analyst/methodologist, substantive program expert, and manager/administrator/advisor. The session will explain how these skills naturally develop over a lifetime of evaluation practice, and how an evaluator can plan for and enjoy an expanding role of professionalism, influence, and stature over his or her career.

Session Title: Ensuring High-Quality Data Processes in Evaluation: Examples From Qualitative, Quantitative and Mixed Methods Work
Multipaper Session 530 to be held in Texas C on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the
Chair(s):
Jacklyn Altuna,  Berkeley Policy Associates, jacklyn@bpacal.com
Quality Control Methods With Quantitative Data in a Randomized Control Trial Study for a Teacher Professional Development Evaluation
Presenter(s):
Lorena Ortiz, Berkeley Policy Associates, lorena@bpacal.com
Abstract: Evaluating student outcomes based on standardized test scores and administrative student records is challenging under the best of circumstances. This presentation will address the challenges of data preparation and rigorous data analysis methods in a multi-year, multi-school-district random assignment evaluation of a teacher professional development program in 50 schools. As with many studies, appropriate methods for handling missing data and accounting for multiple comparisons are crucial to the quality of the evaluation. This presentation highlights the necessity of incorporating such processes into proposal writing and study designs. Topics to be discussed include data acquisition; data management and preparation; selection of appropriate missing data analyses; sensitivity analyses; implications of the number of research questions selected; and application of multiple comparison methods.
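As one concrete example of the multiple-comparison methods the abstract refers to, the sketch below applies a Benjamini-Hochberg false discovery rate adjustment to a set of invented p-values; the presenter's actual choice of method may differ.

# Illustrative multiple-comparison adjustment across several outcomes/contrasts.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.030, 0.048, 0.210, 0.640]  # one invented p-value per outcome
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for raw, adj, keep in zip(p_values, p_adjusted, reject):
    print(f"raw p={raw:.3f}  adjusted p={adj:.3f}  significant={keep}")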

Session Title: Recent Developments in Research and Development Evaluation: The Academic Side
Multipaper Session 531 to be held in Texas D on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Juan Rogers,  School of Public Policy Georgia Institute of Technology, jdrogers@gatech.edu
Discussant(s):
Juan Rogers,  School of Public Policy Georgia Institute of Technology, jdrogers@gatech.edu
Science Overlay Maps: A New Research Evaluation Tool
Presenter(s):
Alan Porter, Georgia Institute of Technology, alan.porter@isye.gatech.edu
Ismael Rafols, University of Sussex, i.rafols@sussex.ac.uk
Abstract: Science overlay maps offer an appealing evaluation tool that helps locate bodies of research activity among the disciplines. Over the past couple of years, several of us have refined alternative map formulations. These are now quite accessible to evaluators who want to experiment with them. We describe our overlay mapping approach, with its background and issues. We then illustrate how one can use these maps to contribute to assessments of particular research work, in conjunction with other empirical and expert methods. These maps can enrich benchmarking (comparing commensurable research activities), exploration of research community engagement, perception of collaborative patterns, and tracking of research knowledge diffusion (e.g., citing of research publications sponsored by a given program), as well as tracking changes in such dimensions over time or pursuant to an intervention. We illustrate with recent research program assessment work for the National Science Foundation.

Session Title: Clients Speak Out About Evaluation
Multipaper Session 532 to be held in Texas E on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Use TIG
Chair(s):
Susan Tucker,  Evaluation & Development Associates, sutucker1@mac.com
Discussant(s):
Lyn Shulha,  Queen's University at Kingston, lyn.shulha@queensu.ca
Exploring Evaluation Use: Multiple Representations of Evaluation Findings
Presenter(s):
Michelle Searle, Queen's University at Kingston, michellesearle@yahoo.com
Christine Doe, Queen's University at Kingston, christine.doe@queensu.ca
Lyn Shulha, Queen's University at Kingston, lyn.shulha@queensu.ca
Susan Elgie, Independent Consultant, selgie@sympatico.ca
Abstract: Evaluation as systematic inquiry has many objectives; an important one is promoting understanding and assimilation of results by the client(s). Complex evaluations benefit from the use of varied reporting methods. This research investigates the process of tailoring the representation of findings to facilitate deep understanding of the evaluation context. After a year of qualitative data collection, our four-member evaluation team recognized that varied representations of the findings served the client’s needs by speaking to multiple audiences. In this paper, we look at our documented efforts to promote evaluation use by creating purposeful and varied representations of the evaluation findings. We draw on data from the deliberations about the evaluation findings within our team and with our clients, a focus group with the clients and a stakeholder interview. This paper describes our process, provides samples and considers the implications for drawing on multiple representations in reporting evaluation findings.
Enhancing Evaluation Quality Through Client-Centered Reporting
Presenter(s):
Micheline Magnotta, 3D Group, mmagnotta@3dgroup.net
Abstract: Every high quality evaluation begins with understanding the client’s and stakeholders’ needs. Yet most evaluation reports are written with little attention paid to the report users and a great amount of attention paid to the technical aspects of quality, such as by following a report template that an organization has proudly “perfected” over the years. Adding to the problem, evaluators often perceive that following a template is justified because it increases cost-effectiveness, when in fact funds may be needlessly spent. This session begins with the User-Based perspective of quality and examines a spectrum of qualitative and quantitative report formats that have been effectively used in the field. Practical suggestions will help evaluators shift their definition of report quality from a “Product-Based View” to one that also embraces the “User-Based View” (Schwandt, 1990), resulting in evaluation reports that are higher quality because clients and stakeholders obtain greater use from them.
Measuring Evaluation Use and Influence Among Project Directors of State Gaining Early Awareness and Readiness for Undergraduate Programs Grants
Presenter(s):
Erin Burr, Oak Ridge Institute for Science and Education, erin.burr@orau.org
Jennifer Morrow, University of Tennessee, Knoxville, jamorrow@utk.edu
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Abstract: This paper describes the development of an instrument used to measure evaluation use, influence, and factors that have an impact on the use of evaluations among state project directors of the national Department of Education program, "Gaining Early Awareness and Readiness for Undergraduate Programs"(GEAR UP). The survey instrument was administered to 17 state project directors via online and paper-and-pencil surveys. Results indicated that GEAR UP project directors are using their program evaluation reports for instrumental, conceptual, symbolic, and process-related purposes. Project directors reported evaluation influence at the individual, interpersonal, and collective levels. Both implementation factors and decision and policy setting factors had an impact on project directors' decisions to use their programs' evaluations. The study’s limitations, implications, and planned future research will be discussed.
Chilean Evaluated Teachers Give Their Opinions About the National Teacher Evaluation System
Presenter(s):
Dante Cisterna-Alburquerque, Michigan State University, cisterna@msu.edu
Abstract: A study was conducted in a Chilean district to describe the opinions of evaluated teachers and school principals about the national teacher performance evaluation system and the ways they use the reported information about teachers’ performance. Evaluated teachers and principals value positively the clear procedures and adequate organization of the system. By contrast, they rate poorly the overwhelming tasks required to respond to the instruments and the quality of the reports. They also make little use of the information reported for teachers’ improvement. The results of this study suggest that this national, large-scale assessment, whose official purpose is formative, has focused on the quality of its operative procedures and its technical aspects at the expense of strengthening the use, detail, and pertinence of the reported information for its intended users.

Session Title: The Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) Experience: Exploring the Promise of Multi-site Evaluation
Panel Session 533 to be held in Texas F on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the College Access Programs TIG
Chair(s):
Yvette Lamb, Academy for Educational Development, ylamb@aed.org
Discussant(s):
Melissa Panagides-Busch, Academy for Educational Development, mbusch@aed.org
Abstract: Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) is a federally funded program that provides services to prepare low-income middle and high school students for entering and succeeding in postsecondary education. The Academy for Educational Development (AED) is working with multiple partnerships and a state agency to conduct external evaluations of GEAR UP programs. Data from across six school districts and 40 schools will be used to conduct a multisite analysis to better understand the feasibility of various approaches to program evaluation. The session begins with our conceptualization of a multisite evaluation of the GEAR UP program. The second paper presents an overview of GEAR UP programs managed by three agencies in three states. The final paper presents our process for data collection and analysis, including feedback we received from our clients in pursuing the multisite evaluation. The session will end with a discussion of key design questions.
Overview of the GEAR UP Multi-site Evaluation
Yvette Lamb, Academy for Educational Development, ylamb@aed.org
This paper presents our conceptualization of the GEAR UP multisite evaluation. Given that the programs across sites share similar goals and provide similar types of services, and given that we are conducting formative evaluation, we propose taking cluster evaluation as the overall evaluation approach. The paper begins by providing an overview of multisite evaluation, following Straw and Herrell's (2002) framework of multisite evaluations. It then moves to discuss why cluster evaluation is the most suitable evaluation approach, given the nature of the intervention the GEAR UP program provides. It will also discuss what types of questions multisite evaluations might be able to address, as well as the challenges of answering particular types of questions. The presenter brings her expertise in conducting evaluations of college access and community programs.
GEAR UP Programs: Similarities and Differences Across Sites
David Jochelson, Academy for Educational Development, djochels@aed.org
Susanna Kung, Academy for Educational Development, skung@aed.org
Arati Singh, Academy for Educational Development, asingh@aed.org
Three presenters will describe the three GEAR UP programs on which we will be conducting the multisite analysis. We present how the GEAR UP programs implemented by three agencies differ in the scope of services, specific outcome measures, populations served, and the contexts in which the services are provided. We also present site-specific evaluation questions, the data collection and analysis schedule, and the forms of reporting for each site-specific evaluation, so that the audience will have a concrete image of what types of data we are collecting for the site-specific evaluations. The three evaluators who work on the site-specific evaluations, and grantees (if they are able to participate), will present this section of the session to provide a comprehensive picture of the evaluation work, including the nature of collaboration between evaluators and program managers, evaluation use, and information needs at each site.
Designs for Cluster and Multi-site Evaluation for GEAR UP Program
Mika Yamashita, Academy for Educational Development, myamashita@aed.org
The final paper discusses our plans for the cluster evaluation, which we are currently designing. At this point, we plan to use data analysis techniques drawn from theory-based evaluation (Davidson, 2000) to acquire descriptions of how different contexts shape implementation of GEAR UP programs. We will also discuss several strategies for analyzing the outcome data collected from the three sites for the site-specific evaluations. We will report on the involvement of our clients in identifying evaluation questions and on our data analysis scheduling in relation to the site-specific evaluation work. Finally, we present how we plan to report findings to sites that have different evaluation and information needs. The presenter brings her expertise in qualitative analysis. Davidson, E. J. (2000). Ascertaining causality in theory-based evaluation. New Directions for Evaluation, 87, 17-26.

Session Title: Evaluation and Program Quality
Multipaper Session 534 to be held in CROCKETT A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Lisa Townsend,  University of New Hampshire, lisa.townsend@unh.edu
Organizational Impacts of Extension State Systems on Evaluation Practice in the Field
Presenter(s):
Alexa J Lamm, University of Florida, alamm@ufl.edu
Glenn D Israel, University of Florida, gdisrael@ufl.edu
Tracy Irani, University of Florida, irani@ufl.edu
Abstract: Very little research has been conducted regarding how the organizational structure of a state Extension system influences the individual evaluation behaviors of Extension professionals in the field. Organizational structure, as it relates to evaluation, is a complex topic encompassing multiple layers, including the broad mission and vision, reporting and leadership within the system, interaction patterns, and the individuals who do the work. By developing a comprehensive model that brings together the concepts of organizational change and individual planned behavior within the context of evaluation, a greater understanding can be gained of the theoretical underpinnings of organizational structure and its influence on individual behavior with regard to evaluation. With this framework, we can begin to explore how Extension systems can enhance the willingness of Extension professionals at all levels to engage in high quality evaluation practices.
Applying Extension Methodology in an African Education and Food Security Program: Lessons Learned About Maintaining Quality From a Distance
Presenter(s):
Mary Crave, University of Wisconsin, crave@conted.uwex.edu
Abstract: This session will review a 5-year Teacher Training for School Gardens pilot program in two African countries, funded by the US Agency for International Development (USAID) and implemented by the US Department of Agriculture (USDA) in partnership with a land-grant university. USDA is interested in promoting food security using school gardens as a delivery method. USAID is interested in improving basic education using school gardens as a learning laboratory. The project included developing curriculum materials, training teachers, and conducting monitoring and evaluation visits to each participating school. Program outcomes have implications for the many countries around the world that are interested in teaching vocational and food security skills to youth while improving students’ science literacy. Tension between USDA and USAID expectations and goals, some challenges and benefits of applying extension education methods in different cultures, and lessons from monitoring partner input and program quality from a distance will be discussed.
Assessing Youth Program Quality in 4-H Club Settings: Outcomes from Four Arizona Counties
Presenter(s):
Amy Schaller, University of Arizona, aschalle@email.arizona.edu
Christine Bracamonte Wiggs, University of Arizona, cbmonte@email.arizona.edu
Lynne Borden, University of Arizona, bordenl@ag.arizona.edu
Abstract: There is strong evidence that participation in a high quality youth program can promote a young person’s positive development. In addition to developing and implementing high quality youth programs, it is imperative that programs also undergo assessment to improve design and implementation, to create accountability, and to measure outcomes and impacts. This paper will describe findings from a multi-site pilot study, conducted by the University of Arizona-Cooperative Extension, assessing youth program quality among diverse 4-H club settings across Arizona. The paper will highlight the process of developing the youth program quality survey; address the incorporation of technology (radio frequency clickers) to aid in data collection; describe preliminary data findings from the study, including descriptive and inferential statistics; and discuss implications of the findings and validated instrument for the field of positive youth development.
A Systems Perspective on the Challenges in Finding Measures for High-Quality Evaluation
Presenter(s):
Monica Hargraves, Cornell University, mjh51@cornell.edu
Abstract: Extension systems and programs face increasing pressure for accountability, evidence of impact, and continuing program development and improvement. Efforts to develop high-quality measures and to make them more accessible to educators are a vital contribution to this situation. However this increased access is only part of the solution. It is also important to improve decision-making regarding measure selection. Drawing on experience with the Systems Evaluation Protocol developed by the Cornell Office for Research on Evaluation, this paper examines various factors that come into play in determining how well an evaluation (and measure) “fit” a program. Knowing when and why to use well-established and standardized measures, and when to adopt or add alternative measurement strategies, is an essential determinant of evaluation quality. This presentation offers some decision-making guidelines.

Session Title: Teaching About Specific Aspects of Evaluation
Multipaper Session 535 to be held in CROCKETT B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
John Stevenson,  University of Rhode Island, jsteve@uri.edu
Teaching Qual Inside a Quant World
Presenter(s):
John Stevenson, University of Rhode Island, jsteve@uri.edu
Abstract: This paper presents my reasoning and my teaching methods for incorporating qualitative perspectives into a course on evaluation that is primarily quantitative in its approach. Engaging students in a dialogue that questions assumptions and offers opportunities for reflection can enhance learning and practice of quantitative evaluation.
Integrating Social Justice Into Evaluation Teaching: Opportunities and Strategies
Presenter(s):
Veronica Thomas, Howard University, vthomas@howard.edu
Anna Madison, University of Massachusetts, Boston, anna.madison@umb.edu
Abstract: This presentation will argue that social justice should be included in evaluation education as a fundamental value in evaluation practice. A social justice orientation will provide students with a perspective that enables them to challenge hegemonic ontological, epistemological, theoretical, and methodological practices in evaluation that diminish groups at the margins of society and normalize injustice. We will present four major areas where educators can intersect social justice and evaluation in classroom and field experiences: (a) theoretical knowledge, (b) methodological knowledge, (c) interpersonal knowledge, and (d) professionalism. Further, we will examine how a social justice orientation can be evident in pedagogical approaches and in professors’ articulation of students’ expected learning outcomes. Sample activities will be provided that educators can use to integrate social justice, evaluation theory, and methodology in graduate training in an effort to produce a more critical evaluator.
Developing Evaluation Reports That Are Useful, User-Friendly, and Used
Presenter(s):
Tamara M Walser, University of North Carolina at Wilmington, walsert@uncw.edu
Abstract: The purpose of this presentation is to share an instructional strategy I use for teaching graduate students in Educational Leadership how to develop evaluation reports that are useful, user-friendly, and used. The strategy is based on the concept of storytelling and includes: (a) creating multiple tables, graphs, and figures as part of the data analysis process to uncover the story the data tell; (b) outlining the story the data tell, using headings and subheadings to organize the layers of the story; (c) drafting a narrative story in clear and concise language; and (d) choosing the “illustrations” for the story, that is, the tables, graphs, and figures that best illustrate its key points. The presentation will include a brief review of the literature on evaluation reporting, a demonstration of the instructional strategy, and examples of student-developed evaluation reports.

Session Title: Evaluation Within Contested Spaces
Panel Session 536 to be held in CROCKETT C on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Ross VeLure Roholt, University of Minnesota, rossvr@umn.edu
Abstract: International, humanitarian, and other aid agencies require evaluation for accountability and program improvement. Increasingly, evaluation has to be undertaken in communities under conditions of violent division. There is practice wisdom about how to conceptualize and implement this work, but it is not easily accessible, nor is comparable guidance for conducting social research under such conditions. This panel will offer a public, professional space for describing, clarifying, and understanding this work and for suggesting practical strategies, tactics, and tools. Research on evaluation practice under these conditions will also be covered. A relevant bibliography will be distributed.
Doing Evaluation and Being an Evaluator in Violently Contested Spaces
Michael Baizerman, University of Minnesota, mbaizerm@umn.edu
Michael Baizerman has over 35 years of local, national, and international evaluation experience. Over the last seven years he has worked with governmental and non-governmental organizations in Northern Ireland, South Africa, Israel, Palestine, and the Balkan region to document and describe youth work in contested spaces and to develop effective evaluation strategies to document, describe, and determine outcomes of this work. He joins his experiences of doing evaluation in contested spaces to literature in evaluation, anthropology, and other social sciences, as well as in humanitarian work and peacebuilding, to provide an overview of doing evaluation and being an evaluator in violently contested spaces. Themes are located, positioned, and named, and then put into conversation with evaluation theory and practice in non-contested spaces.
Crafting High-Quality Evaluations in Contested Spaces: Lessons From the Field
Barry Cohen, Rainbow Research Inc, bcohen@rainbowresearch.org
Barry Cohen has been Executive Director of Rainbow Research, Inc. since 1998. He has 35 years of experience in research, evaluation, planning, and training in fields such as public health and eliminating health disparities; alcohol, tobacco, and other drugs; violence prevention; after-school enrichment; school desegregation; systems advocacy; mentoring; social services; and welfare reform. His case study of evaluating programs in a contested space in the United States provides insights into how evaluation is shaped by local conditions and what evaluators must do to craft high-quality evaluation studies under these conditions.
Show Me Your Impact: Evaluating Transitional Justice in Contested Spaces
Colleen Duggan, International Development Research Centre, cduggan@idrc.ca
This paper discusses some of the most significant challenges and opportunities for evaluating the effects of programs in support of transitional justice, the field that addresses how post-conflict or post-authoritarian societies deal with legacies of widespread human rights violations. The discussion is empirically grounded in a case study that assesses the efforts of the International Development Research Centre (IDRC) and one of its Guatemalan partners to evaluate the effects of a museum exposition that is attempting to recast historic memory and challenge racist attitudes in post-conflict Guatemala. The paper argues that despite the increasing trend to fund transitional justice programs, many international aid donors are stuck in traditional and arguably orthodox paradigms of program evaluation. The case study experience indicates that there is no perfect evaluation model or approach for evaluating transitional justice programming, only choices to be made by evaluators and evaluands.
Being Practical, Being Safe: Doing Evaluations in Contested Spaces
Ross VeLure Roholt, University of Minnesota, rossvr@umn.edu
Ross VeLure Roholt has lived and worked in Belfast, Northern Ireland, and in Ramallah and Gaza, Palestine. During this time, he designed and worked on several evaluation studies of youth programs, youth services, museum exhibitions, and quality assurance. His evaluation experience under violence and post-violence conditions will be described and joined to other evaluation studies conducted under similar conditions, gathered from practitioners and researchers for a special issue of Evaluation and Program Planning co-edited by Ross VeLure Roholt and Michael Baizerman. The issue focuses on describing the challenges and strategies for evaluation work under these conditions, using case studies and analytic essays. In particular, he discusses how contested spaces raise questions about the evaluation enterprise.

Session Title: Assessing the Health of and Improving the Evaluation Function Across the Government of Canada Through the Management Accountability Framework (MAF)
Panel Session 537 to be held in CROCKETT D on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Anne Routhier, Treasury Board of Canada, anne.routhier@tbs-sct.gc.ca
Abstract: In 2003, the Management Accountability Framework (MAF) was created by the Treasury Board of Canada Secretariat (TBS) to define and outline the expectations of senior public service managers for good management. The MAF is structured around 10 key elements comprising 21 Areas of Management (AoMs). These elements are assessed on a periodic basis (either annually or tri-annually) and the results are reported to Deputy Heads of departments to assist them in identifying priority management areas. In this panel presentation, the TBS Centre of Excellence for Evaluation (CEE) will present its methodology and experience in assessing MAF AoM 6 (Quality and Use of Evaluation) in departments and agencies of the Government of Canada and will be joined by representatives from Industry Canada, Canadian Heritage, and Indian and Northern Affairs Canada who will share their experience in undergoing and utilizing MAF to improve the evaluation function in their departments.
Overview of the Government of Canada’s MAF Area of Management 6 (AoM6, Quality and Use of Evaluation) and Its Use in Assessing the Health of the Evaluation Function
Anne Routhier, Treasury Board of Canada, anne.routhier@tbs-sct.gc.ca
Brian Moo Sang, Treasury Board of Canada, brian.moosang@tbs-sct.gc.ca
This presentation will provide background and context for participants concerning the goals and structure of the Government of Canada’s Management Accountability Framework (MAF) process to assess Area of Management 6, Quality and Use of Evaluation (AoM6). Further, this presentation will provide an overview of the MAF-AoM6 assessment process that the Treasury Board of Canada Secretariat’s Centre of Excellence for Evaluation (TBS-CEE) uses in its periodic ‘MAF Reviews’ of federal departments and agencies. The presentation will include a description of TBS-CEE’s MAF assessment methodology, which addresses four overarching assessment criteria: 1) quality of evaluations, 2) evaluation coverage, 3) neutrality of evaluation, and 4) utilization of evaluation. As TBS-CEE is the policy centre and functional leader for evaluation in the Canadian federal government, the presenter (the Senior Director of TBS-CEE) is uniquely situated to provide an overall perspective on the impact of MAF on evaluation quality and on the overall health of the evaluation function.
Department of Canadian Heritage’s Experience With MAF to Inform Continuous Improvement of the Evaluation Function
Paule-Anny Pierre, Department of Canadian Heritage, paule-anny.pierre@pch.gc.ca
The Department of Canadian Heritage is responsible for policies and programs that help all Canadians participate in their shared cultural and civic life. It is specifically responsible for formulating and implementing cultural policies related to copyright, foreign investment, and broadcasting, as well as policies and programs related to arts, heritage, official languages, sports, state ceremonial and protocol, and Canadian symbols. Main activities involve funding communities and organizations to promote the benefits of culture, identity, and sport for Canadians. Measuring the impact of these policies and programs is challenging given the nature of the expected results. This presentation will provide participants with an overview of how MAF has contributed to enhancing the department’s evaluation function with regard to the quality and use of evaluations. The presenter will highlight challenges in meeting MAF expectations, as well as MAF’s influence on the planning, continuous improvement, and increased recognition of the evaluation function within the department.
Building Evaluation Excellence at Indian and Northern Affairs Canada: Using the MAF as a Roadmap
Tamara Candido, Indian and Northern Affairs Canada, tamara.candido@ainc-inac.gc.ca
Judith Moe, Indian and Northern Affairs Canada, judith.moe@ainc.gc.ca
Indian and Northern Affairs Canada (INAC) has one of the broadest and most complex mandates in the Canadian federal context. The mission seeks to improve social and economic well-being, to develop healthier, more sustainable communities, and to ensure full participation of Canada’s Aboriginal Peoples (First Nations, Inuit, and Métis) and Northerners in Canada’s political, social, and economic development. The mandate is shaped by centuries of history, statutes, negotiated agreements, and legal decisions, and is adapted to unique demographic and geographic challenges. The presentation will focus on progress against a four-point strategy toward evaluation excellence using the MAF as a roadmap. This strategy, implemented over a period of four years, includes: strengthening evaluation and performance measurement capacity (e.g., neutrality, technical expertise); elevating evaluation quality and expanding evaluation coverage; building relationships of trust and collaboration; and transferring evaluation knowledge. The presentation will also include reflection on key challenges.
Use of the MAF at Industry Canada in Improving the Evaluation Function
Kim Bachmann, Industry Canada, kim.bachmann@ic.gc.ca
Beate Schiffer-Graham, Industry Canada, beate.schiffer-graham@ic.gc.ca
Industry Canada's mandate is to help make Canadian industry more productive and competitive in the global economy. The department oversees a variety of programs and activities to support this mandate, ranging from the creation of marketplace frameworks and regulations to the delivery of programs in support of economic development. Management Accountability Framework (MAF) assessments are being used at Industry Canada to identify not only management practices that need improvement, but also what we are doing well. In our presentation we will discuss how the MAF assessment has assisted the evaluation function in refining our internal practices, benchmarking our progress, and engaging in discussions with the broader evaluation community.

Session Title: Assessing the Quality of Research Instruments Using Cognitive Lab Methodology: A Practical Discussion and Lessons Learned
Demonstration Session 538 to be held in SEGUIN B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Joanna Gilmore, University of South Carolina, jagilmor@mailbox.sc.edu
Heather Bennett, University of South Carolina, bennethl@mailbox.sc.edu
Karen Price, University of South Carolina, pricekj@mailbox.sc.edu
Abstract: When considering “evaluation quality” it is important to critically analyze the appropriateness of research instruments used to garner data on a particular subject. One method employed by researchers from the University of South Carolina to review instruments is the cognitive lab methodology. Cognitive labs involve asking individuals to report their decision-making processes and analyzing the resulting verbal data to garner information about the cognitive processes that an individual uses to complete a task (Van Someren, Barnard, & Sandberg, 1994). This demonstration will describe the purpose of the cognitive labs, how cognitive labs were employed in two projects, and the methods by which data were collected and analyzed. The presenters will also share lessons learned in conducting cognitive labs. Additionally, participants will be provided with an opportunity to observe a “mock” cognitive lab and will be invited to pose questions and comments throughout the presentation.

Session Title: Complementary Approaches to Evaluating Social Safety Nets at the World Bank
Panel Session 539 to be held in REPUBLIC A on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Cheryl Gray, World Bank, cgary@worldbank.org
Abstract: This session will illustrate the World Bank’s Independent Evaluation Group’s (IEG) approach to evaluating the World Bank’s support to social safety nets world-wide. Specifically, the panel will demonstrate the various building blocks of the evaluation, the main themes of questions posed and the approaches used in addressing the questions. The presentations will explore how different qualitative and quantitative methods complement each other to provide a rich set of evidence.
Evaluating Social Safety Nets at the World Bank: Evaluation Questions and Approaches
Jennie Litvack, World Bank, jlitvack@worldbank.org
This presentation will provide an overview of the evaluation of the World Bank’s support for social safety nets. It will lay out how the main themes of the evaluation questions were formed and will discuss the various methods used and the rationale for the approaches taken to conduct the study. The objective of this presentation is to establish how complementary approaches help to derive the evaluative findings and conclusions, and specifically how the use of multiple approaches strengthened the evaluation design. The presentation will cover a range of approaches, including lending portfolio analysis, a review of analytical work, in-depth special papers, country case studies, and impact evaluations. Ms. Litvack is the task manager for the overall evaluation.
Evaluating Social Safety Nets at the World Bank: Country Case Studies – The Case of Jamaica
Victoria Monchuk, World Bank, vmonchuk@worldbank.org
The overall evaluation included 32 country case studies, with special studies in Turkey, Indonesia, Colombia, Ethiopia, and Jamaica. These cases assessed how Bank assistance supported the development of social safety nets (SSNs) in each country, the appropriateness of that support in relation to poverty and government resources, and the outcomes of the support. This presentation is based on the Jamaica case study and includes in-depth evaluations of two Bank-supported projects, both of which had SSN objectives. One project supported SSN reform and the formation of a conditional cash transfer program; the other supported the Jamaica Social Investment Fund in upgrading social infrastructure and creating temporary employment in poor communities. The presentation will show how the study evaluated the effectiveness of Bank support at the country level in Jamaica and how the use of case studies contributes to the overall assessment of Bank support for SSNs worldwide. Ms. Monchuk coordinated the case studies and undertook the Jamaica study.
Evaluating Social Safety Nets at the World Bank: Impact Evaluation for Assessing Sustainability of Program Effects – The Case of Colombia
Javier Baez, World Bank, jbaez@worldbank.org
The final presentation discusses how impact evaluations contribute to answering the major evaluation questions. It discusses the impact evaluation of “Familias en Accion”, a Colombian SSN program that transfers cash to the poorest households conditional on certain household behaviors. Previous evaluations of the program have found positive short-term impacts on a variety of intermediate outcomes. Little is known, however, about the impacts of the program in the medium and long term. This study uses a variety of quasi-experimental designs together with household surveys, poverty censuses, and administrative data to shed light on this issue. In particular, it investigates whether the positive effects attributed to the program in the short term also led to improvements in school achievement and cognitive development over time. Finally, the approaches employed in this study provide insights into different research strategies for long-term evaluations facing similar data constraints. Mr. Baez managed and undertook the impact evaluation work.

Session Title: Tools for Aligning National-Level and Local-Level Evaluations: Helping Grantees Evaluate Their Public Health Interventions
Skill-Building Workshop 540 to be held in REPUBLIC B on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Health Evaluation TIG
Presenter(s):
Shyanika Rose, Battelle Memorial Institute, rosesw@battelle.org
Joanne Abed, Battelle Memorial Institute, abedj@battelle.org
Carlyn Orians, Battelle Memorial Institute, orians@battelle.org
Linda Winges, Battelle Memorial Institute, winges@battelle.org
Abstract: Public health grant programs often encourage grantees to implement a range of interventions tailored to local needs. This presents a challenge to technical assistance providers charged with aligning local- and national-level evaluations. We present two tools that integrate varied strategies into a comprehensive intervention framework. One uses intervention pathways to categorize interventions into different types, pursuing multiple pathways toward a set of shared health outcomes. The other uses an intervention mapping matrix to categorize interventions by setting and type of change desired. Where setting and change type intersect, a “profile” can be accessed that contains ideas for evaluation questions, indicators, and data sources. Both approaches facilitate clearer understanding of what to evaluate and lead to more appropriate and consistent evaluation across diverse interventions. Session participants will use the two methods to characterize their own (or their grantees’) interventions and will be asked for input on how the tools can be improved.

Session Title: Engaging Participants in the Evaluation Process: A Participatory Approach
Multipaper Session 541 to be held in REPUBLIC C on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Seriashia Chatters, University of South Florida, schatter@mail.usf.edu
Evaluating a Community Partnership Using a Community-based Participatory Approach: The Men’s Health League Partnership in Cambridge, Massachusetts
Presenter(s):
Omonyele Adjognon, Institute for Community Health, oadjognon@challiance.org
Shalini Tendulkar, Institute for Community Health, stendulkar@challiance.org
Elisa Friedman, Institute for Community Health, efriedman@challiance.org
Claude- Alix Jacob, Cambridge Public Health Department, cjacob@challiance.org
Marsha Lazar, Cambridge Public Health Department, mlazar@challiance.org
Albert Pless Jr, Cambridge Public Health Department, apless@challiance.org
Barbara Kibler, Margaret Fuller Neighborhood House, bkilbler@margaretfullerhouse.org
Abstract: In 2007, three institutions—the Margaret Fuller Neighborhood House, the Cambridge Health Alliance, and the Cambridge YMCA—joined efforts to address health disparities in Cambridge, with two main goals. The first was to create the Men’s Health League program in order to reduce the risk of diabetes, cardiovascular disease, and stroke among men of color in Cambridge, Massachusetts. The second was to evaluate the partnership between the three institutions. The program developers defined an effective partnership as one that would: (a) identify strengths and weaknesses in the collaboration, (b) develop and implement strategies to improve collaboration, and (c) share lessons learned and best practices around local partnerships. This paper shares the methods used by the Institute for Community Health (an external research and evaluation institution) to evaluate this community-based partnership. Additionally, evaluation results, challenges to, and lessons learned from the partnership evaluation process will be discussed.
To Tell or Not to Tell: Strategies of Breaking Through the Walls and Gaining the Trust of Evaluation Participants
Presenter(s):
Bellarmine Ezumah, Howard University, bellaezuma@gmail.com
Abstract: The process of evaluation is often met with resistance because it can easily be construed as prying. Mabry (1999) calls it a “judgment-intensive craft” (p. 201). Whether it aims at assessing the extent to which a program lives up to its objectives or at searching for ways to improve, the overarching process entails probing and questioning. Therefore, parties involved in the organization of the evaluand sometimes become defensive and uncooperative. This paper is a case example of how the author, as evaluator, gained the trust of participants in dissertation research that evaluated a computer program designed for low-income communities around the world. Trust was built by applying guidelines from the Joint Committee for Educational Evaluation (1994) standards, including establishing evaluator credibility, identifying values, understanding the cultural, socio-political, language, and economic dynamics of participants, and involving a broader spectrum of stakeholders.
Utilizing Participatory Processes in Program Design and Implementation Sets the Framework for a More Streamlined Evaluation Process Especially for Participatory Evaluations
Presenter(s):
Carlene Baugh, CHF International, cbaugh@chfhq.org
Scott Yetter, CHF International, syetter@chfhq.org
Abstract: Participatory processes in programs set a seamless framework for participatory evaluation. Effective participatory processes include active stakeholder participation through consensus, conflict resolution, and ownership, which are key themes of participatory evaluation. (1) The Participatory Action for Community Empowerment (PACE) process lays the foundation, both procedurally and experientially, by raising key issues about participation and helps create a common value set throughout the program’s life cycle. Participatory evaluation focuses on who initiates and undertakes the process and who learns and benefits from the findings (IDS, 1998). (2) PACE supports community decision-making and problem solving, key criteria for participatory evaluation processes (King, 2007, Making Sense of Participatory Evaluation). (3) The PACE process promotes skills needed by evaluators: an emerging theme in the last decade is the need for evaluators to be trained not just in how to gather and analyze data, but also in negotiation and conflict resolution skills (Patton, 2009 Claremont Debates).
Assessing a Model to Support Community-Driven Research Initiatives: The Atlanta Clinical and Translational Science Institute’s Community Engagement and Research Mini-grant Program
Presenter(s):
Tabia Henry Akintobi, Morehouse School of Medicine, takintobi@msm.edu
Lewis Autor, Rock of Escape, rockofescape@yahoo.com
Jacqueline Brown, Empowerment Resource Center for Women Inc, jbrown@empoweryoungwomen.org
Joyce Essien, Emory University, essien@fox.sph.emory.edu
Katherine Erwin, Morehouse School of Medicine, kerwin@msm.edu
Daniel Blumenthal, Morehouse School of Medicine, dblumenthal@msm.edu
Michelle C Kegler, Emory University, mkegler@emory.edu
Winifred W Thompson, Emory University, wthomp3@sph.emory.edu
Abstract: Background: The Atlanta Clinical and Translational Science Institute’s Community Engagement and Research Mini-grant Program promotes community-campus partnership through funding ($4,000) and technical assistance to non-profit, community-based organizations (CBOs) in Metropolitan Atlanta and Southwest Georgia. Methods: A request for applications was followed by engaging academicians and community leaders in a grant review process to identify applicants who clearly identified their community need, health project, and evaluation plans, and who agreed to partner with Morehouse School of Medicine, Emory University, or Georgia Institute of Technology researchers. Results: CBO grants addressed asthma awareness, HIV risk reduction, peer education, and physical activity. Cross-site trends in CBO outreach and training, as well as changes in knowledge, skills, and abilities in communities, were among the tracked measures. Equally important was identification of the perceived value added by community-academic partnerships and recommendations to strengthen the program. Discussion: The processes and outcomes that will be presented have implications for developing community-academic partnerships that advance research translation.
Utilizing Participatory Action Research Framework to Prevent HIV Infection Among Youth Living in Public Housing
Presenter(s):
Meelee Kim, Brandeis University, mlkim@brandeis.edu
Peter Kreiner, Brandeis University, pkreiner@brandeis.edu
Suzanne Boucher, Wayside Youth and Family Support Network, suzanne_boucher@waysideyouth.org
Abstract: Multiple studies document the disproportionate rates of HIV infection and HIV/AIDS diagnoses among persons of color in the U.S. Residents living in public housing around the Boston area reflect a diverse group of ethnic minorities. For example, Haitians and Hispanics/Latinos represent more than half of the public housing residents in Somerville, MA. While there are cultural differences within and among ethnic groups, some shared factors place individuals living in public housing at increased risk of HIV/AIDS: discrimination, stigma, poverty, high mobility, isolation, and marginalized status. The disease spreads faster and farther in conditions of poverty, powerlessness, and lack of accurate information, and ethnic enclaves within public housing can help perpetuate cultural biases, myths, and fears related to HIV/AIDS. Utilizing a participatory action research framework helps to prevent HIV infection through activities that build and sustain community capacity to address social issues surrounding HIV/AIDS.
