Session Title: The Role of Metaevaluation in Promoting Evaluation Quality: National and International Cases
Panel Session 282 to be held in Lone Star A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Presidential Strand
Chair(s):
Leslie Cooksy, University of Delaware, ljcooksy@udel.edu
Discussant(s):
Donald Yarbrough, University of Iowa, d-yarbrough@uiowa.edu
Abstract: Evaluation quality is of primary concern to the evaluation profession. However, different organizations and professionals may conceptualize, operationalize, practice, and use “evaluation quality” differently. This panel focuses on metaevaluations and their role in promoting quality in national and international contexts. The first two presentations will emphasize experiences at the United Nations Children's Fund (UNICEF) and CARE International from the perspectives of those who are or have been managing such metaevaluations. The third presentation will draw on the experience of conducting guided, external, independent appraisals for the International Labour Organisation and reflect on the challenges and opportunities of assessing evaluation quality over multiple years. The fourth presentation discusses pitfalls associated with applying the Joint Committee’s Program Evaluation Standards to written reports and identifies ways to improve consistency in metaevaluation. Together, these presentations allow for exploring theoretical and practical dimensions of evaluation quality in national and international metaevaluation contexts.
The Use of an Evaluation Quality Assurance System in Meta-evaluation at the United Nations Children's Fund (UNICEF) (The opinions expressed are the presenter's own and do not necessarily reflect the policies or views of UNICEF)
Marco Segone, United Nations Children's Fund, msegone@unicef.org
Based on the United Nations Evaluation Group Evaluation Standards, UNICEF adopted a two-tier approach to improve the quality of evaluation: a formative regional-level Quality Assurance system, complemented by a summative global-level Quality Assurance system. The Regional Office for Eastern Europe and Central Asia set up a Regional Evaluation Quality Assurance System to assist UNICEF country offices in meeting quality standards by reviewing draft evaluation Terms of Reference and reports, giving real-time feedback so that Country Offices can improve the final versions. This regional-level system is complemented by a summative approach to monitor the impact of these efforts and strengthen UNICEF’s evaluation function globally. This system reviews the quality of final evaluation reports supported by UNICEF worldwide by having an independent institution rate final evaluation reports commissioned by country offices, regional offices, and headquarters divisions. Reports meeting satisfactory ratings are made available in the UNICEF Global Evaluation Database.
Can Metaevaluations be Helpful to International NGOs? A Case Study From CARE International
Jim Rugh, Independent Consultant, jimrugh@mindspring.com
During the 12 years Jim Rugh led the M&E unit for CARE, he developed a system of biennial meta-evaluations. These were called MEGA evaluations, mainly to reflect the fact that they were really big meta-evaluations of as many evaluation reports as had been submitted to the Web-based evaluation library from projects all around the world during the previous two years. The MEGA acronym was also defined as Meta-Evaluation of Goal Achievement, to acknowledge that one of the expectations on the part of senior management and the board was that this would help answer the question of what impact this very large INGO was having globally. So they served as a synthesis of what was being learned from these evaluations. They were also meta-evaluations in the classical sense of assessing the methodologies used by the evaluators and judging how well they addressed the standards articulated in CARE’s Evaluation Policy.
The Role of Metaevaluation in Promoting Evaluation Quality at the International Labor Organization
Daniela Schroeter, Western Michigan University, daniela.schroeter@wmich.edu
Anne Cullen, Western Michigan University, anne.cullen@wmich.edu
Kelly Robertson, Western Michigan University, kelly.robertson@wmich.edu
Craig Russon, International Labor Organization Evaluation Unit, russon@ilo.org
The International Labour Organisation (ILO) maintains a large portfolio of technical cooperation projects and, given the size of its investment, is interested in learning about the quality of its projects for improvement, accountability, and decision making about the allocation of funding. To ensure the quality and credibility of its evaluations, since 2006 ILO has mandated annual appraisals of all independent evaluation reports. As a result, ILO’s evaluation unit (EVAL) has contracted annual independent, external appraisals of a sample of technical cooperation project evaluation reports each year since then. ILO EVAL supports these efforts by integrating and harmonizing existing evaluation policies and practices and encouraging the development of an evaluation culture throughout the organization. This presentation focuses on ILO metaevaluations conducted in 2007 and 2008, with an emphasis on the methodologies used and their potential and challenges for metaevaluation in ILO.
The Use of the Program Evaluation Standards in Metaevaluation: Potential and Pitfalls
Lori Wingate, Western Michigan University, lori.wingate@wmich.edu
The Program Evaluation Standards have been widely accepted as the prevailing criteria for assessing evaluation quality in North America. They were designed to be applicable to a broad array of evaluation contexts. Their generality makes them adaptable for different settings and uses, but also leaves them open to substantial interpretation by users. Although the Standards were not put forth as a rating tool, they are commonly used in that capacity for metaevaluation purposes. Problems related to consistency in the application of the Standards are exacerbated when information about the evaluation(s) being assessed is limited to what is documented in evaluation reports, since many standards refer to aspects of evaluations that are not commonly detailed in writing. Based largely on a study that investigated interrater reliability in metaevaluation, this presentation describes the pitfalls associated with applying the standards to written reports for metaevaluation purposes and identifies ways to improve consistency in metaevaluation.

Session Title: From Agent-Based Modeling to Cynefin: The ABC's of Systems Frameworks for Evaluation
Multipaper Session 283 to be held in Lone Star B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Mary McEathron,  University of Minnesota, mceat001@umn.edu
Discussant(s):
Mary McEathron,  University of Minnesota, mceat001@umn.edu
Using the Cynefin Framework in Evaluation Planning: A Case Study
Presenter(s):
Heather Britt, Independent Consultant, heather_britt_cairo@hotmail.com
Abstract: This paper provides a case study of the use of the Cynefin framework in evaluation planning. The Cynefin framework is a systems thinking tool that can be used to describe evaluation situations. Each of the four domains of the framework is characterized by different dynamics and corresponding types of inquiry. The characteristics of each domain suggest appropriate evaluation methods. I outline five steps for using Cynefin to construct an evaluation design:
1. Identify evaluation users and purposes
2. Draft key evaluation questions
3. Assign each evaluation question to a domain of the Cynefin framework by asking two questions: (a) How can we know the answer to this evaluation question? (b) What is the nature of the relationship between elements within this situation?
4. Select appropriate evaluation methods for each evaluation question
5. Integrate the key questions and methods into a coherent evaluation design
Agent Based Modeling (ABM) Simulation for Program Evaluation
Presenter(s):
Stephen Magura, Western Michigan University, stephen.magura@wmich.edu
Rainer Hilscher, Altarum Institute, rainer.hilscher@altarum.org
Theodore Belding, TechTeam Government Solutions, ted.belding@newvectors.net
Jonathan Morell, Vector Research Center, jonny.morell@newvectors.net
Abstract: Developing simulations to assist in program evaluation has great potential. Simulation is the use of a computer-based mathematical model to mimic a real world system so that the likelihood of various system outcomes can be estimated. Simulation can assist the evaluative process in situations where it is impractical to implement expensive and complex programs and ascertain their outcomes in real time. Given good theory and good data on the initial state of a system, simulation would allow a large variety of implementation scenarios to be explored “virtually,” to assist in deciding which programs with what attributes in which settings are likely to be effective. To illustrate, we outline a concept to apply a specific technique, agent-based modeling (ABM), to simulate the adoption of evidence-based practices into existing addiction treatment systems. A “quick” large-screen demonstration of swarm intelligence in predicting improvised explosive device (IED) threat areas will be included.
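To give readers unfamiliar with the technique a feel for it, the minimal Python sketch below illustrates the general flavor of an agent-based model of practice adoption: clinics decide each simulated year whether to adopt an evidence-based practice, with the probability rising as more peers in an assumed referral network have adopted. The network, parameters, and clinic counts are invented for illustration and are not the presenters' model or data.

```python
# Minimal agent-based sketch (illustrative only): clinics decide each "year"
# whether to adopt an evidence-based practice, with adoption probability
# rising as more peer clinics in their referral network have already adopted.
# All parameters and the network structure are assumptions for illustration.
import random

random.seed(1)

N_CLINICS = 50          # number of treatment programs (assumed)
N_PEERS = 4             # peers each clinic observes (assumed)
BASE_P = 0.02           # baseline annual adoption probability (assumed)
PEER_EFFECT = 0.15      # added probability per adopting peer (assumed)
YEARS = 10

# Random peer network: each clinic observes a fixed set of other clinics.
peers = {i: random.sample([j for j in range(N_CLINICS) if j != i], N_PEERS)
         for i in range(N_CLINICS)}

adopted = {i: False for i in range(N_CLINICS)}
adopted[0] = True  # seed the system with one early adopter

for year in range(1, YEARS + 1):
    snapshot = dict(adopted)  # decisions are based on last year's state
    for i in range(N_CLINICS):
        if snapshot[i]:
            continue
        n_adopting_peers = sum(snapshot[j] for j in peers[i])
        p = min(1.0, BASE_P + PEER_EFFECT * n_adopting_peers)
        if random.random() < p:
            adopted[i] = True
    print(f"Year {year}: {sum(adopted.values())}/{N_CLINICS} clinics adopted")
```

Running different parameter settings (peer effect, network density, number of seed adopters) is the kind of "virtual" scenario exploration the abstract describes, though the presenters' actual model is far richer.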
Using Systems Thinking Concepts in Evaluation of Complex Programs
Presenter(s):
William M Trochim, Cornell University, wmt1@cornell.edu
Wanda Casillas, Cornell University, wdc23@cornell.edu
Margaret Johnson, Cornell University, maj35@cornell.edu
Jennifer Brown Urban, Montclair State University, urbanj@mail.montclair.edu
Abstract: A growing emphasis on systems thinking in evaluation increases the potential for conducting quality evaluation of programs nested in complex systems. Recent research efforts have led to the development of strategic applications of systems concepts, such as the Systems Evaluation Protocol (SEP), for planning and implementation in the evaluation of nested programs. The SEP applies systems thinking concepts from evolutionary biology and developmental systems theory to program modeling, boundary analysis, and program/evaluation alignment in order to position the program and its evaluation within phases of a lifecycle that co-evolve in an ecological context. This paper presentation will discuss these theoretical concepts in depth, drawing examples from current evaluation work that utilizes a facilitated SEP.

Session Title: I See What You Mean: Applications of Visual Methods in Evaluation
Think Tank Session 284 to be held in Lone Star C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Terry Uyeki, Humboldt State University, terry.uyeki@humboldt.edu
Discussant(s):
Jara Dean-Coffey, jdcPartnerships, jara@jdcpartnerships.com
Abstract: This interactive session will explore the use of visual methods to maximize inclusive evaluation approaches for obtaining input from stakeholders, enhancing and deepening participant engagement, and supporting collaborative problem solving and visioning. The two presenters will share the ways in which they use visual methods, ranging from graphic recording and graphic facilitation to graphic adaptations of concept mapping or multiple cause diagrams of complex systems or situations. Small groups will discuss the benefits and potential applications of visual methods in their work, implications for effective integration into their practice, as well as evaluation settings in which visual methods may not be appropriate.

Session Title: Contextual Issues in a Randomized Control Group Evaluation of a School-based Intervention: Fielding an Evidence-based Intervention to Reduce Youth Gun Violence in Chicago
Panel Session 285 to be held in Lone Star D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Wendy Fine, Youth Guidance, wfine@youth-guidance.org
Abstract: This panel will explore how a promising youth development program is being evaluated, with the goal of utilizing the results to establish an evidence-based intervention that reduces youth gun violence, a major problem affecting many American cities. Panelists will discuss the different stakeholder contexts that have left their mark on the evaluation process: a) mobilizing a group of civic-minded funders for a large scale experimental evaluation; b) establishing program stakeholder buy-in for experimental evaluation design; and c) the evaluation’s impact on program model implementation by collaborating nonprofits. Panel members will highlight challenges and key decisions made along the way, such as 1) the selection process used to identify the most promising program for the evaluation; 2) how risk indices were developed to identify the target student population – a politically sensitive issue; and 3) the approach of the service provider in maintaining schools’ support for participation.
Overcoming the Odds: Carrying Out a Large Scale Randomized Control Evaluation of a Promising Youth Violence Prevention Program in Chicago
Roseanna Ander, University of Chicago, rander@uchicago.edu
Randomized control evaluations in areas of social policy such as youth violence prevention remain very rare. As a consequence, there is very little gold-standard, "clinical trial" evidence about what works, for whom, and under what circumstances, despite the fact that youth violence, like medicine, is an area where lives are at stake. From a funder’s perspective, Roseanna Ander will provide a broad overview of how Becoming A Man - Sports Edition, which combines life skills building, cognitive behavioral therapy (CBT), and mentoring with sports involvement, became the largest-scale randomized control evaluation of CBT to be carried out among youth in an urban area. She will discuss the roles the foundation community, policy makers, academia, the media, the non-profit sector, and the presiding juvenile court judge played in bringing the project to fruition.
The Evaluator’s Context: Challenges to the Design and Implementation of an Experimental School-based Intervention
Harold Pollack, University of Chicago, haroldp@uchicago.edu
Harold Pollack will describe the construction of the sample frame used to evaluate Becoming A Man – Sports Edition. He will describe the use of risk indices in defining the group of students served by the intervention, and algorithms to randomly assign 45 treatment groups and suitably balanced control groups in 15 schools across Chicago. He will describe the value and limitations of administrative data for program assignment. He will also describe the role and limitations of power calculations in the design of a complex intervention within an intent-to-treat framework. Given this statistical design, he will characterize refusal and dropout rates in the different treatment arms based on prior student characteristics available in administrative data. Finally, he will describe from a university researcher perspective the practical obstacles and challenges of a complex collaboration involving researchers, public schools, and social service providers.
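The assignment algorithm, risk indices, and balancing procedures described above are the presenters' own; the brief Python sketch below only illustrates the basic idea of blocked (within-school) randomization that such a design relies on, using invented school names, roster sizes, and a simple 50/50 split.

```python
# Illustrative blocked randomization: within each school, shuffle the eligible
# students and split them between treatment and control so that the arms are
# balanced school by school. School names, sizes, and the 50/50 split are
# assumptions for illustration, not the study's actual procedure.
import random

random.seed(42)

schools = {f"school_{s:02d}": [f"student_{s:02d}_{i:03d}" for i in range(60)]
           for s in range(1, 16)}  # 15 schools, 60 eligible students each (assumed)

assignment = {}
for school, roster in schools.items():
    roster = roster[:]            # copy so the original roster is untouched
    random.shuffle(roster)
    half = len(roster) // 2
    for student in roster[:half]:
        assignment[student] = ("treatment", school)
    for student in roster[half:]:
        assignment[student] = ("control", school)

n_treat = sum(1 for arm, _ in assignment.values() if arm == "treatment")
print(f"Assigned {n_treat} students to treatment, {len(assignment) - n_treat} to control")
```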
The Organization’s Context: Evaluation Design’s Effect on Program Implementation, Ethics, and Evaluation Utility
Wendy Fine, Youth Guidance, wfine@youth-guidance.org
Youth Guidance is an urban school-based nonprofit, which is accustomed to conducting internal evaluations for its various programs for quality improvement and accountability to funders. One of Youth Guidance’s programs, Becoming A Man (BAM) – Sports Edition, was selected by a panel of experts as a promising practice for addressing youth gun violence. The program is now undergoing a large-scale evaluation with the goal of developing it as an evidence-based intervention that can be replicated in at-risk communities nationally. From the nonprofit’s perspective, Wendy Fine will describe how program managers, internal evaluators, practitioners, and schools have adapted to the demands of a randomized control group design. Several questions will be addressed, including 1) what are the limits of changes to a program model to accommodate evaluation design and related implications for program replication, and 2) how does an organization address the ethical dilemma posed by having a control group.

Session Title: Estimating Rater Consistency: Which Method Is Appropriate?
Demonstration Session 286 to be held in Lone Star E on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Robert Johnson, University of South Carolina, rjohnson@mailbox.sc.edu
Min Zhu, University of South Carolina, helen970114@gmail.com
Grant Morgan, University of South Carolina, praxisgm@aol.com
Vasanthi Rao, University of South Carolina, vasanthiji@yahoo.com
Abstract: When essays, portfolios, or other complex performance assessments are used in program evaluations, scoring the assessments requires raters to make judgments about the quality of each examinee’s performance. Concerns about the objectivity of raters’ assignment of scores have contributed to the development of scoring rubrics, methods of rater training, and statistical methods for examining the consistency of raters’ scoring. Statistical methods for examining rater consistency include percent agreement and interrater reliability estimates (e.g., Spearman correlation, generalizability coefficient). This session describes each method, demonstrates its calculation, and describes when each is appropriate.
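As a rough illustration of two of the statistics named above, the short Python sketch below computes exact percent agreement and the Spearman rank correlation for a pair of raters; the scores are invented, and the session's own demonstrations and choice of estimators may differ.

```python
# Two common rater-consistency statistics computed on the same toy ratings:
# exact percent agreement and the Spearman rank correlation.
# The scores below are invented for illustration.
from scipy.stats import spearmanr

rater_a = [4, 3, 5, 2, 4, 3, 1, 5, 4, 2]
rater_b = [4, 3, 4, 2, 5, 3, 2, 5, 4, 2]

# Percent agreement: share of essays given exactly the same score.
exact_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Spearman correlation: do the raters rank the essays similarly,
# even if their absolute scores differ?
rho, p_value = spearmanr(rater_a, rater_b)

print(f"Exact agreement: {exact_agreement:.2f}")
print(f"Spearman rho:    {rho:.2f} (p = {p_value:.3f})")
```

Percent agreement rewards identical scores, while the correlation rewards consistent ordering; which is appropriate depends on how the scores will be used, which is the choice the session addresses.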

Session Title: Influencing Evaluation Policy and Evaluation Practice: A Progress Report From the American Evaluation Association's (AEA) Evaluation Policy Task Force
Panel Session 287 to be held in Lone Star F on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the AEA Conference Committee
Chair(s):
Patrick Grasso, World Bank, pgrasso45@comcast.net
Discussant(s):
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Abstract: The Board of Directors of the American Evaluation Association (AEA) established the Evaluation Policy Task Force (EPTF) in September 2007 to enhance AEA's ability to identify and influence policies that have a broad effect on evaluation practice and to establish a framework and procedures for accomplishing this objective. The EPTF has issued key documents promoting a wider role for evaluation in the Federal Government, influenced federal legislation and executive policy, and informed AEA members and others about the value of evaluation through public presentations and newsletter articles. This session will provide an update on the EPTF's work and invite member input on its plans and actions.
Introduction to the Evaluation Policy Task Force
Patrick Grasso, World Bank, pgrasso45@comcast.net
As Chair of the EPTF, Mr. Grasso will present an overview of the EPTF and of recent broad events surrounding it, including membership changes and recent AEA Board of Directors’ guidance on cardinal evaluation policies to be used as a frame of reference for explaining evaluation policies to outside contacts, vetting of public evaluation policy statements with AEA members and the Board, and evaluating the EPTF.
Activities and Plans for the Evaluation Policy Task Force
George Grob, Center for Public Program Evaluation, georgefgrob@cs.com
As Consultant to the EPTF, Mr. Grob will facilitate a discussion involving EPTF members and the audience about the activities and plans of the EPTF.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Simplifying the Complex: Creating Transparent Evaluation in Multi-institutional Education Partnerships
Roundtable Presentation 288 to be held in MISSION A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Dewayne Morgan, University System of Maryland, dmorgan@usmd.edu
Susan Tucker, Evaluation & Development Associates, sutucker1@mac.com
Jennifer Frank, University System of Maryland, jfrank@usmd.edu
Abstract: Transparency in programmatic outcomes is fast becoming an expectation from federal funding agencies. This presentation will engage participants in a discussion about the challenges associated with evaluating large-scale, multi-institutional projects. Presenters will use their diverse set of experiences and qualifications to offer examples for making evaluation findings relevant to broader education policy and practice, while attending to the expectation for transparency.
Roundtable Rotation II: Evaluating Twenty First Century Community Learning Centers: Reconciling Evaluation Needs and Constraints at Multiple Systemic Levels
Roundtable Presentation 288 to be held in MISSION A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Elizabeth Whipple, Research Works Inc, ewhipple@researchworks.org
Mildred Savidge, Research Works Inc, msavidge@researchworks.org
Abstract: As State Evaluators for the 21st CCLC program in New York State, we are responsible for reporting on the quality of programs across the state. Initially (in 2006), local programs only reported to the federal government, using an online data collection system that provided information for federal reporting, but was not sufficient for state or local purposes. No local evaluator was required. Initial evaluation indicated a need for local evaluation targeted to both state and local needs. Based on our recommendation, all programs were required to have a local evaluator beginning in 2008. Subsequent review of local evaluation reports indicated the need for standardization of reporting to respond to state evaluation needs. This session will review and discuss the needs of decision makers at different systemic levels, soliciting feedback from the group on an evaluation template designed to provide standardized information for addressing local and state level evaluation needs.

Session Title: Constructing Relevant Guidelines for Disability Program Evaluations
Panel Session 289 to be held in MISSION B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Mary Moriarty, Picker Engineering Program, Smith College, mmoriart@smith.edu
Abstract: This panel brings together a team of experts in program evaluation and disability to discuss critical challenges in providing high quality evaluation of disability-based programs. Our intent is to identify and discuss challenges in meeting quality standards, measures, and evidence in disability evaluations from three perspectives: a governmental agency, a disability program director, and an evaluator. Dr. Linda Thurston from the National Science Foundation will provide an overview of NSF/Research in Disability Education evaluation expectations and guidelines. Dr. Joan McGuire from the Center on Postsecondary Education and Disability at the University of Connecticut will discuss the postsecondary institutional perspective, and Dr. Mary Moriarty from the Picker Engineering Program at Smith College will talk about strategies from an evaluator’s perspective. The presentation will incorporate a discussion of critical factors in disability program evaluation. Included are such issues as utilizing research-based practices, incorporating an understanding of contextual factors, confidentiality, and standards-based frameworks.
Challenges and Expectations in Evaluating the National Science Foundation (NSF) Funded Disability Education Programs
Linda Thurston, National Science Foundation, lthursto@nsf.gov
The National Science Foundation funds programs that build interest, academic success, and degree completion in science, technology, engineering, and mathematics education for students with disabilities. Formative and summative evaluation is a crucial component of these programs. In addition to significant student outcome data, the Research in Disability Education (RDE) program within NSF requires examination of collaborative efforts at the institutional level, student transition points, and impacts of specific activities and interventions. The panelist, who is an NSF program officer in RDE, will discuss these challenges, and NSF expectations for meeting these challenges.
Evaluating Postsecondary Disability Programs: Answering Questions and Monitoring Outcomes
Joan McGuire, University of Connecticut, joan.mcguire@uconn.edu
While postsecondary disability services including accommodations are required under Section 504 of the Rehabilitation Act, there is no federal or state requirement for the evaluation of such services. Whether it be conducted by internal staff or by an independent program evaluator, or whether it be a component of a grant, targeted evaluation of specific elements of disability services is paramount. Data gathered through a systematic process that documents program services can be used to answer questions from administrators (e.g., how many requests for course substitutions were approved), consumers (e.g., how frequently do students take a reduced course load), and faculty (e.g., what comprises a “reasonable” accommodation). Data relating to outcomes for students with disabilities (e.g., retention, graduation) may be a component of more comprehensive institutional efforts or of grant funded interventions. Practical considerations will be addressed by the presenter who has been a disability program director for 20 years.
Ensuring Cultural Relevancy: Disability and Quality Evaluation
Mary Moriarty, Smith College, mmoriart@smith.edu
The evaluation process can be envisioned in three interrelated phases: development, implementation, and reporting. In each phase, incorporating a contextual awareness of disability will lead to improved quality and utility of evaluation results for disability-based projects. In the development phase, one formulates a theoretical model, creates evaluation questions, and establishes a methodological approach. Understanding the disability community, applicable literature in the field, and requirements of funding sources will allow for the construction of a plan that meets the needs of all stakeholders. The implementation phase incorporates data collection and analysis. In this phase there are a number of challenges (e.g., confidentiality issues, accessibility concerns, sample sizes, and communication constraints) that are critical to the reliable collection of data. Attention to these challenges will facilitate effective communication in the reporting phase. The presenter, who has been a disability grant manager and evaluator for over 15 years, will address these issues.

Session Title: The Fight for Evaluation Quality: Perspectives From the Trenches
Panel Session 290 to be held in BOWIE A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Carol Haden, Magnolia Consulting LLC, carol@magnoliaconsulting.org
Abstract: Teaching students about evaluation quality in a graduate class is one thing. Finding ways to ensure it when conducting applied evaluation studies is another. Evaluators can enter into the conduct of evaluations with idealistic notions of how their work will espouse quality. What they will find are a myriad of challenges that can threaten the quality of all aspects of evaluation. The job of the evaluator is to anticipate, plan for, and respond to these challenges as they present themselves during evaluation studies. This panel is comprised of independent evaluation consultants who will share their experiences and perspectives related to evaluation quality using examples from a variety of studies. Presenters will share insights about how local context influences evaluation quality, the nature of relationships in promoting evaluation quality, and evaluators’ role in ensuring quality through reporting.
Evaluation Quality and Local Context: When Anything Can Happen in the Trenches
Stephanie Wilkerson, Magnolia Consulting LLC, stephanie@magnoliaconsulting.org
This presentation will address how local context influences evaluation quality. Using examples from multisite randomized control trials, the presenter will share how the politics, organizational culture, and institutional capacity of districts and schools affect evaluators’ abilities to ensure evaluation quality. This presentation conceptualizes evaluation quality as it relates to maintaining design integrity and coherence, fidelity of implementation, and participant engagement. Contextual factors related to Institutional Review Boards, local budgets, district priorities, and staff capacity support or threaten evaluation quality. This presentation aims to increase evaluators’ awareness of potential threats to evaluation quality during the development and implementation of large-scale studies. The presenter will offer lessons learned in ensuring evaluation quality despite challenges.
Evaluation Quality and Relationships: When Nurturing Makes a Difference in the Trenches
Lisa Shannon, Magnolia Consulting LLC, lisa@magnoliaconsulting.org
For independent evaluators, developing positive relationships with clients and study participants is critical to ensuring evaluation quality. Relationships with clients set the tone for studies and can affect the degree to which evaluators can interact with study participants. When clients have confidence in evaluators, they trust them to communicate freely with study participants, which facilitates the collection of meaningful data. Likewise, when participants feel supported and appreciated, they are more likely to abide by study specifications, respect implementation guidelines, participate in data collection activities, and share relevant feedback, all of which will contribute to evaluation quality. In some situations, such as when participation is mandated, study participants might feel mistrustful initially, making it even more critical to nurture affirmative relationships. This presentation will provide an overview of relationship-building techniques for evaluators and demonstrate how positive relationships with clients and study participants can ensure evaluation quality, particularly in challenging situations.
Evaluation Quality and Reporting: When Working Together Improves the Trenches
Mary Styers, Magnolia Consulting LLC, mary@magnoliaconsulting.org
Weiss (2004) suggested that evaluation reports are not journal articles, but they are often of high importance to a particular program or district. As independent consultants, our work is critical in helping a particular group of individuals to improve their capacity and deepen their understanding. The work may not necessarily be publishable but is of critical importance to those individuals directly impacted by the program. This presentation will speak to evaluation quality as related to what an evaluation means for stakeholders and participants, with specific examples from our work with various clients to improve their programs. We work collaboratively with program users and stakeholders to provide a voice for program benefits, disadvantages, and suggestions for improvement. As independent consultants, we judge the quality of our evaluations by their utility for clients, not by whether they are published. Reference: Weiss, C. H. (2004). Rooting for evaluation: A cliff notes version of my work. In M. C. Alkin (Ed.), Evaluation roots (pp. 153-168). Thousand Oaks, CA: Sage.

Session Title: Advancing Multiethnic Program Evaluation Through Theory and Practice: An Examination of Culture, Cultural Context, and Culturally Responsive Evaluation
Multipaper Session 291 to be held in BOWIE B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Pamela Frazier-Anderson,  Lincoln University, pfanderson@lincoln.edu
Conducting Evaluations in a Multiethnic Context: Lessons from a Hawaiian Experience
Presenter(s):
Felix Blumhardt, The Evaluation Group, gblumhardt@aol.com
Abstract: Multiethnic evaluation poses unique challenges that require evaluators to explore the larger context in which the evaluation takes place. A thorough understanding of the context is necessary since these challenges influence evaluation processes and outcomes. The purpose of this paper is to examine four evaluation projects conducted in the State of Hawai`i and how the indigenous, multiethnic, and multicultural makeup of the islands influenced the manner in which the evaluations were conducted, as well as the potential impact on the outcomes of the evaluations. This paper will discuss lessons learned from these experiences and how these lessons translate into practice when working with indigenous, multiethnic and multicultural populations. A checklist that identifies issues to consider when conducting such evaluations will be presented for readers’ feedback.
A Thematic Discussion of the Relational and Ecological Dimensions of Cultural Context: Notes From Three Interconnected Research Studies
Presenter(s):
Jill Anne Chouinard, University of Ottawa, jchou042@uottawa.ca
J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
Abstract: This paper provides a thematic discussion of the dynamic interconnections within a six-dimensional framework of cultural context, with particular attention given to those variables that influence and interactively inform the relationship between evaluators and stakeholders. The dimensions of cross-cultural context (relational, ecological, methodological, organizational, political, and personal) can be visualized as multi-textual, intersecting, and overlapping circles that intermingle throughout the evaluation and that are constantly at work (and in flux), creating boundaries, positions, and possibilities within the cross-cultural program and evaluation context. The purpose of this paper is to explore these dynamic inter-connections within the framework. To bring the dimensions to life, we have identified three cross-cutting themes (perspective, engagement, and accommodation) that we believe underscore the dynamic and vibrant inter-connections within the six dimensions of cultural context.
Cultural Competence Versus Cultural Responsiveness: Seeking True Evaluation Quality
Presenter(s):
Tamara Bertrand Jones, Florida State University, tbertrand@fsu.edu
Abstract: As evidenced by the literature, the term “cultural competence” in itself conjures various definitions and implies certain assumptions. Given the elusiveness of an agreed-upon definition, or even consistent terminology in evaluation, this research sought to understand cultural competence from a Black perspective. Defining and identifying the key characteristics of cultural context in evaluation, from evaluators’ perspectives, also contributes to the ongoing discussion of culture’s role in evaluation. Results indicate a difference in preferred professional language around the inclusion of cultural context in evaluation. This paper will highlight six dimensions of both cultural competence and cultural responsiveness in evaluation and discuss which term truly reflects the quality inherent in including culture in the evaluation context.
Participatory Outreach: Methods of Increasing Minority Participation in Agency Responsiveness to Community Racial Change
Presenter(s):
Asma Ali, University of Illinois at Chicago, asmamali@yahoo.com
Abstract: Culturally Responsive Evaluation (CRE) is an evaluation framework that “recognizes that demographic, sociopolitical, contextual dimensions, locations, perspectives, and characteristics of culture matter fundamentally in evaluation” (Hopson, 2008). CRE frameworks are important in the dynamic culture of nonprofit community-based organizations, which must be responsive to both internal and external changes in their surroundings (Hopson, 2009). This presentation describes "lessons learned" during a capacity-building process evaluation of an outreach effort to Latino and African-American consumers at the Will-Grundy Center for Independent Living, located in an outlying suburb of Chicago. The agency’s catchment area includes Will and Grundy counties in Northeastern Illinois, as well as their immediate urban locale. These areas have experienced unprecedented growth in African-American and Latino populations since 1990, challenging the 30-year-old agency’s current walk-in and referral methods of outreach to minority clients. Developed in consultation with researchers at UIC’s Center on Capacity Building for Minorities with Disabilities Research, the agency’s outreach program utilized Fawcett et al.’s (2003) model for capacity building in evaluation and Patton’s (2008) model for utilization-focused evaluation to develop a formative, culturally responsive evaluation of the agency’s outreach program. The program and its subsequent evaluation challenged and further shaped the agency’s thinking, culture, and responses to changes in their community demographics. Ultimately, the evaluation process resulted in novel and surprising cross-cultural utility of the findings for the agency.

Session Title: Who Are Champions, What Is Their Impact, and How Do You Know? Considerations for Advocates, Funders, and Evaluators
Think Tank Session 292 to be held in BOWIE C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Steve Mumford, Organizational Research Services, smumford@organizationalresearch.com
Sarah Stachowiak, Organizational Research Services, sarahs@organizationalresearch.com
Lance Potter, Bill & Melinda Gates Foundation, lance.potter@gatesfoundation.org
Abstract: Champion development is an important aspect of advocacy work. Building relationships with key individuals to support a cause requires time and energy on the part of advocacy organizations; however, the impact of this work may be unexamined. In small groups, participants will explore three key questions related to evaluating champion development and impact: How are champions defined? How can advocates and evaluators track and measure champion actions toward various outcomes? And what are potential challenges and opportunities for evaluating champions? Presenters will introduce questions by sharing lessons learned from work with advocacy projects in several fields. These will include a working framework for defining “champions” and potential outcomes, methods for tracking and measuring champion actions and impact, and important considerations for data collection. Discussion will lead toward a deeper understanding of champion development as an advocacy technique and how it can be evaluated.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluation of the HIV Treatment Adherence Education Program
Roundtable Presentation 293 to be held in GOLIAD on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Robin Kelley, National Minority AIDS Council, rkelley@nmac.org
Melanie Graham, National Minority AIDS Council, mgraham@nmac.org
Kim Johnson, National Minority AIDS Council, kjohnson@mnac.org
Abstract: This is a multilevel evaluation combining internal process evaluation with external evaluation conducted by evaluation staff, along with individual evaluation forms, organizational assessments, and evaluation of both individual technical assistance and group-level training. The evaluation revealed the effectiveness of an HIV peer-driven behavior change program applied to HIV/AIDS treatment adherence. The concept was that, through an innovative program that included technical assistance, a person living with HIV could become a role model for other HIV-positive individuals. Findings revealed that organizations which implemented the peer program, and the peers who complied with its core components, demonstrated treatment adherence behavior and a greater level of personal development, including life skills such as a sense of self-efficacy, determination, and confidence. Moreover, with capacity building to shore up the infrastructure of the program, peers were able to develop further in their jobs.
Roundtable Rotation II: Measuring Communication Campaign Intermediate Outcomes: Tools and Techniques
Roundtable Presentation 293 to be held in GOLIAD on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Michael Burke, RTI International, mburke@rti.org
Abstract: Communication campaigns often produce a wide range of outputs that may be connected to a variety of process measures and intermediate outcomes that might be examined. There are numerous ways changes in awareness, attitudes, and behavior can be assessed, and a wide range of behavioral domains that can provide useful evaluation information. For example, although HIV testing or condom usage may be desired outcomes, information-seeking behaviors such as ordering materials and visiting a website might be important intermediate indicators of likely changes in behavior. In this session we will discuss several ways evaluators can identify and assess intermediate outcomes, especially in a resource-constrained environment. Issues of quality, timeliness, ownership, and cost will be discussed.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Reflection on an Instrument for Capturing School Conditions in Developing Countries
Roundtable Presentation 294 to be held in SAN JACINTO on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Helene Jennings, ICF Macro, helene.p.jennings@macrointernational.com
Abstract: A classroom observation instrument needed to be designed for evaluations focused on improving schools in developing countries, where many schools are visited over the course of a week or two. A method was needed for rapid appraisal of conditions in each school, documented in a format that would permit comparisons across schools and provide summaries of indicators of educational quality. A streamlined protocol of “Indicators of Education Adequacy” was developed and field tested. It will be presented in a roundtable setting to gain feedback on the elements itemized in this tool (related to assessing Facilities and Environment, Learning Resources, Student Characteristics, and Teacher Characteristics, as well as a guide to an overall assessment of instructional quality) and on the utility of the instrument. A discussion of other means of capturing such data is encouraged.
Roundtable Rotation II: Assessing Principals' Needs for Professional Development
Roundtable Presentation 294 to be held in SAN JACINTO on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Edith J Cisneros-Cohernour, University of Yucatan, cchacon@uady.mx
Roger Patron-Cortes, Universidad Autonoma de Campeche, roger_patron_cortes@hotmail.com
Abstract: This paper presents the findings of a study assessing school principals' needs for professional development. The research is part of an international evaluation study conducted in Australia, Canada, the United Kingdom, Mexico, Scotland, South Africa, and the United States. Data collection involved qualitative case studies (participant observation, in-depth interviews, focus groups, document analysis) as well as a survey of beginning principals in three states of southern Mexico.

Session Title: Why Settle for Silos? Four Applications of Social Network Analysis for Building More Effective Organizational Networks and Alignment Around Outreach and New Initiatives
Demonstration Session 295 to be held in TRAVIS A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Presenter(s):
Tom Bartholomay, University of Minnesota, barth020@umn.edu
Abstract: Organizations are increasingly challenged to adapt and respond to a changing world. Additionally, funders are increasingly focused on impacts that require integrated, systems-based interventions. The University of Minnesota Extension has been using social network analysis (SNA) as a means to leverage existing “knowledge capital” between programs, increase alignment, and build new collaborative structures to reach new and shifting goals. This presentation will demonstrate how U of M Extension has used SNA to assess (1) large outreach structures, (2) internal structures that support outreach, (3) existing collaboration levels, and (4) potential frontier networks around target problems or audiences. Demonstrations will include examples of SNA concepts in application, the SNA data collection instruments used, maps of network results, interpretation of networks, and how network maps can be useful at different levels of the organization.
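As a hedged illustration of the kind of SNA steps such an assessment might involve, the Python sketch below builds a toy collaboration network, ranks programs by degree centrality, and runs a community detection pass to surface clusters ("silos"); the program names and ties are invented and are not Extension's data or instruments.

```python
# Illustrative SNA pass over a toy collaboration edge list: degree centrality
# flags well-connected programs, and community detection surfaces "silos"
# (clusters that rarely collaborate outside themselves).
# Program names and ties are invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

ties = [  # (program A, program B) pairs reporting joint work (assumed)
    ("4H", "Nutrition"), ("4H", "YouthDev"), ("Nutrition", "YouthDev"),
    ("AgWater", "Crops"), ("AgWater", "Livestock"), ("Crops", "Livestock"),
    ("YouthDev", "Crops"),  # a single bridge between the two clusters
]
G = nx.Graph(ties)

centrality = nx.degree_centrality(G)
print("Most connected programs:",
      sorted(centrality, key=centrality.get, reverse=True)[:3])

communities = greedy_modularity_communities(G)
for i, group in enumerate(communities, 1):
    print(f"Cluster {i}: {sorted(group)}")
```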

Session Title: Contribution of Technology to Evaluation Practice
Multipaper Session 296 to be held in TRAVIS B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Paul Lorton Jr,  University of San Francisco, lorton@usfca.edu
Mapping Adolescent Substance Abuse Issues and Treatment in Pima County, AZ: A Geographic Information System (GIS) Spatial Analysis Strategy for Community Needs Assessment
Presenter(s):
Judith Francis, Pima Prevention Partnership, jfrancis@thepartnership.us
Matthew Rahr, University of Arizona, rahr@ag.arizona.edu
Abstract: Following a local conference in which stakeholders voiced a desire for increased knowledge of the substance treatment issues and needs of low-income adolescents, Pima Prevention Partnership (PPP) completed an assessment of adolescent service needs/assets in the county using spatial analysis and created a dynamic, interactive map tool available to stakeholders and the public online. Working with our University of Arizona partner, PPP assembled three types of data (community characteristics, adolescent treatment services, and 4,900 adolescents with substance-related crimes or treatment) and used ArcGIS to create map layers. The final product is a web-based map tool that can be used to explore access to substance abuse treatment, neighborhood characteristics, and attributes of client groups. It is available to service providers, advocates, policy-makers, and the public on the PPP website, along with research newsletters discussing adolescent substance abuse issues, research outcomes, and best practices.
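One building block behind an access-to-treatment map layer like the one described is a distance calculation from client locations to the nearest treatment site. The Python sketch below shows a plain great-circle (straight-line) version with invented coordinates; it is not the PPP/ArcGIS tool itself, which would typically work from geocoded addresses and could use network travel distances instead.

```python
# Illustrative access calculation behind a map layer of this kind:
# straight-line (great-circle) distance from each client location to the
# nearest treatment site. Coordinates are invented, not Pima County data.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # 3958.8 = mean Earth radius in miles

treatment_sites = [(32.22, -110.97), (32.25, -110.91), (32.13, -110.99)]  # assumed
clients = {"client_01": (32.20, -110.95), "client_02": (32.30, -111.05)}  # assumed

for name, (lat, lon) in clients.items():
    nearest = min(haversine_miles(lat, lon, s_lat, s_lon)
                  for s_lat, s_lon in treatment_sites)
    print(f"{name}: nearest site is {nearest:.1f} miles away")
```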
Using Networked Technology for Quality Evaluation
Presenter(s):
Colleen Manning, Goodman Research Group Inc, manning@grginc.com
Rucha Londhe, Goodman Research Group Inc, londhe@grginc.com
Mary Dussault, Harvard-Smithsonian Center for Astrophysics, mdussault@cfa.harvard.edu
Abstract: This paper explores the contribution of technology to evaluation practice through the lens of a science museum exhibit evaluation in which a networked bar code ID card was scanned at a number of the exhibit components, providing for embedded data collection regarding visitor activity and learning. We will discuss how the networked technology was used in combination with more traditional evaluation methods, the value-added of the technology to both the exhibit and the evaluation, and lessons learned.
Going Online With School-based Evaluations
Presenter(s):
Janet Lee, University of California, Los Angeles, janet.lee@ucla.edu
Nicole Gerardi, University of California, Los Angeles, gerardi_nicole@yahoo.com
Minerva Avila, University of California, Los Angeles, avila@gseis.ucla.edu
Abstract: For many school-based evaluations, the majority of data collection is conducted through traditional paper-and-pencil methods. However, as the use of technology in schools increases and as many schools and districts struggle with budget concerns, many evaluators have turned to online methods of collecting data for school-based evaluations. In this presentation, three case examples of school-based evaluations in which the main vehicle for data collection was online will be discussed. Specifically, we will examine various successes, challenges, and unintended consequences encountered while transitioning from traditional data collection methods to online methods. These successes, challenges, and unintended consequences will also be discussed in light of implications for the quality of data collected and, thus, the quality of evaluative claims that can be made. Understanding how data collected online may impact an evaluation provides important insight that can be useful for the planning of future evaluations.
Validating Evaluations With Spatial Analysis
Presenter(s):
Kristina Mycek, State University of New York at Albany, km1042@albany.edu
Abstract: This paper presents an overview of the emerging use of spatial analysis in program evaluation. It will review the application of spatial methodology and analyses to program evaluation, particularly in education, through the use of Geographic Information Systems (GIS). Spatial methodologies have been used in various related fields, including urban planning, criminology, sociology, and mental health, and for spatial understanding of program evaluation, which has the potential to promote reform (Renger, Cimetta, Pettygrove, & Rogan, 2002). While the idea of spatial analysis isn’t new, recent technological advances have created new opportunities for researchers by making these once burdensome techniques cost-effective, feasible, and fast. These advances allow researchers to quickly explore large, complex data sets in order to provide valuable information on multifaceted questions. This paper will explore the uses of spatial analysis and GIS in educational evaluation and demonstrate their use with example data.

Session Title: Introduction to Designing Needs Assessment Surveys
Skill-Building Workshop 297 to be held in TRAVIS C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Needs Assessment TIG
Presenter(s):
James Altschuld, The Ohio State University, altschuld.1@osu.edu
Yi-Fang Lee, National Chi Nan University, ivanalee@ncnu.edu.tw
Hsin-Ling Hung, University of Cincinnati, hunghg@ucmail.uc.edu
Jeffry White, University of Louisiana, Lafayette, jwhite1@louisiana.edu
Abstract: Many evaluators, while familiar with what needs are and procedures for assessing them, are less knowledgeable about the unique aspects of and ways to design surveys for the endeavor. This short workshop will begin with questions asked of participants in regard to how they work with organizations as related to needs assessment (NA) surveys. From that starting point a brief overview of the NA process will be given followed by some key features of surveys (such as general guidelines, typical content areas, types of surveys, formats employed, groupings of questions, inclusion of multiple concerned stakeholders, item formats, scaling approaches, some analysis principles, etc). Participants will then use typical NA scenarios to try their hand at assorted item writing tasks. The workshop concludes with a group discussion of the nature of NA surveys, how they may be used, problems encountered, and other related issues.

Session Title: Attending to Context and Situation to Improve Evaluation Process and Reporting
Multipaper Session 298 to be held in TRAVIS D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
To Communicate or Not to Communicate: How Communication Within Organizations Affects Our Work as Evaluators
Presenter(s):
Namrata Mahajan, Cobblestone Applied Research & Evaluation Inc, namrata.mahajan@cobblestoneeval.com
Rebecca Eddy, Cobblestone Applied Research & Evaluation Inc, rebecca.eddy@cobblestoneeval.com
Hendrick Ruitman, Cobblestone Applied Research & Evaluation Inc, todd.ruitman@cobblestoneeval.com
Abstract: Research on evaluation has provided evidence for the importance of effective communication in conducting a successful study. This research has often focused on the advantages of good communication between evaluators and stakeholders or the importance of dialogue between an organization and its stakeholders. Little is known, however, about the importance of effective communication between stakeholders within an organization, and how communication patterns impact the overall evaluation. This paper discusses the value of communication between stakeholders in the organization and how this communication can affect the work of evaluators (e.g., increased workload and stress). Specific tips are provided via case study examples for evaluators to help stakeholders move in the direction of better communication. Other issues are also addressed, such as the role of the evaluator in improving communication between groups in an evaluation and conditions under which enhancing communication might be detrimental.
Playing in the Intersection of Context and Validity
Presenter(s):
Sheila A Arens, Mid-Continent Research for Education and Learning, sarens@mcrel.org
Andrea Beesley, Mid-Continent Research for Education and Learning, abeesley@mcrel.org
Abstract: Evaluators presumably reach valid claims through a process of collecting data, determining the importance, significance, and meaning of the data relative to the evaluand (thereby transforming it into evidence), and inferring conclusions or claims from the evidence. Contextual features play an important role in this process and in evaluative validity. Context has the capacity to shape the outcome of a given evaluation: two evaluations of the same program or phenomenon might result in vastly different assertions. Is one of these more valid than another? This paper presents an exploration of the ways in which context frames evaluative validity and the validity of claims and then, through examples, asserts that an understanding of context is at least as important as following sets of rules for the “proper” conduct of evaluation.
From Quality Evaluations to Quality Learning: The Ten Steps to a Happy Marriage
Presenter(s):
Gabriel Pictet, American Red Cross, gabriel.pictet@ifrc.org
Margaret Stansberry, American Red Cross, margaret.stansberry@ifrc.org
Abstract: Quality evaluations require technical expertise, time, and money; to be meaningful, they also need to be embedded in the client’s learning agenda. On the one hand, clients need to focus evaluations on their organizations’ knowledge management strategies, of which evaluations are just one of many components. On the other hand, consultants need to immerse themselves in their clients’ corporate cultures and inform the corpus of evidence-based knowledge. This has important implications for how evaluators advocate for corporate learning. In this paper we revisit the client’s and the evaluator’s respective and complementary roles in a practical ten-step quality framework, from (1) defining the organization’s learning agenda to (10) using the evaluation’s findings to update the organization’s learning agenda, with examples from the authors’ domestic and international evaluation experience and current practice with the American Red Cross Tsunami Recovery Program.

Session Title: Use of Fidelity Scores in Measuring Outcomes for Children Involved in the Child Welfare System
Panel Session 299 to be held in INDEPENDENCE on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Madeleine Kimmich, Human Services Research Institute, mkimmich@hsri.org
Abstract: Human Services Research Institute (HSRI) explored the use of model fidelity at the case level, using child fidelity scores as a covariate in outcomes analyses within the context of a five-year evaluation. This evaluation examined three strategies implemented across multiple Ohio child welfare agencies: Family Team Meetings, Supervised Visitation, and Kinship Supports. Fidelity to the strategies was used to help explain individual child outcomes via two statistical approaches: regression and ANOVA. By using these two techniques, evaluators were able to compare means of each fidelity group, as well as use fidelity as an independent variable in predicting child outcomes. In testing whether a very high level of fidelity is actually needed in order to achieve the desired results, evaluators are able to learn about effective practice even when high model fidelity is not evident across all areas. This becomes increasingly important in complex service delivery environments, such as child welfare.
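A minimal sketch of the two analytic uses of fidelity described above appears below, using simulated data and placeholder variable names rather than HSRI's: (1) an OLS regression with the child-level fidelity score as a covariate, and (2) a one-way ANOVA comparing outcome means across fidelity groups.

```python
# Sketch of two analytic uses of a child-level fidelity score:
# (1) OLS regression with fidelity as a continuous covariate, and
# (2) one-way ANOVA comparing outcome means across fidelity groups.
# The data are simulated; variable names are placeholders, not HSRI's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 300
fidelity = rng.uniform(0, 1, n)                      # child-level fidelity score (simulated)
outcome = 10 + 4 * fidelity + rng.normal(0, 2, n)    # a continuous child outcome (simulated)
df = pd.DataFrame({
    "fidelity": fidelity,
    "outcome": outcome,
    "fidelity_group": pd.cut(fidelity, [0, 0.33, 0.66, 1.0],
                             labels=["low", "medium", "high"],
                             include_lowest=True),
})

# (1) Regression: fidelity as an independent variable predicting the outcome.
reg = smf.ols("outcome ~ fidelity", data=df).fit()
print(reg.summary().tables[1])

# (2) ANOVA: compare mean outcomes across low/medium/high fidelity groups.
aov = anova_lm(smf.ols("outcome ~ C(fidelity_group)", data=df).fit())
print(aov)
```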
Integrating Child-level Fidelity Scores Into an Outcomes Analysis: An Evaluation of Family Team Meetings
Erin Singer, Human Services Research Institute, esinger@hsri.org
This presentation examines fidelity measurement in a child welfare intervention, Family Team Meetings (FTM). FTM is a method for engaging family members and other people who can support the family in shared case planning and decision making. Seventeen counties in Ohio implemented this intervention over five years, with varying degrees of model fidelity both across and within counties. Evaluators were still able to learn about effective practice by using child-level fidelity scores as an independent variable in a regression analysis to predict child-level outcomes. Outcomes explored include increased permanency, increased reunification, shorter time in placement, and a decrease in subsequent reports of abuse and neglect and/or re-entry into care. An increased understanding of effective FTM practice, as well as of how to integrate fidelity into an outcomes analysis, emerged. An overview of this process and the results will be discussed.
The Predictive Utility of a Child Level Measure of Fidelity: Its Relevance to the Understanding of Outcomes Associated With Enhanced Visitation Practices
Linda Newton-Curtis, Human Services Research Institute, lnewton@hsri.org
For those families who are involved with child welfare and for whom reunification is the case plan goal, it is generally understood that regular visits are critical for maintaining and improving the parent-child relationship. Twelve Ohio counties developed a visitation strategy in which supervised weekly visits, lasting a minimum of one hour, were enhanced with the introduction of a ‘structured activities’ component. This component was expected to provide a mechanism through which visits might become more therapeutic, such that parenting skills could be improved, thus potentially optimizing short- and long-term family outcomes. An overview of the steps taken to measure the fidelity of this strategy at the child level is described, followed by a discussion of the predictive utility of a child-level measure of visit fidelity for assessing outcomes such as exit to reunification, length of stay in out-of-home care, and number of subsequent abuse and neglect reports.
Kinship Support Index: Do More Intensive Programmatic Efforts Result in Better Outcomes for Children?
Kim Firth, Human Services Research Institute, kfirth@hsri.org
This presentation explores use of an index categorizing the degree of kinship support practices in county child welfare agencies participating in ProtectOhio. During this waiver, six counties focused on enhancing kinship placement supports via a practice model including increased staffing, recruitment of kin caregivers, and provision of hard goods and services. Because many of these kinship support efforts were also being utilized in other project counties, evaluators chose to index all participating counties on key kinship support elements, providing a measure of fidelity to the kinship model as well as an assessment of closeness to the model for non-strategy counties. These county-level index rankings were utilized as an additional variable in outcomes analysis for a sample of children in kinship care during the project period. Methodological challenges and outcomes findings will be discussed, providing lessons for the field on kinship support measurement and application of fidelity measures to outcomes analysis.

Session Title: Propensity Score Matching: Further Methodological Development
Multipaper Session 300 to be held in PRESIDIO A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Ning Rui,  Research for Better Schools, rui@rbs.org
Discussant(s):
Frederick Newman,  Florida International University, newmanf@fiu.edu
On the Bandwidth of Propensity Score Caliper Matching
Presenter(s):
Wei Pan, University of Cincinnati, panwi@ucmail.uc.edu
Abstract: Caliper matching is one of the most efficient matching techniques in propensity score (PS) analysis. Researchers have typically used a fixed caliper bandwidth of c = .2 or .25 of the pooled standard deviation of the PS, as suggested by Cochran and Rubin (1973) and Rosenbaum and Rubin (1985), to address the variation of the PS. Unfortunately, the fixed caliper bandwidth can capture only the between-subjects variation of the PS, not the within-subjects variation; yet it is precisely the latter on which PS matching operates. The present study proposes a random caliper bandwidth that utilizes bootstrap confidence intervals of the PS to capture both the between-subjects and within-subjects variation of the PS. For subject i, the random caliper bandwidth c_i is determined by the width of the confidence interval of the bootstrap PS for that subject. The random caliper bandwidth is illustrated and discussed with an empirical example.
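The following sketch illustrates, under stated assumptions, the contrast between a fixed caliper and the proposed random caliper: propensity scores come from a logistic model, the fixed caliper is taken as .25 of the standard deviation of the estimated PS, each subject's random caliper c_i is the width of a percentile bootstrap confidence interval for that subject's PS, and matching is a simple greedy 1:1 nearest-neighbor routine used only for illustration. It should not be read as the author's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_scores(X, z):
    """Estimate propensity scores with a logistic model (X: covariate array, z: 0/1 treatment)."""
    return LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]

def bootstrap_caliper(X, z, n_boot=200, level=0.95, seed=0):
    """Per-subject caliper c_i = width of the bootstrap confidence interval of that subject's PS."""
    rng = np.random.default_rng(seed)
    n = len(z)
    ps_boot = np.empty((n_boot, n))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                      # resample subjects with replacement
        model = LogisticRegression(max_iter=1000).fit(X[idx], z[idx])
        ps_boot[b] = model.predict_proba(X)[:, 1]        # predict for the original subjects
    lo, hi = np.percentile(ps_boot, [(1 - level) / 2 * 100, (1 + level) / 2 * 100], axis=0)
    return hi - lo

def caliper_match(ps, z, caliper):
    """Greedy 1:1 matching: a treated unit is matched to its nearest unused control
    only if the PS distance falls within its caliper (scalar = fixed, array = random)."""
    caliper = np.broadcast_to(np.asarray(caliper, dtype=float), ps.shape)
    unused_controls = set(np.flatnonzero(z == 0))
    pairs = []
    for i in np.flatnonzero(z == 1):
        if not unused_controls:
            break
        j = min(unused_controls, key=lambda c: abs(ps[i] - ps[c]))
        if abs(ps[i] - ps[j]) <= caliper[i]:
            pairs.append((i, j))
            unused_controls.remove(j)
    return pairs

# Usage (illustrative), with X an (n, p) covariate array and z a 0/1 treatment array:
# ps = propensity_scores(X, z)
# fixed_pairs = caliper_match(ps, z, 0.25 * ps.std())            # conventional fixed caliper
# random_pairs = caliper_match(ps, z, bootstrap_caliper(X, z))   # proposed random caliper
```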

Session Title: Improving Medical and Prevention Services Through Continuous Evaluation and Organizational Learning
Multipaper Session 301 to be held in PRESIDIO B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Health Evaluation TIG
Chair(s):
John Bosma,  WestEd, jbosma@wested.org
Discussant(s):
Gayle Sulik,  Texas Woman's University, gsulik@twu.edu
Forming a Strategic Alliance: The Use of Collaboration Theory to Evaluate and Improve Nurses’ Safe Medication Administration in Massachusetts’ Nursing Homes
Presenter(s):
Teresa Anderson, University of Massachusetts, terri.anderson@umassmed.edu
Rebecca Gajda Woodland, University of Massachusetts, Amherst, rebecca.gajda@educ.umass.edu
Michael Hutton, , mshutton2001@yahoo.com
Carol Silveira, Massachusetts Board of Registration in Nursing, carol.silveira@.state.ma.us
Abstract: The Massachusetts Board of Registration in Nursing engaged the University of Massachusetts Amherst and Worcester to develop and evaluate the MA Patient Safety Initiative, funded by the National Council of State Boards of Nursing. The Initiative’s unique feature is our use of Gajda’s evidence-based collaboration evaluation and improvement framework (Gajda, 2004; Gajda & Koliba, 2007) to organize, evaluate, and develop a strategic alliance of state and federal regulators and other state agencies and professional organizations relevant to nursing home care. The use of the evaluation framework has led to an unprecedented statewide assessment of barriers to medication error reporting and of patient safety culture in MA nursing homes. As a result of our ongoing evaluation and development of collaboration, the MA Patient Safety Initiative will produce a non-punitive, education-oriented, nursing home-based curriculum for practice resolution of certain types of medication administration errors.
Arizona's Quest for Quality: Improving Prevention Services Through Evaluation and Capacity Building
Presenter(s):
Holly Lewis, Arizona State University, htlewis@asu.edu
Aimee Sitzler, Arizona State University, aimeesitzler@gmail.com
Abstract: Funding agencies, including local, state, and federal organizations, are putting increased pressure on prevention providers to demonstrate quality and effectiveness through evaluation and the implementation of evidence-based programs. As a result, providers all over the country are conducting needs assessments, building their capacity to offer prevention services, and measuring the outcomes of their programs. This paper examines Arizona’s approach to conducting a statewide assessment of behavioral health programs and describes the comprehensive process employed to increase prevention quality and help providers meet federal evidence-based guidelines. Evaluators share barriers encountered during the evaluation process as well as methods that proved successful, and explain ways to incorporate cultural competence into each aspect of evaluation.
Evaluation Learning Cycle: Applying Evaluation Lessons to Multi-year Long-Term Care Training and Organizational Development Initiatives
Presenter(s):
Marcia Mayfield, Paraprofessional Healthcare Institute (PHI), mmayfield@phinational.org
Malika Gujrati, Paraprofessional Healthcare Institute (PHI), mgujrati@phinational.org
Ines Escandon, Paraprofessional Healthcare Institute (PHI), iescandon@phinational.org
Abstract: PHI works with eldercare and disability services providers to improve the quality of jobs and the quality of care in home and residential settings. PHI’s learning cycle mirrors a continuous quality improvement cycle, with each step corresponding to a role for evaluation: plan (assessing strengths, challenges, and baseline status); implement (process evaluation); study (outcome evaluation) and act (helping to interpret lessons and outcomes in a way that informs program improvement). Two multi-year projects with long-term care nursing homes and home health agencies illustrate the value of evaluation at each step, with emphasis on the use of process and outcome data during staged implementation to inform both program design and evaluation design modifications. The initiatives provided employers with training in supervision and coaching skills, peer mentoring, and communications, as well as technical assistance in strategies to institutionalize behavior change that is critical to advance the quality of care.

Session Title: Outcome Assessment in Substance Abuse and Mental Health
Multipaper Session 302 to be held in PRESIDIO C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Diana Seybolt,  University of Maryland, Baltimore, dseybolt@psych.maryland.edu
Importance of Substance Abuse Treatment Outcome Proxies
Presenter(s):
Lawrence Greenfield, Lawrence Greenfield Consulting, lg439@aol.com
Douglas Fountain, Outpourings, doug@outpourings.net
Abstract: Outcome proxies in substance abuse treatment evaluation are variables that accurately predict post-treatment outcomes. A key benefit of outcome proxies is that they may provide early indications of program success or failure. They are also less costly to collect than post-treatment outcome data. Using secondary analysis of data from the National Treatment Improvement Evaluation Study (NTIES), three variables were assessed for their accuracy in predicting post-treatment outcomes. Of the three variables assessed (program satisfaction, program completion, and length of stay in treatment), only the first two were found to be useful as predictors of treatment outcomes. This analysis was supported by DHHS, Caliber Associates Contract No. 270-97-2016. An earlier version of this analysis was reported in Feidler, K., Screen, M.A., Greenfield, L., and Fountain, D., Analysis of Three Outcome Proxies for Post-Treatment Substance Use in NTIES (July 2001).
Effective Indicators for Integrated Care in Behavioral Health Settings: An Evaluation of the Practices of Georgia's Community Service Boards
Presenter(s):
Michael Hammer, University of Georgia, hammer@paxhammericana.com
Abstract: This paper will evaluate current integrated care data collection practices and their outcomes in the state of Georgia's publicly funded, community-based mental health centers (Community Service Boards, or CSBs). Data instruments include a survey of all of the CSBs as well as interviews with agency CEOs, medical directors, and lead nurses. The purpose of this study is to determine the current practices of these facilities, the divergence between practice and theory, and recommendations for improved integrated care practices in behavioral health settings.
The Perfect Couple: Clinical Quality and Program Outcomes- Using Data to Improve Clinical Practices
Presenter(s):
Cathie McLean, Mental Health Center of Denver, cathie.mclean@mhcd.org
Pablo Olmos-Gallo, Mental Health Center of Denver, antonio.olmos@mhcd.org
Christopher McKinney, Mental Health Center of Denver, christopher.mckinney@mhcd.org
Abstract: The best place for outcomes reporting is near the end user, so that the data can be employed to positively influence the quality and efficacy of the clinical practice it is assessing. We are working to increase the value of data collection and reporting within the clinical workflow to make it a meaningful process to the end users. Outcomes have been integrated into the consumer’s medical record, our internal Peer Review process, and Services Utilization Management. Practical application of the data is thereby increased by making them easily accessible, within an information environment familiar to staff, and in the context of clinical quality review. Clinicians can quickly assess effectiveness of services, discuss it with consumers, and make modifications to best meet consumer needs. We are also working to make outcomes data available to the consumer through a secure on-line portal for improving engagement in recovery services.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Small Foundations With Big Learning Agenda: A Case of Using Analysis of Past Grant Making to Support Future Organizational Learning
Roundtable Presentation 303 to be held in BONHAM A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
William Bickel, University of Pittsburgh, bickel@pitt.edu
Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu
Julie Meredith, University of Pittsburgh, julie.meredith@gmail.com
Abstract: Foundations have several core avenues through which they can contribute to social good. Beyond direct grant making, they can also learn systematically from past work to inform and strengthen future grant making and to support the building of field knowledge. Large foundations make sizable investments in sophisticated knowledge capture and evaluation infrastructure and processes in this regard. But what is feasible in more modestly sized foundations? The authors present a case study of a regional foundation that commissioned a university-based evaluation group to undertake a retrospective review of selected grants to better understand grantee evaluability broadly, to gauge grantee capacities to document performance, and to identify ways the foundation can better support learning from its grant making going forward. The low-cost review yielded a number of specific recommendations regarding modifications in foundation outcome targets for grantees, redesigns of foundation infrastructure to support learning, and actions relevant to building grantee capacities to document their results over the long term.
Roundtable Rotation II: Challenges in Developing Multi-level Logic Models
Roundtable Presentation 303 to be held in BONHAM A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Barbara Wauchope, University of New Hampshire, barb.wauchope@unh.edu
Curt Grimm, University of New Hampshire, curt.grimm@unh.edu
Abstract: A multi-level logic model describing a foundation’s grant making initiative and the projects of its grantees is a useful tool for guiding the evaluation of activities, outputs, and outcomes of both the initiative and its projects. When such models work well, the initiative model and individual project models link together logically to describe the contribution of each grantee’s project to the initiative overall. In actual work with foundations, however, we have found that multi-level model development is not always as successful a process as we would like it to be. This paper will describe the challenges faced by the evaluators of a current five-year regional initiative in the development of a logic model that works for both the foundation and its grantees. The evaluators will invite a discussion of the factors involved and of strategies that could make the process easier, with better results for all.

Session Title: Insights Into Foundation Evaluation
Multipaper Session 304 to be held in BONHAM B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Ellie Buteau,  Center for Effective Philanthropy, ellieb@effectivephilanthropy.org
The Assessment Challenge for Foundations: Understanding Impact
Presenter(s):
Ellie Buteau, Center for Effective Philanthropy, ellieb@effectivephilanthropy.org
Andrea Brock, Center for Effective Philanthropy, andreab@effectivephilanthropy.org
Abstract: The quality of an evaluation for a funder can only be as good as the clarity of the funder’s goals, the fit of its strategy to those goals, and the extent to which information is being collected about relevant performance indicators. Survey data about foundation goals, strategy, and assessment were collected by the Center for Effective Philanthropy during the fall of 2008 from 102 foundation CEOs and 89 program staff. Findings showed that foundation leaders are overwhelmingly positive in their belief that they are effective at creating impact, yet few are rooting those beliefs in relevant performance indicators. Without solid data, on what basis can foundation leaders feel confident that they are being successful in their work? This paper will share findings about the challenges CEOs and program staff face in evaluating their work and examples of funders who are working to improve their evaluation practices.
Benchmarking Foundation-level Evaluation: What are the Best Practices?
Presenter(s):
Jonathan Sachs, Canadian Health Services Research Foundation, jonathan.sachs@chsrf.ca
Kaye Phillips, Canadian Health Services Research Foundation, kayephillips79@gmail.com
Werner Muller-Clemm, Canadian Health Services Research Foundation, werner.mullerclemm@chsrf.ca
Abstract: In Canada and the United States, few research foundations have addressed the complex challenge of foundation- or organizational-level evaluation (Buteau, 2009). In order to provide a synthesized understanding of its impact as an organization, the Canadian Health Services Research Foundation (CHSRF) has begun to implement its own evaluation strategy. To ensure quality, the first step has been to identify how other foundations have approached this type of evaluation. This paper presents findings from a research study that CHSRF conducted to identify how comparable organizations approach foundation-level evaluation. A purposive sampling strategy was used to identify key informants at approximately 15 foundations in Canada and abroad. From the resulting interviews, CHSRF was able to identify applicable methods and metrics that will be used to inform its own evaluation efforts. Buteau, E., Buchanan, P., & Brock, A. (2009). Essentials of Foundation Strategy. Center for Effective Philanthropy.
Foundation-Level Evaluation Approaches: Lessons Learned About Quality in Practice
Presenter(s):
Kaye Phillips, Canadian Health Services Research Foundation, kayephillips79@gmail.com
Kathryn Graham, Alberta Innovates-Health Solutions, kathryn.graham@albertainnovates.ca
Jill Yegian, California HealthCare Foundation, jyegian@chcf.org
Werner Muller-Clemm, Canadian Health Services Research Foundation, werner.mullerclemm@chsrf.ca
Abstract: In the past five years, foundations have been transitioning from program-level evaluation to broader foundation- or organizational-level evaluation approaches (Foundation Strategy Group, 2007). Foundation-level evaluation is a complex process for understanding the cumulative and integrated results and value of an organization’s programs and strategies (Putnam, 2004). To ensure evaluation quality, there is an emerging need to identify and practically assess various design, implementation, and analysis issues related to adopting foundation-level evaluation approaches (Buteau, 2009). This paper examines theoretical and practical quality issues related to foundation-level evaluation through a case study of three comparable health foundations. In this paper the Canadian Health Services Research Foundation, Alberta Innovates – Health Solutions, and the California HealthCare Foundation collaborate to present lessons learned about processes for capturing, monitoring, and reporting foundation-level outcomes and impacts. Issues related to data quality, validation, attribution/contribution, and the experimental design required for the counterfactual are highlighted.

Session Title: Current Topics in Educational Evaluation: An Eclectic Set of Noteworthy Projects
Multipaper Session 305 to be held in BONHAM C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
James S Sass,  Research Support Services, jimsass@earthlink.net
Using A Mixed Methods Design to Conduct an Evaluation of International Baccalaureate Programs in Texas Schools
Presenter(s):
Jacqueline Stillisano, Texas A&M University, jstillisano@tamu.edu
Hersh Waxman, Texas A&M University, hwaxman@tamu.edu
Yuan-Hsuan Lee, Texas A&M University, jasviwl@neo.tamu.edu
Judy Hostrup, Texas A&M University, jhostrup@usa.net
Beverly Alford, Texas A&M University, alfordb@tamu.edu
Kayla Braziel Rollins, Texas A&M University, kaylarollins@gmail.com
Abstract: This paper reports on an evaluation study commissioned to examine the impact of International Baccalaureate Primary Years Programs (PYP) and Middle Years Programs (MYP) on mathematics and reading achievement of students in Texas schools. Using a mixed-methods design, the evaluation team examined the factors that contributed to the performance of PYP and MYP students on Texas achievement exams and how those factors differentially influenced reading and mathematics achievement of students of varying demographic profiles. Quantitative data from the Texas Assessment of Knowledge and Skills were analyzed, comparing IB schools to similar schools in the same district. In addition, in-depth case studies were conducted with eight IB Texas schools (four PYP and four MYP). These case studies provided a comprehensive picture of experiences, challenges, and opportunities related to planning and implementing an IB program and allowed evaluators to address qualitative outcomes that are often difficult to measure through quantitative methodology alone.
Partners of Education: Evaluation as a Step Towards Quality of Education
Presenter(s):
Ligia Elliot, Cesgranrio Foundation, ligiaelliot@yahoo.com.br
Lucia Favero, Partners Association of Education, lucia.favero@parceirosdaeducacao.org.br
Monica Guerra, Partners Association of Education, monica.guerra@parceirosdaeducacao.org.br
Abstract: In Brazil, the Partners of Education program aims to contribute to improving student achievement; it promotes its actions by means of partnerships between executive directors and institutions that decide to participate and their sponsored public schools. Students from Partner Schools are systematically evaluated, and their results guide pedagogical actions and activities planned to aid teachers and their students’ classes. In 2007 and 2009, Cesgranrio Foundation was in charge of evaluating students from different Partner Schools. Tests on curriculum subjects for the eight grades of fundamental education and the three grades of middle school were administered in April and November. This paper presents some highlights of the 2009 evaluation process by comparing results obtained on specific items that were included in both the first and second tests.
Alignment and Synthesis: Efforts to Improve the Quality of Parent Engagement Evaluation in a Large Urban School District
Presenter(s):
Wenhui Yuan, Fort Worth Independent School District, hugh.yuan@fwisd.org
Susan M Wolfe, Susan Wolfe and Associates LLC, susan.wolfe@susanwolfeandassociates.net
Abstract: How to improve the quality of an evaluation in a complex context is a concern for evaluators. This study presents the evaluators’ efforts and strategies for improving evaluation quality for a parent engagement program in a large urban school district. Three aspects will be discussed: (1) aligning the evaluation designs, (2) coordinating the evaluation implementation, and (3) synthesizing the information. Further, the authors will examine constraints on conducting an evaluation in the context of a large urban school district. The discussion will be framed in the context of the Joint Committee’s Program Evaluation Standards.
Profiles of Advocacy: Narrative Portrayals of School Superintendents’ Educational Practice and Social Action
Presenter(s):
Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
Cara Ciminillo, University of Pittsburgh, ciminill@pitt.edu
Abstract: In this presentation, we exhibit and examine narrative evaluation methods. While naturalistic inquiry attends to theory-building based on practice, narrative provides a compelling and accessible means of telling a program’s story. Thus, narrative evaluation builds program theory and provides a means of communication. The Forum for Western Pennsylvania School Superintendents provides professionally relevant strategies to apprehend the complexity of issues facing the field of education and to alleviate the sense of isolation that often accompanies the position of superintendent. Building upon previous evaluation activities that focused on naming and further developing its program theory, we use narrative evaluation both to speak formatively to the Forum and to help it speak publicly as an advocate for children and youth. Thus, narrative evaluation holds promise for expanding evaluation utilization; not only do programs being evaluated draw benefit, but other programs might also benefit from both the product and process of such evaluation methods.

Session Title: Methods and Models for Evaluating Pre-Kindergarten and School Readiness Programs
Multipaper Session 306 to be held in BONHAM D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Katie Dahlke,  Learning Point Associates, katie.dahlke@learningpt.org
Discussant(s):
James P Van Haneghan,  University of South Alabama, jvanhane@usouthal.edu
Maintaining Validity in School Readiness Evaluation: A Multi-dimensional Approach in Methodology
Presenter(s):
Summerlynn Anderson, Walter R McDonald and Associates Inc, sanderson@wrma.com
Gary Resnick, Harder+Company, gresnick@harderco.com
Fred Molitor, Walter R McDonald and Associates Inc, fmolitor@wrma.com
Julie Field, First 5 Sacramento, fieldj@saccounty.net
Abstract: “School readiness” programs aim to prepare children for kindergarten. The School Readiness program funded with tobacco taxes by First 5 Sacramento includes a broad range of services for school staff, parents, and children. The evaluation of these services requires a complex methodology and multiple instruments. Evaluators coordinated and monitored data collection activities through school staff, who had direct access to the target populations. During the school year, 82,050 services were provided to 2,894 families. Changes in parenting practices (e.g., reading to children) were assessed by a pre/post survey of randomly selected parents. A standardized cognitive assessment was administered to children by school staff, and school staff also assessed children’s social-emotional development. A fourth instrument was completed by teachers and providers to assess activities and curricula. The evaluation required comprehensive training of school staff and weekly site visits to review recruitment. Services were found to be related to outcomes after controlling for family characteristics.
Getting Ready for School: Monitoring a Universal Prekindergarten (UPK) Pilot Program in an Urban Setting
Presenter(s):
Rob Fischer, Case Western Reserve University, fischer@case.edu
Lance Peterson, Case Western Reserve University, lance.peterson@case.edu
Nina Lalich, Case Western Reserve University, nina.lalich@case.edu
Claudia Coulton, Case Western Reserve University, claudia.coulton@case.edu
Abstract: This paper reports on a study of children in a pilot universal prekindergarten program at 24 sites in the Cleveland, OH area. Observational and parent survey data were collected on a sample of 204 children selected from early care classrooms for 3- to 5-year-olds. Data were collected by trained observers using two standardized instruments: the Woodcock-Johnson Letter-Word and Applied Problems subtests and the Peabody Picture Vocabulary Test. Data were collected across three time points and linked to school readiness ratings provided by kindergarten teachers in public school settings. The findings speak to the developmental trajectory of children as they approach kindergarten and how that pattern may be affected by the quality of the early care setting from which they emerge. The paper also addresses how to use quality data to inform practice and policy.
Crossing Borders: Evaluation of a Bi-lingual, Bi-cultural and Bi-national Kindergarten Readiness Program
Presenter(s):
Sharon DeJoy, State University of New York College at Potsdam, dejoysl@potsdam.edu
Tina Runkles, State University of New York College at Potsdam, runkletm190@potsdam.edu
Stephanie Hawkins, State University of New York College at Potsdam, hawkin78@potsdam.edu
Abstract: Families Can is a three-year home visiting pilot project conducted by the Tri-County Literacy Council in Cornwall, Ontario, with support from the Ontario Ministry of Health. The project uses the Parents as Teachers model to improve health and school readiness for at-risk children. The project area encompasses three counties around Cornwall, as well as the Akwesasne Mohawk Reservation, spanning Canadian and American (New York) territory. In the last year of the project, Families Can conducted an evaluation of school readiness among preschoolers served by the program. Three-year-old program participants and a comparison group, matched for age, race, income, and school of attendance, were evaluated using the BRIGANCE Preschool Screening tool. Not only does the paper present the findings of the evaluation, it also identifies lessons learned from conducting an evaluation with bilingual (French and English) white and indigenous families on both sides of the US/Canadian border.
Evaluating the Longitudinal Impact of Early Childhood Professional Development Programs on K-3 Success
Presenter(s):
Raymond Hart, Georgia State University, rhart@gsu.edu
Gary Bingham, Georgia State University, gbingham@gsu.edu
Nicole Patton-Terry, Georgia State University, npterry@gsu.edu
Abstract: Studies show that children entering kindergarten or first grade with early literacy deficits are at risk for academic difficulties (Cunningham & Stanovich, 1998; Juel, 1988; Scarborough, 2001). To ensure school success, children with limited emerging literacy skills must be provided with intense and appropriate early literacy instruction (Bowman, Donovan, & Burns, 2001; Simmons et al., 2003; Snow et al., 1998). Few studies to date examine the longitudinal benefits of preschool programs on children’s language skills after the completion of such programs. The findings from this study demonstrate that children participating in three different preschool teacher professional development intervention projects across the country made significant gains in their language and literacy knowledge and significantly closed the achievement gap present at the beginning of preschool. Implications for evaluators studying the long-term effects of preschool education and methods for evaluating students’ learning, teacher professional development, and long-term school success are discussed.

Session Title: Grappling With Uncertainty in Innovative and Complex Settings: Weaving Quality in Developmental Evaluation
Panel Session 307 to be held in BONHAM E on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Systems in Evaluation TIG and the Indigenous Peoples in Evaluation TIG
Chair(s):
Syd King, New Zealand Qualifications Authority, syd.king@nzqa.govt.nz
Discussant(s):
Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: They say that in life only three things are certain: birth, death, and taxes. We say that in Developmental Evaluation, the only certainty is uncertainty. In this session we present the challenges associated with ensuring evaluation quality in innovative and complex situations. Through a Developmental Evaluation lens we explore the process of moving from a conceptual vision to the on-the-ground practice of evaluation in emergent and dynamic contexts. We reflect on what it takes to systematically weave quality into engagement processes, data collection, and evaluative thinking in settings where uncertainty reigns.
Navigating Uncertainty: The Cross-site Evaluation of the Supporting Evidence-based Home Visiting Grantee Cluster
Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
Diane Paulsell, Mathematica Policy Research, dpaulsell@mathematica-mpr.com
Kimberly Boller, Mathematica Policy Research, kboller@mathematica-mpr.com
Deborah Daro, Chapin Hall, ddaro@chapinhall.org
Debra Strong, Mathematica Policy Research, dstrong@mathematica-mpr.com
Heather Zaveri, Mathematica Policy Research, hzaveri@mathematica-mpr.com
Heather Koball, Mathematica Policy Research, hkoball@mathematica-mpr.com
Patricia Del Grosso, Mathematica Policy Research, pdelgrosso@mathematica-mpr.com
Russell Cole, Mathematica Policy Research, rcole@mathematica-mpr.com
In 2008, the Children’s Bureau (CB) within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services funded 17 cooperative agreements to support the infrastructure needed for the high-quality implementation of existing evidence-based home visiting (EBHV) programs to prevent child maltreatment. The cross-site evaluation encompassed four domains: systems change, fidelity, child and family outcomes, and cost. Recognizing that grantees were operating in complex, dynamic, and unpredictable environments, the systems change domain used a Developmental Evaluation design that was responsive to changes in the initiatives and in their environments, including recession-related budget cuts and the unexpected drastic reduction in the grant and evaluation funding in its second year of operation. This paper reviews the evaluation’s developmental design and findings, including lessons learned by grantees about how to continue building infrastructure capacities and partnerships to support home visiting in tumultuous times.
Drawing on Deep Values to Ensure Evaluation Quality in Emergent and Uncertain Contexts
Kate McKegg, The Knowledge Institute, kate.mckegg@xtra.co.nz
As a visionary project has moved from the initial conceptual vision to on-the-ground implementation and development, the evaluators have been seriously challenged to ‘keep up’ and stay responsive to changing program needs. One of the emergent learnings from the developmental evaluation trenches has been that the quality and credibility of the evaluation process have been critically dependent on the evaluators’ ability to draw on the values, needs, strengths, and aspirations of the indigenous communities with whom we work to define what is meant by ‘good program content/design,’ ‘high quality implementation/delivery,’ and ‘valuable outcomes.’ This paper will discuss how the evaluators are learning to systematically build communities’ own definitions of ‘quality’ and ‘value’ into the evaluative process.
Talking Past Each Other: The Language of the Developmental Evaluator in Indigenous Contexts and Its Link to Quality
Nan Wehipeihana, Research Evaluation Consultancy Limited, nanw@clear.net.nz
“It’s the damn English” (language) is a phrase one of my colleagues uses when she can’t find the English phrasing or terminology to explain a cultural concept, practice, or idea. Using a developmental evaluation being conducted with tribal and community-based sport and recreation providers, this paper focuses on the language of evaluation as an essential precursor to quality in evaluation. Why language matters in evaluation is never more obvious than when we are ‘talking past each other’ (Metge & Laing, 1984), vague looks come back at us, and questions from the floor or in emails make apparent the lack of connection. The paper provides examples of evaluation language that worked and didn’t work in an indigenous developmental evaluation context, and reflects on the impact of language on engagement in the evaluation, shared understanding, evaluation practice, and ultimately evaluation quality.
What Does Quality Look Like in Developmental Evaluation in Indigenous Contexts?
Kataraina Pipi, Independent Consultant, kpipi@xtra.co.nz
Six months into a Developmental Evaluation with indigenous and non-indigenous providers of sport and recreation services, a key reflection has been the need and opportunity to use, within the evaluation, examples that emanate from a Maori world view and are grounded in the lived experience of what it means to be Maori. Within this context, ‘quality’ is beginning to be conceived of as an ‘as Maori’ process, guided by cultural principles, values, and practices. In this paper, we share our initial exploration of what this means for what we do, how we do it, and what we prioritize in a Developmental Evaluation in indigenous contexts.

Session Title: Tips From the Trenches: The Role of the Evaluator in Designing a Quality Evaluation
Panel Session 308 to be held in Texas A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Stanley Capela, HeartShare Human Services of New York, stan.capela@heartshare.org
Discussant(s):
Amy Germuth, EvalWorks LLC, agermuth@mindspring.com
Abstract: The session will focus on how three program evaluators from the government and non-profit sectors developed an approach to designing a quality evaluation. It will provide participants with a variety of tips and techniques on how to facilitate the process in a way that engages stakeholders and ensures the evaluation provides meaningful information to assist in strengthening program services. As part of the discussion, presenters will also discuss barriers they confronted and how they approached these issues to further ensure the quality of the evaluation.
Quality Perspectives in the Evaluation of a K-12 English Language Proficiency Program
Stephan Henry, REASolutions LLC, shenry@reasolutions.net
This presentation describes the application of the Joint Committee’s Program Evaluation Standards as a tool in planning and metaevaluating an urban school district ELL program evaluation in terms of four quality elements: Propriety, Utility, Feasibility, and Accuracy. The design and implementation of the evaluation in examining the quality of the English Language Proficiency Program will also be discussed. A challenge to utilization of the evaluation findings arose when the evaluator and key managers and administrators for the program resigned shortly after the evaluation was completed. Strategies implemented to maximize utilization of the findings, before and after the resignations, will be reviewed.
Real Time Peer Reviews: An Efficient Way to Ensure Evaluation Quality
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
Many performance audit and evaluation organizations working at any level of government undergo external quality control reviews, or peer reviews. These peer reviews are conducted once every 3-5 years by a team of several senior auditors/evaluators from peer organizations. If done correctly, peer reviews are helpful in improving the internal processes and procedures of an audit or evaluation organization. When I first learned about these peer reviews 22 years ago, I thought it would be nice if they were done before a project was completed. Recently, as the director of my evaluation office, I decided to experiment by modifying the concept of the traditional peer review slightly: we employ peer reviews in real time as we carry out each project from start to finish. These real time peer reviews have been extremely useful for us in ensuring the quality of our evaluation projects. Furthermore, I have found them to be very cost effective. I will provide a how-to guide for using real time peer reviews.
Quality Evaluation Template: How to Develop a Utilization Focused Evaluation System Incorporating Quality Improvement and Quality Assurance Systems
Stanley Capela, HeartShare Human Services of New York, stan.capela@heartshare.org
HeartShare Human Services provides a variety of developmental disabilities services. As a result, the funder places a great deal of emphasis on quality assurance and compliance. At the same time, senior management understands that organizations that strive to achieve a higher standard must incorporate a quality improvement system. This session will provide a template for how an evaluator can incorporate the ideals of quality assurance and quality improvement to ensure that a utilization-focused evaluation approach meets the needs of senior management and, at the same time, strengthens program services to meet the needs of the individuals served by the programs.

Session Title: The Future of Knowledge Production and Dissemination in Evaluation
Panel Session 309 to be held in Texas B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Chair(s):
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
Discussant(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Abstract: In this panel, we intend to address this broad set of developments and their implications for how we think about the production, organization, dissemination, appraisal, and use of knowledge about evaluation theory and practice. Four panelists have been asked to each address the following broad set of questions: (1) What are the most significant challenges to refereed, scholarly journals posed by new modes of organizing knowledge production and dissemination? (2) How do globalism and global communications technologies affect knowledge production and dissemination? (3) What role do truth, trust, and expertise have to play in the creation and dissemination of knowledge through Internet sources? (4) How is knowledge-based expertise developed in the field, who are one’s ‘peers’, and what constitutes appropriate professional training? (5) What is the future role of the university (and the traditional academic disciplines and professional schools) in the knowledge society?
Knowledge Production and Dissemination in Evaluation
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Thomas Schwandt will consider the questions as a long-standing evaluator who has worked in many contexts and is currently the Editor of the American Journal of Evaluation.
Knowledge Production and Dissemination in Evaluation
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Laura Leviton will consider the questions as a long-standing evaluator who is currently Special Adviser for Evaluation at the Robert Wood Johnson Foundation.
Knowledge Production and Dissemination in Evaluation
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
Sandra Mathison will consider the questions as a long-standing evaluator who has worked in many contexts and is currently the Editor-in-Chief of New Directions for Evaluation.

Session Title: Using Mixed Methods to Expand Frameworks for Program Evaluation
Multipaper Session 310 to be held in Texas C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Chair(s):
Rita Fierro, Independent Consultant, fierro.evaluation@gmail.com
Discussant(s):
Virginia Dick,  University of Georgia, vdick@cviog.uga.edu
Unearthing Hidden Contexts in Evaluation Work: Using Brooks 5-Paths Analysis
Presenter(s):
Pauline Brooks, Independent Consultant, pbrooks_3@hotmail.com
Abstract: Under some circumstances, it is important for evaluators to access broader information concerning the context of programs. Circumstances likely to benefit from such broader contextual information include situations where the stakes are high, the problems being addressed are longstanding or deeply rooted, there is potential for high impact, or the program is to be continued for another full round of funding or scaled up, among others. Increased understanding of the context can influence the evaluation focus, assumptions, methods, scope, processes, and eventually the evaluation findings and interpretations. This paper introduces an approach (Brooks 5-Paths Analysis) that evaluators and others can use in planning and collecting data concerning context. The approach focuses on five aspects of social context: history, laterality, accumulations, power dynamics, and resistance to change. Singularly and in combination, data concerning these five aspects of context can often refine the accuracy, relevance, and meaningfulness of evaluations.
A Multi Method Approach for Assessing Fidelity to an Evidence-based Child Neglect Prevention Program
Presenter(s):
Jill Filene, James Bell Associates, filene@jbassoc.com
Lauren Kass, James Bell Associates, kass@jbassoc.com
Elliott Smith, Cornell University, egs1@cornell.edu
Abstract: Although there has been a growing emphasis on measuring fidelity to evidence-based programs, implementation evaluations tend to focus on quantitative elements of fidelity related to adherence and exposure (i.e., dosage). This paper will describe the process for developing a multi-site assessment of fidelity for an evaluation of the replication of a child neglect prevention program. To strengthen the quality of the findings, fidelity was assessed using a mixed-method approach to collecting qualitative and quantitative data from multiple data sources. In order to allow for the examination of the impact of fidelity on outcomes, a quantitative framework was developed that blends qualitative and quantitative fidelity data. Select findings will be presented about sites’ ability to implement the core components of the program, as well as the impact fidelity had on modifying family-level factors associated with child neglect. Lessons learned regarding developing fidelity assessment methods and applying them will be discussed.
Compliance is Improving, Now What? Using the Guskey Model in a Mixed Method Evaluation to Measure the Impact and Effectiveness of a National Technical Assistance Center
Presenter(s):
Paula Kohler, Western Michigan University, paula.kohler@wmich.edu
June Gothberg, Western Michigan University, june.gothberg@wmich.edu
Abstract: This paper describes the evaluation of the first five years of a national technical assistance center. The center was charged with helping states build capacity to support and improve transition planning, services, and outcomes for youth with disabilities, and with disseminating information and providing technical assistance on scientifically based research practices, with an emphasis on building and sustaining state-level infrastructures of support and district-level demonstrations of effective transition methods for youth with disabilities. To determine the effectiveness of the center, a complex, multi-site, multi-level, mixed-method evaluation was conducted using federal compliance data (quantitative) and Guskey’s model for evaluating professional development (mixed methods). Findings indicate effective implementation of the capacity-building model used by the center and provide a method for evaluation that may be generalized to other complex technical assistance projects.

Session Title: Third Generation Research Knowledge Tracking: Citation Analyses
Demonstration Session 311 to be held in Texas D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Alan Porter, Georgia Institute of Technology, alan.porter@isye.gatech.edu
Stephen Carley, Georgia Institute of Technology, stephen.carley@innovate.gatech.edu
Abstract: Tracking the “citation trails” of research publications provides the strongest empirical evidence of research influence. This workshop demonstrates how desktop bibliometric/text mining software tools can facilitate analyses of Web of Science (WOS) data for three generations of data. We start at the “second generation” – research publications reflecting a body of research (e.g., papers deriving from a particular program or the work of a given research center). From these data, we process the cited references to extract author information and to derive subject category information, thus providing “first generation” data. Via new, openly available i-macros, we separately capture the citing paper abstracts from WOS – the “third generation” data. We then consolidate the data and prepare research profiles for each generation. We identify and visualize the generation-spanning networks. Science overlay and science citation maps further elucidate the transfer of knowledge among researchers, and across disciplines, institutions, and countries.
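As a rough illustration of the three-generation structure described above, the sketch below assembles a generation-spanning citation network from hypothetical records in which each "second generation" paper carries its cited references (first generation) and the identifiers of papers citing it (third generation). The record fields are assumptions for this example only; they are not the Web of Science export format or the bibliometric software demonstrated in the session.

```python
# Illustrative sketch only: field names ("id", "cited_refs", "citing_ids") are invented.
import networkx as nx

def build_generation_network(second_gen_records):
    """Build a directed citation network spanning three generations.
    Each record is a dict: {'id': ..., 'cited_refs': [...], 'citing_ids': [...]}.
    A node added more than once keeps the last generation label assigned to it."""
    g = nx.DiGraph()
    for rec in second_gen_records:
        g.add_node(rec["id"], generation=2)
        for ref in rec.get("cited_refs", []):       # first generation: what the paper cites
            g.add_node(ref, generation=1)
            g.add_edge(rec["id"], ref)
        for citer in rec.get("citing_ids", []):     # third generation: papers citing it
            g.add_node(citer, generation=3)
            g.add_edge(citer, rec["id"])
    return g

# Usage (illustrative): profile each generation and look for generation-spanning hubs.
# g = build_generation_network(records)
# counts = {gen: sum(1 for _, d in g.nodes(data=True) if d["generation"] == gen) for gen in (1, 2, 3)}
```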

Session Title: The Cycle of Evidence-based Policy and Practice: Synthesis, Translation, and Evaluation Capacity Building
Panel Session 312 to be held in Texas E on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Evaluation Use TIG, the Organizational Learning and Evaluation Capacity Building TIG, the Alcohol, Drug Abuse, and Mental Health TIG, and the Health Evaluation TIG
Chair(s):
Susan Labin, Independent Consultant, susan@susanlabin.com
Abstract: This panel will include presentations on the major components in the cycle of developing evidence-based policy and practice. The need for synthesis for policy will be explored in terms of contributions and roles of evaluation and performance measurement. The synthesis method, the core methodology to aggregate findings for evidence-based practice reviews, will be defined and various types of syntheses will be compared. Using results from synthesis is the impetus for the Centers for Disease Control and Prevention’s system for synthesis and translational research, which will be presented by those involved in developing the system. The translational process is further explored in a presentation of the Service to Science Academies (supported by the Substance Abuse and Mental Health Services Administration and the Center for Substance Abuse Prevention), an example of evaluation capacity building to increase the utilization of evidenced-based findings in the field and to bring practice into the evidence base.
Importance and Usage of Synthesis in Public Policy: Implications for Evaluation and Performance Measurement
Joseph Wholey, University of Southern California, joewholey@aol.com
A critically important use of evaluation findings is to improve public policies and programs. By summarizing what is known about the effectiveness of a public policy or program, evaluation synthesis produces evaluation findings and helps focus future programs and evaluations. Future evaluation may be accomplished either through ongoing monitoring of the extent to which outcomes meet policy or program goals or through subsequent evaluation studies that build on and extend the synthesis findings. The presenter will draw on his nationally recognized expertise in developing performance indicators and evaluations. The presentation will explore how evaluators can encourage and support the use of evaluation findings to improve policies and programs by offering recommendations, suggestions, or options for: 1) redesigning management systems to focus on results; 2) creating incentive systems focused on results; 3) developing key indicator systems at the national, state, or community level; or 4) creating performance partnerships.
Research Synthesis: The Core Methodology in Evidence-based Reviews
Susan Labin, Independent Consultant, susan@susanlabin.com
Evidence-based reviews are an essential step in the cycle of utilizing evaluation research to improve policy and practice. The core methodology of evidence-based reviews derives from research syntheses that aggregate findings from primary research. Given the attention to and importance of evidence-based reviews, it is valuable to be aware of the principles of the underlying method. Experience from developing and using synthesis methods will inform the presentation, beginning with the history and types of broad-based syntheses. Broad-based syntheses are currently being used in the evidence-based review processes of federal agencies and are compatible with AEA’s Roadmap to OMB. Controversies regarding broad-based syntheses as opposed to meta-analysis, which usually restricts the evidence included to randomized control trials, will be discussed. A variety of types of reviews (such as retrospective and prospective) and their potential usages will be compared. Recommendations will be offered for advancing synthesis methods and their application.
Developing a Prevention Synthesis and Translation System to Promote Science-based Approaches to Teen Pregnancy, HIV and Sexually-transmitted Infections Prevention
Kelly Lewis, Georgia State University, klewis28@gsu.edu
Catherine A Lesesne, Centers for Disease Control and Prevention, ckl9@cdc.gov
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
S Christine Zahniser, Global Evaluation and Applied Research Solutions Inc, scz1@cdc.gov
Mary Martha Wilson, Healthy Teen Network, marymartha@healthyteennetwork.org
Gina Desiderio, Healthy Teen Network, gina@healthyteennetwork.org
Diane C Green, Centers for Disease Control and Prevention, dcg1@cdc.gov
Prevention synthesis and translation is seen as vital to bridging science and practice, yet how to develop it and train support system partners to use it is under-researched. By way of a case example by developers and implementers, this presentation highlights the importance of synthesis and describes an effective process for developing a synthesis/translation product called Promoting Science-based Approaches to Teen Pregnancy Prevention Using Getting To Outcomes. We will share our approach and experience in defining evidence-based programs and strategies in the area of teen pregnancy prevention; the implications of divergent definitions of and requirements for "evidence" on the development of clear synthesis/translation products; and the short-term results of our efforts to support the synthesis/translation product. Implications for research and practice will also be discussed.
Service to Science: The Role of Evaluation Capacity Building in Evidence-based Practice
Pamela Imm, University of South Carolina, pamimm@windstream.net
This presentation will focus on how local programs can build their evaluation capacity and gather data to submit to a federal registry for review. Referred to as “Service to Science,” this national initiative is supported by the Substance Abuse and Mental Health Services Administration (SAMHSA) and the Center for Substance Abuse Prevention (CSAP). One goal of the Service to Science initiative is to increase the number of culturally appropriate and innovative substance abuse and other prevention interventions in the evidence base. States nominate local programs to attend a Service to Science Academy and to work with expert evaluators. This presentation is informed by direct experience in working with the Academies and will address how evaluators work with local programs on evaluation capacity building (ECB) and how these ECB efforts are important for meeting the review criteria for SAMHSA’s National Registry of Evidence-Based Programs and Practices (NREPP).

Session Title: Change is a Process, Not An Outcome: Implication for Evolving Federal Evaluation Policy
Think Tank Session 313 to be held in Texas F on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Presenter(s):
Dianna L Newman, State University of New York at Albany, dnewman@uamail.albany.edu
Discussant(s):
Dianna L Newman, State University of New York at Albany, dnewman@uamail.albany.edu
Anna Lobosco, New York State Developmental Disabilities Planning Council, alobosco@ddpc.state.ny.us
Abstract: This think tank will explore the impact of outcomes-based project logic models on programs intended to promote systemic change and will inform federal evaluation policy development and refinement. Typical logic models are clearly focused on service delivery, and federal stewardship has a singular focus on attaining defined outputs and outcomes, which has a deleterious effect on effecting and documenting systemic change. The 3Is Model has been used to evaluate programs with systemic change intents and to re-think and refine their logic models. Since systems change is a complicated and intricate process, adjusting to changes in all aspects of the program logic model (including outputs and outcomes) is a feature of change efforts. Lessons learned from use of the 3Is Model will be used as an impetus for a discussion intended to inform evolving federal evaluation policy so that it supports program improvement and the identification of promising practices.

Session Title: Pandemic Influenza and Evaluation Lessons Learned: The H1N1 Outbreak of 2009 - 2010
Panel Session 314 to be held in CROCKETT A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Elizabeth Harris, EMT Associates Inc, eharris@emt.org
Abstract: CDC-INFO is the Centers for Disease Control and Prevention’s unified, integrated contact center for delivering public health information. It responds to public inquiries by phone, e-mail and by sending CDC publications. CDC-INFO has developed a multi-component performance monitoring, quality improvement, and evaluation system that provides continuous feedback to CDC program managers and policy makers. The H1N1 pandemic created an emergency response need for CDC-INFO and provided a pilot opportunity to test surveillance measurement capabilities intended to be activated in the event of a public health emergency. The panel presents a) the planned approach and purpose, b) the implementation process and a major shift in purpose that emerged from that process, c) the findings and implications, and d) lessons learned.
CDC-INFO's Role in Responding to Public Health Emergencies
Amy Burnett, Centers for Disease Control and Prevention, aburnett@cdc.gov
CDC-INFO is the Centers for Disease Control and Prevention’s unified, integrated contact center for delivering public health information. It responds to public inquiries by phone, e-mail and by sending CDC publications. CDC-INFO has developed a multi-component performance monitoring, quality improvement, and evaluation system that provides continuous feedback to CDC program managers and policy makers. The H1N1 pandemic created an emergency response need for CDC-INFO as several key constituent groups sought assistance and information about the outbreak: 1) CDC-INFO handled inquiries from the public (defined as any individual or group seeking health or public health information from CDC); 2) State Health Departments sought assistance in handling the influx of inquiries (e.g. the State of New York following school closures); and 3) other hotlines (e.g. 211) sought content to provide to members of the public contacting their centers for information about H1N1. The circumstances under which the Emergency Response Survey was developed and launched will be presented in order to provide context for the evaluation considerations and lessons learned (the latter two points to be presented by the remaining panelists).
Evaluating Response to a Public Health Emergency: Reactive or Proactive Approach?
Elizabeth Harris, EMT Associates Inc, eharris@emt.org
The panelist will discuss the rationale and methodology for designing evaluation studies that facilitate learning from the public in order to communicate relevant public health messages that will spur desired behaviors. The role of the Emergency Response Survey in the context of the broader evaluation system will also be addressed. The original intent was to collect public health threat assessment and surveillance data in response to a metro, state, regional, multi-state, or national emergency. The CDC-INFO Emergency Response Surveys were planned to be activated only in the event of a public health emergency. The H1N1 outbreak was the first field test of the measure, and the end result was a paradigm shift. The panelist will address the rationale for that shift in the context of CDC-INFO’s broader comprehensive performance monitoring system.
The Public as Key Informant in an Emergency Response Evaluation
Janelle Commins, EMT Associates Inc, jcommins@emt.org
A major set of lessons concerns how this Emergency Response evaluation was conducted: it highlighted the need to better understand the underlying and specific motivations that led the public to seek information from CDC-INFO rather than from the multiple alternatives, and the utility of emergency survey results for improving responsiveness to this audience. The panelist will present on the process of building evaluation methods that treat the public as an active participant rather than a passive audience in the event of a public health emergency and, in the process, complement responsive, reactive methodologies with proactive mechanisms. The challenges, resolution, and contribution of this collaborative evaluation experience will also be discussed.

Session Title: Strategies for Preparing Quality Evaluation Practitioners
Multipaper Session 315 to be held in CROCKETT B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Teaching Program Evaluation with Quality in Mind: Challenges Faced and Lessons Learned in Preparing Next Generation Evaluators
Presenter(s):
Sheila Kohn, University of Rochester, sbkohn@rochester.rr.com
Kankana Mukhopadhyay, University of Rochester, kankana.m@gmail.com
Abstract: Recent literature on program evaluation highlights the lack of information on teaching it in university settings to prepare the next generation of evaluators. The purpose of this paper is to address this problem by presenting an approach to teaching the fundamentals of program evaluation research within a university’s certificate program. Our paper systematically documents the pedagogical practices of teaching evaluation to diverse groups of doctoral students, the majority of whom are full-time professionals in the fields of education, health care, counseling, etc. Through our discussion of teaching this introductory course over the last two years, we highlight the challenges faced and lessons learned from our experience as instructors. We also argue that the best way to ensure quality in evaluation is to teach high standards of practice to those who wish to work in and contribute to this field.
Teaching About Evaluation Trends and Orientations
Presenter(s):
Michael J Smith, Hunter College, profmsmith@aol.com
Abstract: One of the most difficult choices facing someone who teaches program evaluation is which approach, philosophy, or orientation seems best fitted for use in conducting an evaluation study. Approaches and trends such as consumer empowerment, empowerment evaluation, evidence-based evaluation, democratic evaluation, and strengths-based evaluation need to be taught. Although these orientations and their philosophical underpinnings are not essential for conducting an evaluation, professors should provide a historical perspective on each trend and their own view of which orientation seems most promising given the professor’s own approach to conducting a study. A brief overview of these approaches and the strengths and weaknesses of each will be presented, and the presenter will engage participants in a discussion of how they teach these orientations to evaluation. The author’s view is that a strengths-based approach, which is very appealing to the field of social work and other professions, may be the best approach for developing a relationship with stakeholders and encouraging organizations to continue to engage in evaluation and program development.
Using Evaluation Activities to Teach Our Students About Evaluator Roles
Presenter(s):
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Jennifer Morrow, University of Tennessee, Knoxville, jamorrow@utk.edu
Erin Burr, Oak Ridge Institute for Science and Education, erin.burr@orau.org
Abstract: This paper offers a perspective of evaluator roles based upon evaluation activities and the demands they place on an evaluator. Currently, evaluator roles are conceived in terms of an evaluator’s decisions regarding a particular evaluation model or methodology. Moreover, current evaluation literature suggests that evaluators play only one macro level role throughout the duration of an evaluation. The model we will present suggests that evaluators assume multiple roles throughout an evaluation, based on responses to common evaluation activities. This model offers a realistic understanding of the many evaluator roles established by a typical evaluation. We will present the phases and activities of a typical evaluation, describe how each of these activities creates demands on the evaluator, and review how the evaluator adopts a set of role responses to each set of demands. We will address how this conceptualization of evaluator roles is applicable to training of novice evaluators.
A Capacity Building Grant in Interdisciplinary Evaluation: A Graduate Program With Focus on Assessment for Learning Research
Presenter(s):
Steven Ziebarth, Western Michigan University, steven.ziebarth@wmich.edu
Abstract: The NSF-funded Assessment for Learning Grant at Western Michigan University supports graduate students in the Interdisciplinary Ph.D. Program in Evaluation, Science and Mathematics. This session will report on some of the research experiences that these students have engaged in to move the field of "assessment for learning (AfL)" research forward. Research has focused on diverse AfL topics, ranging from improving feedback in university science courses using "lecture tools" technology to developing tools to study AfL in classrooms and analyze curriculum materials.

Session Title: Evaluating Support to Poverty and Gender in Cross Country Aid Programs
Panel Session 316 to be held in CROCKETT C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Cheryl Gray, World Bank, cgray@worldbank.org
Abstract: This panel highlights how a set of three recent IEG multi-country evaluations tackles assessing the World Bank’s support to poverty reduction and gender. The findings are drawn from IEG’s evaluations of World Bank support to Poverty and Social Impact Analysis; Gender; and the use of Poverty Reduction Support Credits as instruments for poverty reduction and social outcomes. The panel focuses on standard evaluation criteria and some insightful tools and approaches used in these challenging multi-country program evaluations.
Evaluating the Effects of Policy Reforms on the Poor Through Analytic Work: Effectiveness of World Bank Support to Poverty and Social Impact Analyses
Soniya Carvalho, World Bank, scarvalho@worldbank.org
The World Bank introduced the Poverty and Social Impact Analysis (PSIA) approach to help governments and the Bank address the consequences of reform on the poor and to contribute to country capacity for policy analysis. Accordingly, the IEG evaluation of Bank support to PSIAs assessed the effects that this analytical work has on country policies. A particular issue in evaluating analytical work is that the client’s decisions may accord with the recommendations of the analytical work, yet result from other sources. Hence, it is difficult to assess the contribution of analytical work compared, for example, with the effects of investments. Based on experiences from the evaluation, this presentation focuses on the multiple methods used to assess the effects of analytical work. The criteria used, country case study questionnaires, interview protocols, and approaches for thematic reviews are shared. Ms. Carvalho was the team leader for the evaluation.
Assessing Gender Dimensions of Aid Programs at the World Bank
Gita Gopal, World Bank, ggopal@worldbank.org
Ensuring that both men and women benefit equitably from programs is critical for achieving gender equality and enhancing development effectiveness. It is important to understand whether interventions achieve their objectives in a gender-aware manner. The assessment of gender dimensions adds an extra layer to evaluation processes and resources. The lack of gender-related data, attribution challenges, and the aggregation of project results to the sector/country level all increase complexity. Different social contexts add further to the difficulties of defining evaluation frameworks. Yet gender-aware evaluations cannot be avoided, because they increase accountability for gender equality and provide lessons for enhancing development effectiveness. To ensure that evaluations effectively capture the gender dimension of results, the presentation will provide guidance on developing evaluation designs and methods and present good practice principles, based on lessons from a recently completed evaluation of World Bank support for gender. Ms. Gopal was the team leader for the evaluation.
Assessing the Poverty and Human Development Outcomes of Aid Through World Bank Supported Lending Programs
Anjali Kumar, World Bank, akumar@worldbank.org
This paper illustrates methods for evaluating the relevance and effectiveness of programs to support poverty reduction by directly addressing social outcomes in pro-poor sectors such as health, education, and water supply, through one of the Bank’s key tools for supporting poor countries: the Poverty Reduction Support Credit (PRSC). The evaluation illustrates the difficulties of setting up a counterfactual in the context of country programs, and also illustrates evaluation issues that arise when program benefits parallel similar interventions or flows from other sources. In the absence of tools for rigorous impact evaluation, it shows how methods such as difference-in-differences can be used to trace outcomes in growth and poverty at an aggregate level. These tools are combined with qualitative methods for tracing and comparing program outcomes across countries in specific sectors, especially health and education. Ms. Kumar was the team leader for the PRSC evaluation.
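To make the aggregate-level approach mentioned above concrete, here is a minimal sketch of a difference-in-differences comparison on a hypothetical panel of country-level poverty rates observed before and after PRSC support begins. The countries, column names, and figures are illustrative assumptions, not data from the IEG evaluation.

```python
# A minimal difference-in-differences (DiD) sketch on hypothetical data.
import pandas as pd

panel = pd.DataFrame({
    "country": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "period":  ["pre", "post"] * 4,
    "prsc":    [1, 1, 1, 1, 0, 0, 0, 0],        # 1 = received PRSC support
    "poverty": [52.0, 45.5, 61.0, 55.0, 48.0, 46.5, 57.0, 55.0],
})

# Mean outcome in each treatment-by-period cell
cell_means = panel.groupby(["prsc", "period"])["poverty"].mean()

# DiD estimate: (post - pre) change for PRSC countries minus the same change
# for comparison countries; interpretable only under a parallel-trends assumption.
did = (cell_means[1, "post"] - cell_means[1, "pre"]) - (
       cell_means[0, "post"] - cell_means[0, "pre"])
print(f"DiD estimate (percentage points): {did:.2f}")
```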

Session Title: Towards an Understanding of the Role of the Local Evaluator in Federally Funded Demonstration Projects: The Perspectives of Federal Policymakers, Community-based Organizations, and Evaluators
Multipaper Session 317 to be held in CROCKETT D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Soundaram Ramaswami, Kean University, sramaswa@kean.edu
The Demonstration Project and Expectations for Evaluation: The Policymaker’s Perspective
Presenter(s):
Alicia Richmond-Scott, United States Department of Health and Human Services, alicia.richmond@hhs.gov
Abstract: The Federal government’s efforts to improve performance assessment and public accountability have led to an increased emphasis on the collection of high-quality data and rigorous evaluation. The Adolescent Family Life (AFL) program has supported individual local evaluations for each funded demonstration project. Legislation requires each project to conduct a program evaluation with the goal of developing evidence-based programming and incorporating best practices in adolescent pregnancy prevention efforts, thus furthering the important goal of accountability. The AFL program has taken bold steps to create more systematic changes to maximize evaluation at the local level. This paper will explore the goals and objectives of this increased focus on evaluation from the perspective of those responsible for monitoring adherence to these policies. Further, we will discuss the use of core questions and standard instruments; demands for rigor; improved performance measures; cross-site evaluations; and the importance of local evaluation data to the policymaking process.

Session Title: Gender and Human Rights Evaluation
Panel Session 318 to be held in SEGUIN B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Discussant(s):
Donna Podems, OtherWISE, donna@otherwise.co.za
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Abstract: Since the development of the United Nations Evaluation Group (UNEG) Norms and Standards in 2005, UN entities have made efforts to achieve progress on integrating gender equality and human rights in evaluations, given that these two principles are at the heart of the UN’s work and should guide all its operational activities. Evaluation frameworks for gender-focused programs have not always been in sync with principles associated with the furtherance of a human rights mission. This panel will examine UNIFEM’s mandate for evaluation of gender-focused programs, a transformative framework that shifts the focus directly onto human rights, and strategies associated with grassroots movements in such contexts.
United Nations Development Fund for Women (UNIFEM) and United Nations Evaluation Group (UNEG): Evaluation in a Human Rights Framework
Belen Sanz, United Nations Development Fund for Women, belen.sanz@unifem.org
Inga Sniukaite, United Nations Development Fund for Women, inga.sniukaite@unifem.org
The presentation intends to contribute to the panel discussion by highlighting the key principles that emerge from human rights and gender equality frameworks in evaluation at the UN, and then presenting the evaluation approaches consistent with these principles drawn from the existing evaluation literature and debate, such as feminist research and transformative paradigms. It will then focus on sharing the work done by UNEG to elaborate guidance for the UN system on integrating human rights and gender equality in evaluation, and will share lessons learned and challenges for UNIFEM and other UN agencies in developing and applying this guidance and these approaches.
Transformative Lens Applied to Gender Focused Evaluations
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Mertens will provide an overarching philosophical framework that is specifically focused on furthering human rights and social justice as a way of envisioning bridges between the demands for evaluation from multi-lateral agencies and the desire to further human rights. She will examine points of overlap and points of tension between the transformative paradigm and the UN's evaluation framework as it applies to gender focused evaluations.
Human Rights Enhancement From a Grassroots Community Change Agent
Denice Cassaro, Cornell University, dac11@cornell.edu
Based on her many years as a community change agent who focuses on gender-related issues, Cassaro will present strategies that offer promise in terms of linking evaluation efforts with social change to enhance human rights.

Session Title: Reflections From Applying a Complexity Lens to Monitoring and Evaluation
Panel Session 319 to be held in REPUBLIC A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Tricia Wind, International Development Research Centre, twind@idrc.ca
Abstract: This panel will share reflections and questions emerging from a collaborative study among four action research projects that are experimenting with applying systems and complexity thinking to their monitoring and evaluation systems. The panel will present both broad themes and some practical experience of the projects in modifying their M&E strategies. The panel will include an overview of the study from the International Development Research Centre (IDRC). It will highlight reflections from one of the participating projects, whose research seeks to understand the working, environmental, and health conditions of informal sector solid waste workers and their families in Peru. The panel will conclude with reflections from the evaluation consultant who has been working with the Peruvian project to identify the project’s outcomes and evaluate the significance of those outcomes.
The Challenges of Monitoring and Evaluating Development Research in Complex Systems
Tricia Wind, International Development Research Centre, twind@idrc.ca
This presentation will introduce the collaborative study, and some of the broad themes and questions arising across the four participating action research projects. It will review different tools the projects used to enrich their M&E through the study, and some reflections on the use of those tools. This presentation will also connect this study to other ways in which IDRC’s Evaluation Unit has applied complexity thinking to its evaluation work.
The Specific Systemic and Complexity Challenges for the Consorcio por la Salud, Ambiente y Desarrollo (ECOSAD) Action Research Project
José Valle, Consorcio por la Salud, Ambiente y Desarrollo (ECOSAD), jvalle2@yahoo.com
Ruth Arroyo, Consorcio por la Salud, Ambiente y Desarrollo (ECOSAD), arroyo.ruthy@gmail.com
Anita Lujan, Consorcio por la Salud, Ambiente y Desarrollo (ECOSAD), lujan.anita@gmail.com
Walter Varillas, Consorcio por la Salud, Ambiente y Desarrollo (ECOSAD), wvarillas@gmail.com
Magaly Oviedo, Consorcio por la Salud, Ambiente y Desarrollo (ECOSAD), 
Karim Castro, Consorcio por la Salud, Ambiente y Desarrollo (ECOSAD), 
The aim of the study is to identify, monitor, and document outcomes of processes of change in the behavior of recycling workers in a participatory action-research project, implemented with an Ecohealth approach, located on the left bank of the Rímac River. We developed a results-oriented monitoring system to analyze the complexity of the recycling process and its impact on the health of workers and their families, gathering information and developing collaborative learning between researchers and workers. We were able to recognize risk factors and components of community participation. We improved the bargaining skills of organizations and the openness to dialogue between workers and authorities. In conclusion, M&E activities undertaken from a systemic and complexity perspective on outcomes can guide research, identify flaws in responses to challenges, and open new fields of observation of social-ecological change.
The Strengths and Weaknesses of the Monitoring System Adapted by ECOSAD to Meet the Special Challenges it Faces
Ricardo Wilson-Grau, Ricardo Wilson-Grau Consulting, ricardo.wilson-grau@inter.nl.net
We will discuss the strengths and weaknesses of a monitoring and evaluation (M&E) system that purposely does not attempt to track all activities and outputs of the research project. Instead, it identifies and documents the outcomes, understood as the principal changes in the behaviour, relationships, actions, policies or practices of the three principal social actors – the garbage recycling families, the policy makers and the ECOSAD research team – that emerge in the process of the action-research. In addition, the M&E system identifies the significance of those changes for the health of the garbage recyclers and for the environment and urban development, and establishes how the ECOSAD project contributed to them. The outcomes are interpreted from three angles: their interrelationships, the varying perspectives on those relationships, and the boundaries of it all. Then, decisions are made for improving or modifying the research design.

Session Title: Improving School-Based Health Through Campus Centers, Nursing, and Effective Interventions
Multipaper Session 320 to be held in REPUBLIC B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Kim van der Woerd, Reciprocal Consulting, kvanderwoerd@gmail.com
Factors influencing grantee performance on youth outcomes targeted by the Safe Schools/Healthy Students (SS/HS) Initiative: Results from an exploratory meta-regression
Presenter(s):
Jim Derzon, Battelle Memorial Institute, derzonj@battelle.org
Bruce Ellis, Battelle Memorial Institute, ellis@battelle.org
Sharon Xiong, Battelle Memorial Institute, xiongx@battelle.org
Danyelle Mannix, United States Department of Health and Human Services, danyelle.mannix@samhsa.hhs.gov
Julia Rollison, MANILA Consulting Group Inc, jrollison@manilaconsulting.net
Abstract: To estimate performance, and to identify grantee characteristics associated with performance in reducing violence and substance use, promoting mental health, and enhancing school safety, logged odds ratios (LORs) were calculated contrasting Year 3 with Year 1 performance from grantee-provided data on 12 outcome measures. The LORs were entered as dependent variables in a series of meta-regressions in which grantee characteristics and choices were tested after controlling for pre-grant characteristics. Findings indicate that SS/HS significantly improved the six youth violence and mental health outcomes, that grantee performance varied by outcome, and that none of the variables entered consistently predicted grantee performance. Across outcomes, the 12 models explain 27.3% of the variation in outcomes, with 48.6% of the explained variance attributable to grantee-controlled choices. The approach demonstrates that locally collected performance data can be used to estimate and explain grantee success in improving youth outcomes.
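As an illustration of the analytic steps described in this abstract, the sketch below computes a logged odds ratio contrasting Year 3 with Year 1 event counts for each grantee and fits an inverse-variance-weighted meta-regression of the LORs on one grantee-level predictor. The counts, the predictor, and the large-sample variance formula are standard textbook choices and illustrative assumptions, not the SS/HS data or the evaluators' exact models.

```python
# Illustrative logged odds ratios (LORs) and a weighted meta-regression.
import numpy as np
import statsmodels.api as sm

def logged_odds_ratio(e1, n1, e3, n3):
    """LOR comparing Year 3 vs Year 1 counts, with its approximate variance."""
    a, b = e3, n3 - e3          # Year 3 events / non-events
    c, d = e1, n1 - e1          # Year 1 events / non-events
    lor = np.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d   # standard large-sample variance
    return lor, var

# Hypothetical grantees: (Year 1 events, Year 1 N, Year 3 events, Year 3 N)
grantees = [(120, 800, 90, 820), (60, 500, 55, 490), (200, 1000, 150, 980)]
lors, variances = zip(*(logged_odds_ratio(*g) for g in grantees))

# Meta-regression: weight each grantee's LOR by inverse variance and regress
# on a hypothetical grantee characteristic (e.g., years of prior funding).
predictor = np.array([2.0, 5.0, 3.0])
X = sm.add_constant(predictor)
model = sm.WLS(np.array(lors), X, weights=1 / np.array(variances)).fit()
print(model.params)   # intercept and slope of the meta-regression
```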
Evaluation of a Quitline-based Free Nicotine Replacement Therapies (NRT) Program for College Students: Is Campus Media Enough to Increase Quitline Utilization?
Presenter(s):
Joseph Lee, University of North Carolina at Chapel-Hill, jose.lee@unc.edu
Kathryn Kramer, University of North Carolina, kdkramer@unc.edu
Anna McCullough, University of North Carolina, annamc@unc.edu
Leah Ranney, University of North Carolina, leah_ranney@unc.edu
Adam Goldstein, University of North Carolina, aog@med.unc.edu
Barbara Moeykens, NC Health and Wellness Trust Fund, barbara.moeykens@healthwellnc.com
Nidu Menon, North Carolina Health and Wellness Trust Fund, nidu.menon@healthwellnc.com
Tom Brown, NC Health and Wellness Trust Fund, tom.brown@healthwellnc.com
Caroline Mage, University of North Carolina, caroline_mage@med.unc.edu
Mark Ezzell, North Carolina Health and Wellness Trust Fund, mark_ezell@earthlink.net
Abstract: Providing free nicotine replacement therapy (NRT) through quitlines increases participation and overall quit rates. Providing free NRT in combination with social and/or earned media has been effective in some states and reduces marketing costs. North Carolina piloted a program with no media budget to provide free NRT through QuitlineNC to young adults enrolled in college. College campuses (n=5) serving 55,000 students promoted the benefit through a variety of channels, including e-mail, social media, signage, and word of mouth through staff and faculty. Using a collaboratively designed program logic model, an independent evaluation team assessed promotional activities conducted on campuses using an online reporting system and QuitlineNC call volume. Three months of data yielded no measurable increase in call volume during the intervention. Immediate program evaluation feedback to funders prompted subsequent changes that opened the intervention to a wider audience. We discuss findings relevant to NRT-based quitline promotions.
Evaluation of School Nursing in Underserved Schools: Truth, Beauty, and Justice in the Evaluation of the San Jose Unified School District (SJUSD) Nurse Demonstration Project
Presenter(s):
Eunice Rodriguez, Stanford University, er23@stanford.edu
Diana Austria, Stanford University, daustria@stanford.edu
Melinda Landau, San Jose Unified School District, melinda_landau@sjusd.org
Sue Lapp, School Health Clinics of Santa Clara County, suel@schoolhealthclinics.org
Candace Roney, Lucile Packard Children's Hospital, mcroney@lpch.org
JoAnna Caywood, Lucile Packard Foundation for Children's Health, joanna.caywood@lpfch.org
Abstract: With increasing budget cuts, inequities in available nursing time and health services in public schools continue to deepen, as confirmed by Taliaferro’s October 2008 report, "The Impact of the School Nurse Shortage". A significant factor in this trend is the lack of quality research documenting the impact of school nurses and nurse-to-student ratios on student health outcomes. This paper presents a model for assessing the impact of school nurses through an evaluation of the Nurse Demonstration Project, a five-year endeavor to provide full-time credentialed nurses at four high-need schools in San Jose Unified School District, and a nurse practitioner at School Health Clinics of Santa Clara County. The project evaluation utilizes a mixed-method, case-control design to measure the impact of increased nursing time on improving access to primary and preventative care and chronic disease management, and on facilitating the establishment of a medical home for students who do not have one.
The Impact of School Based Health Centers (SBHCs) on Access to and Use of Health Services in New Orleans
Presenter(s):
Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
Marsha Broussard, Louisiana Public Health Institute, mbroussard@lphi.org
Paul Hutchinson, Tulane University, phutchin@tulane.edu
Nathalie Ferrell, Tulane University, natferrell@gmail.com
Sarah Kohler, Louisiana Public Health Institute, skohler@lphi.org
Abstract: In spring 2009, 2,011 students were surveyed in six public high schools in Orleans parish to evaluate the effectiveness of School Based Health Centers (SBHCs) in increasing access to and utilization of essential health services, promoting healthy lifestyles, and facilitating good decision-making skills in a complex urban environment. A quasi-experimental research design was utilized, involving three intervention schools with SBHCs and three comparison schools slated to eventually contain SBHCs. In this paper, we utilize propensity score matching to identify the impacts of SBHCs on indicators of adolescent utilization of health services and risky behaviors. Results indicate that adolescents with access to SBHCs not only receive quality health services, particularly vital mental health services, but they are also less likely to engage in behaviors that put their health at risk, including drug use, risky sexual activity, violence, smoking, unhealthy eating habits and lack of exercise.
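For readers unfamiliar with the matching technique named in this abstract, here is a minimal sketch of one-to-one nearest-neighbor propensity score matching on simulated student records. The variables, the logistic propensity model, and matching with replacement are illustrative assumptions, not the New Orleans survey data or the authors' specification.

```python
# Illustrative propensity score matching on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
covariates = rng.normal(size=(n, 3))     # e.g., age, grade, risk index
treated = rng.integers(0, 2, size=n)     # 1 = attends a school with an SBHC
outcome = rng.integers(0, 2, size=n)     # 1 = used any health service

# Step 1: estimate propensity scores with a logistic regression of treatment
# status on the covariates.
pscore = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# Step 2: match each treated student to the control student with the closest
# propensity score (with replacement, for simplicity).
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
matches = c_idx[np.argmin(np.abs(pscore[t_idx][:, None] - pscore[c_idx][None, :]), axis=1)]

# Step 3: the average treatment effect on the treated (ATT) is the mean
# outcome difference across matched pairs.
att = (outcome[t_idx] - outcome[matches]).mean()
print(f"Matched ATT estimate: {att:.3f}")
```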

Session Title: Five Partners, One Evaluation: A Cohesive Evaluation of the Action Communities for Health, Innovation, Environment Change (ACHIEVE) Healthy Communities Initiative
Panel Session 321 to be held in REPUBLIC C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Andrea Lee, YMCA of the USA, andrea.lee@ymca.net
Abstract: Five national organizations are partnering on the ACHIEVE (Action Communities for Health, Innovation, EnVironment changE) initiative, supported by CDC’s Healthy Communities Program. Since 2008, ninety-three communities nationwide received funding to convene local leaders to promote health and well-being through policy, system, and environmental change. ACHIEVE partners include the National Association of County and City Health Officials (NACCHO), National Association of Chronic Disease Directors (NACDD), National Recreation and Parks Association (NRPA), and YMCA of the USA (Y-USA), with the Society for Public Health Education (SOPHE) contributing technical assistance and lesson dissemination. While ACHIEVE is predicated on collaboration at the local level, collaboration is also necessary among the national partners to effectively and comprehensively conduct an evaluation. Evaluators from each organization work closely to balance community and organizational needs while ensuring a cohesive evaluation of ACHIEVE. Representatives of each partner organization will present their perspectives of the evaluation plan and lessons learned.
Young Men's Christian Association (YMCA) of the United States of America (USA): Bringing Community Connections to the ACHIEVE Partnership
Andrea Lee, YMCA of the USA, andrea.lee@ymca.net
YMCA of the USA (Y-USA)'s stated motto is "We build strong kids, strong families, strong communities." With a history rooted in responding to social need, Y-USA is addressing the obesity epidemic by bringing its community development knowledge and its participation in STEPS to a HealthierUS to the ACHIEVE initiative.
National Recreation and Parks Association (NRPA): Solving Chronic Disease Through Play and Space
Melanie Chansky, National Recreation and Parks Association, mchansky@nrpa.org
The National Recreation and Parks Association (NRPA) has long advocated for places where people can play to enhance their quality of life. Through ACHIEVE, NRPA is able to advance this mission further by improving the policies and environments that promote healthy eating and physical activity. NRPA brings its experience in promoting such environments to the ACHIEVE initiative.
National Association of County and City Health Officials: Bringing City and County Health Department Expertise to ACHIEVE
Sandra Silva, Altarum Institute, sandra.silva@altarum.org
As public health expands its understanding of health influences to address obesity and its risk factors, NACCHO is similarly bringing its local health departments to the table of community change. By building partnerships between health departments and community leaders, NACCHO is helping communities across the country work together toward healthier living.
National Association of Chronic Disease Directors: New Approaches and Partnerships to Fight Chronic Disease
Ann Ussery-Hall, National Association of Councils on Developmental Disabilities, annusseryhall@gmail.com
The complex nature of obesity and its risk factors necessitates that public health organizations think outside their clinic walls to work toward community-wide solutions. As a public health association for chronic disease program directors of each state and U.S. territory, NACDD is expanding its reach through ACHIEVE and brings health expertise to the partnership.
