Session Title: A Conversation With Ernest House
Expert Lecture Session 822 to be held in Lone Star A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Presidential Strand
Chair(s):
Leslie Cooksy, University of Delaware, ljcooksy@udel.edu
Presenter(s):
Ernest House, University of Colorado, ernie.house@colorado.edu
Abstract: This year's conference theme, Evaluation Quality, is based in part on Ernie House's conception of validity as comprising truth, beauty, and justice. House will start this session by describing the origins of the truth, beauty, and justice concepts in his work and considering how they might apply now. The session will then be opened up for questions and comments from the audience. While the conference calls attention to his tripartite view of validity, it is just one of House's many contributions. With colleague Kenneth Howe, he has examined the conceptions of the fact-value relationship in different evaluation paradigms. Drawing on that and other work, he has explicated the role of evaluation in a democratic society and advanced an evaluation approach that promotes an egalitarian view of justice through inclusion, dialogue, and deliberation. Please join us for a conversation about the theoretical and practical implications of this important work!

Session Title: Culturally Responsive Theory-driven Evaluation: Understanding and Accurately Reflecting Cultural Contexts in Program Evaluation
Panel Session 823 to be held in Lone Star B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
Discussant(s):
Rodney Hopson, Duquesne University, hopson@duq.edu
Pauline Brooks, Independent Consultant, pbrooks_3@hotmail.com
Abstract: The goal of this presentation is to discuss the conduct of culturally responsive theory-driven evaluation. Specifically, we seek to articulate how a culturally responsive theory-driven evaluation would, in House's theoretical framework, address aspects of truth (providing credible evidence that resonates with the community and its varied stakeholders), beauty (telling the story that is most important to community needs), and justice (including the voices of those consumers who might otherwise not be included in the evaluation). Bledsoe and Donaldson carefully illustrate their points through real-time examples of their work at the community and national levels. Discussants and reactors Hopson and Brooks provide commentary and critique of the theory-driven perspective and its usefulness and responsiveness in cultural contexts.
Toward a Culturally Responsive Theory-driven Evaluation Science
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
One of the major advances in the discipline and profession of evaluation in the past decade has been the enlightened understanding of the role of culture in evaluation. The concepts of culture, cultural competence, and cultural responsiveness have been established in the evaluation literature, AEA Task Force statements, and the forthcoming evaluation standards. Theory-driven evaluation science is an evolving and adaptive approach used to guide modern evaluation practice. In this presentation, I will explore how theory-driven evaluation science is incorporating the advances of culturally responsive evaluation to increase the accuracy and use of theory-driven evaluations. For example, I will discuss how the key topics of engaging stakeholders, needs assessment, expressing and assessing program theory, formulating and prioritizing evaluation questions, formative evaluation and continuous improvement feedback, and determining program impact are being improved in light of the advances in understanding roles for culture in evaluation practice.
Cultural Responsiveness in Theory-driven Evaluation: Increasing Accuracy in Theories of Change, Questions, and Methods in Community-based Settings
Katrina Bledsoe, Walter R McDonald and Associates Inc, katrina.bledsoe@gmail.com
How does an evaluator seek to ensure cultural responsiveness in a theory-driven evaluation? In part, by using theories that are considerate and representative of the consumers and the cultural context; by making sure that conceptual models go beyond linear logic modeling; by seeking to articulate the underlying mechanisms that may occur within a program; and by using methods that are most appropriate for the story that stakeholders are most wedded to telling. This includes exploring unintended outcomes and side effects that might be mediators and/or moderators of how a program works (e.g., how participants will react in a given situation based on the historical aspects of the community, the context, and the program), and articulating aspects such as institutionalized injustices. In this presentation, I will provide a framework from which to work and use real-time and real-world examples.

Session Title: High Flexibility and Low Fidelity: The Challenges of Evaluating Highly Adaptable Programs
Panel Session 824 to be held in Lone Star C on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the
Chair(s):
Ann House, SRI International, ann.house@sri.com
Discussant(s):
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
Abstract: In an ideal world, many evaluation approaches prioritize clearly defining the purpose of the program being evaluated, understanding how the program is implemented, identifying targeted outcomes and following up with program participants. Yet, there are times when evaluators encounter programs that are, by design, flexible and open to adaptation; where strong program fidelity is not emphasized or seen as critical; and when participants are not readily identified. This panel will describe two projects that are distributed in their implementation and pose important evaluation challenges. Adobe Youth Voices is a global youth media program that relies on partners and distributed materials in implementation. Intel’s Elements courses are a series of teacher professional development opportunities which are freely available and intended to be used in ways that meet local training needs. The panelists will discuss issues of evaluator role, evaluation design, data collection, and analysis with regard to these adaptable programs.
Focusing on Quality: Designing Research for an Adaptable Program
Ann House, SRI International, ann.house@sri.com
In 2009, Intel launched the Elements courses, a new series in the Teach professional development portfolio. The courses are free to the public, can be implemented through self-study or facilitated offerings, and can be adapted in schedule, platform, and design. Rather than trying to describe all the different variations in course offerings and all possible course outcomes, this research was designed to document the strategic value of the course to sponsoring stakeholders, establish potential impacts and outcomes for course participants under positive implementation conditions, and identify best practices and optimal conditions for implementation. The strategy, then, was not to describe all course uses and contexts, since the course's open nature makes these untraceable. Instead, the research worked to establish potential positive impacts and outcomes while providing implementation models that helped bring those outcomes about.
Finding the Core: Evaluating a Distributed Program
Sophia Mansori, Education Development Center, smansori@edc.org
Adobe Youth Voices (AYV) is a global youth media initiative that empowers youth in underserved communities with real-world experiences and 21st century tools to communicate their ideas, exhibit their potential, and take action in their communities. AYV includes professional development for educators, access to resources, and communities of practice, delivered through varying channels to support educators in a wide range of settings. For the past 4 years, the program evaluation has worked to understand both the implementation and outcomes of this program as a whole as well as through its variations, which include: formal and informal education settings; face-to-face and online training; projects of different durations and intensity; and youth of different ages, cultures and geographies. Sophia Mansori will share her experience working to design and execute a cohesive evaluation for a program with so many changing and moving components.

Session Title: Evaluations Done Right: Paving the Way for Closing a Program
Multipaper Session 825 to be held in Lone Star D on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Evaluations Done Right: Paving the Way for Closing a Program
Presenter(s):
Lorna Escoffery, Escoffery Consulting Collaborative Inc, lorna@escofferyconsulting.com
Abstract: Closing down a program is not an easy decision for foundation staff and board members. However, when board members are committed to quality and are accountable to their community, they must engage in an honest discussion of the effectiveness of programs. This proposal presents the process by which an evaluation team collaborated with the Alpha-1 Foundation, as well as partner institutions, to evaluate four programs and offer recommendations for funding and policy decisions. The stakeholders deemed the process to be successful, and it was used as the basis for board decisions such as closing the DNA and Tissue Bank, continuing funding to the other programs, and increasing the visibility of the foundation to promote research. Key elements for this success were the stakeholders' commitment to quality and accountability; the positive dialogue established by the evaluation team; building staff capacity through training and mentoring; and fostering board involvement through systematic feedback.

Session Title: A Closer Look at Non-Equivalent Designs in Evaluation
Multipaper Session 826 to be held in Lone Star E on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu
A New Strategy for Eliminating Selection Bias in Non-experimental Evaluations
Presenter(s):
Laura Peck, Arizona State University, laura.peck@asu.edu
Furio Camillo, University of Bologna, furio.camillo@unibo.it
Ida D’Attoma, University of Bologna, ida.dattoma2@unibo.it
Abstract: This paper presents a creative and practical approach to dealing with the problem of selection bias. Taking an algorithmic approach and capitalizing on the known treatment-associated variance in the X matrix, we propose a data transformation that allows estimating unbiased treatment effects. The approach does not call for modeling the data based on underlying theories or assumptions about the selection process; instead, it uses the existing variability within the data and lets the data speak. We illustrate with an application of the method to Italian Job Centers.
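The authors' specific variance-decomposition transformation is not reproduced here. As a rough point of comparison only, the sketch below shows a more familiar way of adjusting for observed selection, inverse probability weighting on an estimated propensity score; all data and variable names are invented for illustration and are not from the paper.

```python
# Minimal sketch (NOT the authors' method): inverse probability weighting
# on an estimated propensity score to adjust for selection on observables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: X = covariates, treat = program participation, y = outcome.
n = 2000
X = rng.normal(size=(n, 3))
treat = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=n) > 0).astype(int)
y = 2.0 * treat + X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=n)

# 1. Estimate the propensity score P(treat = 1 | X).
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2. Weight each unit by the inverse probability of its observed assignment.
w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))

# 3. The weighted difference in means estimates the average treatment effect.
ate = (np.average(y[treat == 1], weights=w[treat == 1])
       - np.average(y[treat == 0], weights=w[treat == 0]))
print(f"IPW-adjusted effect estimate: {ate:.2f}")  # roughly the simulated effect of 2.0
```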
The Truncation-by-Death Problem: What to do in an Experimental Evaluation When the Outcome is Not Always Defined
Presenter(s):
Sheena McConnell, Mathematica Policy Research, smcconnell@mathematica-mpr.com
Elizabeth Stuart, Johns Hopkins University, estuart@jhsph.edu
Barbara Devaney, Mathematica Policy Research, bdevaney@mathematica-mpr.com
Abstract: While experiments are viewed as the gold standard for evaluation, some of their benefits may be lost when, as is common, outcomes are not defined for some sample members. In evaluations of marriage interventions, for example, a key outcome—relationship quality—is undefined when a couple splits up. This paper shows how treatment-control differences in mean outcomes can be misleading when outcomes are not defined for everyone and discusses ways to identify the seriousness of the problem. Potential solutions to the problem are described, including approaches that rely on simple treatment-control differences-in-means as well as more complex modeling approaches.
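To see concretely why the naive comparison can mislead, the toy simulation below (entirely invented numbers, not the paper's data) builds a case where the intervention has no effect on relationship quality yet the treatment-control difference among couples with defined outcomes is negative, simply because the intervention keeps more low-quality couples together.

```python
# Toy simulation of the truncation-by-death problem (invented numbers, not the paper's data).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Latent relationship quality is identical in treatment and control groups.
quality = rng.normal(50, 10, size=n)
treat = rng.integers(0, 2, size=n)

# Couples with low latent quality tend to split up; the intervention keeps some
# marginal couples together but has NO effect on quality itself.
split_threshold = np.where(treat == 1, 40, 45)
intact = quality > split_threshold

# The outcome (observed quality) is only defined for intact couples.
naive_treat = quality[(treat == 1) & intact].mean()
naive_ctrl = quality[(treat == 0) & intact].mean()

print(f"Naive treatment-control difference among intact couples: {naive_treat - naive_ctrl:.2f}")
# Negative, even though the true effect on quality is exactly zero: the treatment
# group retains more low-quality couples in the defined sample.
```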

Session Title: Linking With Outcomes
Multipaper Session 827 to be held in Lone Star F on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Yu Ping, Battelle Memorial Institute, yup@battelle.org
Using Large Scale Data Management Systems in the Evaluation of Multi-site Family Support Programs
Presenter(s):
Pam Van Dyk, Evaluation Resources LLC, evaluationresources@msn.com
Bertha Gorham, Gorham Consulting, berthagorham@nc.rr.com
Linda Blanton, Cumberland County Partnership for Children, lblanton@ccpfc.org
Pat Hansen, The North Carolina Partnership for Children, phansen@smartstart.org
Abstract: This paper session explores the implications and challenges of implementing web-based, large-scale data collection and management systems across multiple sites. Two comparative case studies offer insights from multiple perspectives (system developers, program funders, system users). Central to this discussion is how these systems ultimately contribute to effective multi-site evaluations and subsequently impact program quality.
Assessing the Strength of Community Health Programming: A New Tool for Evaluators
Presenter(s):
Amy A Sorg, Washington University in St Louis, asorg@wustl.edu
Sarah C Shelton, Washington University in St Louis, sshelton@wustl.edu
Stephanie Herbers, Washington University in St Louis, sherbers@wustl.edu
Douglas Luke, Washington University in St Louis, dluke@wustl.edu
Bobbi Carothers, Washington University in St Louis, bcarothers@wustl.edu
Abstract: An ongoing challenge with complex initiatives is the ability to link efforts to outcomes. As part of our evaluation of the Missouri Foundation for Health’s multi-site, multi-program Tobacco Prevention and Cessation Initiative (TPCI), we created the Strength of Community Health Programming Index (SCHPI). The Index serves as a tool to monitor the intensity of TPCI programming at the county level and to link these efforts to tobacco-related outcomes. The Index also serves as an important planning tool and can be used to inform community health planning, policy development and evaluation. The process used to create SCHPI can be adapted to other community health interventions and a range of geographic boundaries. In this session, we will describe the process taken to create and validate SCHPI, how the index is currently used, and recommendations for other evaluators on how they can use the Index in their work.

Session Title: Dealing With Evaluation Challenges and Complexities in Policing and Prison Environments
Multipaper Session 828 to be held in MISSION A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Crime and Justice TIG
Chair(s):
Roger Przybylski, RKC Group, rogerkp@comcast.net
How to Measure Police Performance: Handling of Methodological Challenges in a Complex Police Environment
Presenter(s):
Morten Eikenes, Office of the Auditor General of Norway, morten.eikenes@riksrevisjonen.no
Abstract: In recent years, the Office of the Auditor General has evaluated Norwegian police performance. The evaluations on which this paper is based represent different approaches to evaluating police performance. The police operate in a complex and changing environment, handling many different types of crime with a limited amount of resources. New forms of crime have arisen, such as organized crime, and the police must also handle challenges arising from increased globalisation and the growing mobility of goods, persons, information, and capital. Against this background, policing is a complex and extensive field to evaluate. The paper sheds light on the methodological challenges faced when police performance is to be evaluated and measured, and offers possible solutions that may be useful to the field of evaluation.
Getting Evaluation Findings Out of Prison: The Challenges of Doing Evaluation Work in a Total Institution
Presenter(s):
Eric Graig, Usable Knowledge LLC, egraig@usablellc.net
Abstract: Conducting research in prisons is frequently costly, frustrating, and difficult, with success founded on an upfront understanding of the particular challenges evaluators face when working in these settings. This paper begins by outlining the major features of the prison as a total institution. This discussion is based both on the academic literature and on accounts provided by those who make their lives in prisons, the inmates themselves and those who make their living as their warders. From there, it moves to a discussion of the specific challenges that can make research work in prison so difficult. These include access challenges, logistics, data collection limitations, the sometimes difficult relationship with prison staff, and the paradoxical arbitrariness that characterizes these otherwise highly bureaucratic and rigidly structured institutions. The paper concludes with some ideas, collected over nearly a decade of work, about how to deal with these challenges.

Session Title: Improving Survey Quality
Multipaper Session 829 to be held in MISSION B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the
Strategies for High Response Rates Among Hard-to-Reach Respondents: A Case Study From the Communities Empowering Youth National Evaluation
Presenter(s):
Lindsay Fox, Abt Associates Inc, lindsay_fox@abtassoc.com
Christopher Mulvey, Abt Associates Inc, christopher_mulvey@abtassoc.com
Ryoko Yamaguchi, Abt Associates Inc, ryoko_yamaguchi@abtassoc.com
Marjorie Levin, Abt Associates Inc, marjorie_levin@abtassoc.com
Abstract: National surveys show low response rates among hard-to-reach respondents, most notably busy professionals such as teachers and executive directors. While web surveys are seen as a panacea for these respondents, attaining high response rates remains elusive. We present strategies for attaining high response rates, using the Communities Empowering Youth Evaluation, sponsored by the US Department of Health and Human Services, as a case study. In the administration of a web-based survey to over 550 organizations, we were able to achieve response rates of 98% at baseline and 97% at follow-up. We will have results from our third wave of data collection in time for the conference. Our methods include a user-friendly online survey, ongoing tracking of contact information, and a team of survey administrators (called solutions desk liaisons) through which technical assistance and outreach were provided. We hope others in the evaluation field can benefit from employing these successful methods.
Organizational Survey of Workplace Climate: Differences in Representation Across Response Modes
Presenter(s):
David Mohr, United States Department of Veterans Affairs, david.mohr2@va.gov
Katerine Osatuke, United States Department of Veterans Affairs, katerine.osatuke@va.gov
Scott C Moore, United States Department of Veterans Affairs, scott.moore@va.gov
Boris Yanovsky, United States Department of Veterans Affairs, boris.yanovsky@va.gov
Thomas Brassell, United States Department of Veterans Affairs, thomas.brassell@va.gov
Mark Nagy, Xavier University, nagyms@xavier.edu
Abstract: The Veterans Health Administration (VHA) All Employee Survey (AES) is a voluntary annual survey and an important evaluation and feedback tool in the VHA system. The AES assesses employee job satisfaction, perceptions of workplace climate in employees' specific workgroups, and perceptions of organizational culture in their broad organizations (VHA hospitals, clinics, or program offices). Since 2004, AES results have been consistently used to inform local and national decision-making regarding human capital and workplace management priorities. AES response modes include paper, phone, and web. Given the changing perceptions of technology, we examined representation of employee groups by mode in 2004, 2008, and 2010. We found the VHA workforce to be demographically similar to AES respondents overall; however, comparing AES respondents across modes of survey completion showed demographic differences. We conclude that offering several response modes maximizes demographically balanced participation. This is particularly important when evaluation of perceptions guides subsequent organizational actions.

Session Title: BUILD-ing an Institute for Child Success: The Statewide Systems Design for the South Carolina Institute for Child Success
Panel Session 830 to be held in BOWIE A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Systems in Evaluation TIG and the Human Services Evaluation TIG
Chair(s):
Aimee Sickels, University of South Carolina, aimee@customevaluation.com
Discussant(s):
Julia Coffman, Center for Evaluation Innovation, jcoffman@evaluationexchange.org
Abstract: The purpose of the panel is to share the tangible struggles and benefits associated with systems design efforts, specifically around early childhood collaboratives. The panelists will discuss using the nationally recognized BUILD Initiative model of systems design for the development of the theory of change and evaluation plan for The South Carolina Institute for Child Success. A collaborative initiative of the Children's Hospital, the Greenville Hospital System (GHS), and the United Way of Greenville County (UWGC), the project incorporates multiple levels of stakeholders during the design stage using the BUILD framework of effective systems of delivery design. The panel will include representatives from GHS, UWGC, the University of South Carolina, and the BUILD model author to share with a wider audience the nuances, challenges, and benefits of using an established model to design a system of delivery for children (ages 0-5) in the state of South Carolina.
BUILD-ing The South Carolina Institute for Child Success
Susan Shi, South Carolina Institute for Child Success, susan.shi@furman.edu
Laurie Rovin, United Way Of Greenville County, lrovin@unitedwaygc.org
Desmond Kelly, Greenville Hospital System, dkelly@ghs.org
Linda Brees, Greenville Hospital System, lbrees@ghs.org
Dennis Poole, University of South Carolina, dpoole@mailbox.sc.edu
Looking Back, Thinking Forward: An Historical Analysis of a Child Wellness Initiative Via the BUILD Model
Carl Maas, University of South Carolina, cdmaas@mailbox.sc.edu
Leigh Hewlett, University of South Carolina, hewlett.leigh@gmail.com
To address gaps in the science of child wellness initiatives, a historical comparative analysis will be presented. The analysis will test the BUILD Initiative model in terms of how a child wellness initiative is developed using a theory of change approach. The BUILD Initiative model outlines five areas: context, components, connections, infrastructure, and scale. The analysis examines the beginnings and transformations of a child wellness initiative developed in a suburban county in the Southeast. Implications include evaluation themes pertaining to the development of unifying concepts, service components, and political and expert connections leading to the infrastructure (political and social) required to start and maintain a child wellness initiative.

Session Title: The Role of Evaluation in Informing Local and State Policy Makers
Multipaper Session 831 to be held in BOWIE C on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Jared Raynor, TCC Group, jraynor@tccgrp.com
Must Policy-Makers Choose Between Putting Out Fires and Making Good Policy? Evaluator Observations From "The Belly of the Beast", also known as The Chicago Political Machine
Presenter(s):
Amarachuku Enyia, University of Illinois at Urbana-Champaign, aenyia@illinois.edu
Abstract: This paper seeks to explore the role of evaluation and evaluative thought in big-city policy-making. Chicago's Mayor's Office is an example of a constant crisis-mode environment that leaves its primary policy-makers little opportunity for evaluative thinking on the policy decisions that directly impact the public. How can policy-makers think more evaluatively about critical social, economic, fiscal, and other issues when they are constantly putting out fires while simultaneously contending with the demands of the press, a powerful mayor, and the public? Is this ever possible? This paper draws on works by Thomas Schwandt on evaluative thinking, Jennifer Greene on evaluation and policy-making, Rodney Hopson on contextually relevant evaluation, and others to chart a path to effectively embedding evaluative thought in policy-making to better serve the public, while still putting out the fires that characterize city government and other high-level, fast-paced policy environments.
Evidence of Impact: Informing Legislators to Improve Decision-making
Presenter(s):
Sarah Bradford, Kansas State University, sbradford@ksu.edu
Valerie York, Kansas State University, vyork@ksu.edu
Jan Middendorf, Kansas State University, jmiddend@ksu.edu
Janice Cole, Kansas State University, jrc@ksu.edu
Abstract: Our office, the Office of Educational Innovation and Evaluation, serves as evaluator of a statewide broadband network initiative. Evaluation activities include collection of data related to the impact of the network on its four constituencies, including K-12 school districts, higher education institutions, hospitals, and libraries. The office increases the quality of the evaluation by compiling the impact data into legislative packets for use in the legislative session, to assist Kan-ed in securing continued state funding for the initiative. This presentation will discuss the methodologies used and presentation of data to the legislators. These data are tailored to each legislator to present evidence of the impact of the statewide network on constituents in that legislator’s district or region, including network usage statistics, funding and equipment received, and impact statements and stories. The value of using advocacy packets and a website feedback survey to collect impact data will be discussed.

Roundtable: Use of Implementation Rubrics as Indicators of Evaluation Quality
Roundtable Presentation 832 to be held in GOLIAD on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Elise Arruda Laorenza, Brown University, elise_laorenza@brown.edu
Stephanie Feger, Brown University, stephanie_feger@brown.edu
Joye Whitney, Brown University, joye_whitney@brown.edu
Abstract: In this roundtable, we propose to share our experiences (1) developing two qualitative evaluation rubrics measuring program implementation levels, and (2) ensuring the evaluation rubrics are useful for program planning and policy decision-making. The roundtable discussion will raise questions of how evaluators determine the quality of implementation evaluations when the focus is on qualitative data. We propose to use our experience with implementation rubrics to frame the dialogue around two key indicators of quality, evaluation use and mixed methods. A completed evaluation study of a summer learning program provides an opportunity to reflect on how program stakeholders and evaluators identify factors of quality. While evidence suggests that usefulness was attained, focusing stakeholders on program descriptions rather than quantitative data was a challenge. The roundtable will address the dilemma of when and how to assess the quality of qualitative evaluation designs, and strategies developed to enhance use of evaluation tools.

Roundtable: Toward Improving the Evaluation Practice of Financial Education Programs: Key Issues and the Role of Formative Evaluation
Roundtable Presentation 833 to be held in SAN JACINTO on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Nicole Jackson, University of California, Berkeley, jackson@berkeley.edu
Abstract: The meltdown of financial markets both in the U.S. and worldwide has called into question not only the credibility of financial institutions, but also forms of accountability in financial education. To address this accountability, financial education evaluation has traditionally used more summative approaches to evaluating outcomes related to the learning of savings and investment behaviors, in programs spanning from high school to professional education. These approaches have proven faulty in curtailing three general issues that pervade financial education and the financial service industry more generally: 1) overconfidence bias at the individual level, 2) overconfidence bias reinforced at the group level, and 3) related issues of over-directed self-aggrandizement. This proposal investigates these three issues and argues for more formative, as opposed to summative, approaches to improve the practice of financial education evaluation.

Session Title: Thinking About Thinking: Assessing Critical Thinking Instruction in Higher Education
Demonstration Session 834 to be held in TRAVIS A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Linda Lynch, United States Army, sugarboots2000@yahoo.com
Abstract: This demonstration will feature methods to develop items for the instructional assessment of critical thinking (CT) in higher education, using an experiential learning approach. Assessment items will be developed emphasizing current theories of CT, with a focus on improving the quality of CT instruction. Items will be developed to assess two aspects of CT instruction: instructor delivery of CT skill-based lessons, and implementation of integration opportunities that may occur anywhere in the curriculum. A review of current CT theories will support quality instruction.

Session Title: Data Management: How Not to Lose Face
Skill-Building Workshop 835 to be held in TRAVIS B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Julien Kouame, Western Michigan University, julienkb@hotmail.com
Abstract: It is not much of an exaggeration to argue that evaluation results are determined by data. A well-constructed database can greatly improve evaluation data analysis and, therefore, the evaluation results. The objective of this workshop is to provide participants with the resources and knowledge to optimize database performance and to ensure the quality of results. The workshop will be conducted through hands-on group practice activities. The 45-minute session will be broken into a brief introduction to data management, a presentation of criteria for ensuring data quality, and the actual steps for building a solid database. No prior knowledge of data management is required. Participants will learn data management techniques and tools and how to create a solid foundation for their research databases.

Session Title: Quick and Quality Level Three Evaluations for Corporate Staff Learning
Panel Session 836 to be held in TRAVIS C on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Jaime Quizon, World Bank, jquizon@worldbank.org
Abstract: In Level 3 assessments of staff learning in large learning organizations, we are usually interested in knowing whether a staff learning program resulted in desired changes in the attitudes, skills, and/or behaviors of participants, and the extent to which these changes have manifested in the relevant, work-related performance levels of learning participants. The typical Level 3 evaluation, however, is a complex, time-consuming, and expensive process. Meanwhile, the opportunities for changing the nature and essentials of most staff learning programs are usually time-bound and driven by change management and budget constraints. Thus, delays in obtaining timely and actionable information to correct the ineffective and costly elements of ongoing learning programs may exacerbate costs and delay the programs' contributions to corporate business results. The session will present two studies that compare and contrast rigorous (first study) versus quick but effective (second study) evaluations, based on recent Level 3 assessments of leadership development and communication skills programs for World Bank staff.
A Rigorous Evaluation of a Team Leadership Training Program in a Corporate Environment
Jaime Quizon, World Bank, jquizon@worldbank.org
This study is based on an actual, rigorous L3 evaluation of a leadership training program at the World Bank. It uses a quasi-experimental design that followed seven cohorts of training program graduates over a 12- to 18-month period. The study used "before" and "after" surveys of training program participants (and a matched "control" group), as well as of their respective peers and managers. These surveys were supplemented by interviews and discussions with selected participants (and "control" group members) to elaborate on issues that arose in the course of the surveys and on the responses received from the open-ended questions in the surveys. Interviews with managers, peers, and program administrators allowed the study to better understand the impacts of the leadership training program from different perspectives. This first presentation will focus mainly on the process of undertaking a rigorous L3 evaluation of a staff learning program, highlighting the challenges and benefits of such rigorous studies in a corporate setting.
A Quick and Quality Level 3 Evaluation of the Language and Culture Program in a Corporate Environment
Valya Nikolova, World Bank, vnikolova@worldbank.org
The Language and Culture program Level 3 evaluation is a study of the application of skills gained through the courses offered by the program and through on-the-job experience. It measures improvements in participants' communication and interaction with clients and peers, participants' awareness and understanding of their clients' culture, their ability to transfer skills on the job, and their demonstration of efficient business results. Evaluation respondents include participants from six language courses as well as peers, managers, and colleagues of the participants, which enables a gap analysis of different stakeholders' perceptions of the program's effects. The evaluation draws on existing sources of data (Level 1 participant feedback and Level 2 achievement tests and oral proficiency outcomes) while providing a further in-depth assessment of the end results of the program, focusing on participant, peer, and management perceptions of effective communication and appropriate interaction between Bank staff and clients and partners. The evaluation utilizes a mix of quantitative and qualitative approaches, with emphasis placed on qualitative feedback.

Session Title: Internal Evaluators: Contextual Considerations and Roles
Multipaper Session 837 to be held in TRAVIS D on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the
Chair(s):
Magdalena Rood, Third Coast R&D Inc, mrood@thirdcoastresearch.com
Discussant(s):
Joelle Greene, National Community Renaissance, jgreene@nationalcore.org
Making Sense of the Contemporary Roles of Internal Evaluators
Presenter(s):
Boris Volkov, University of North Dakota, bvolkov@medicine.nodak.edu
Abstract: This paper will describe and analyze the roles of internal evaluators in contemporary organizational settings. It will provide an overview of various contexts; the parameters and dimensions pertinent to internal evaluation and internal evaluators within organizational settings; organizational demands on internal evaluation professionals; and evaluators' roles generated in response to these demands. Critical issues in the role of the internal evaluator will be illuminated in the context of modern organizations influenced by various traditions and movements, including quality assurance, continuous quality improvement, performance monitoring, and Monitoring and Evaluation (M&E), to name just a few. It has become more evident that the internal evaluator's job is not just about using appropriate evaluation methodology or building complex M&E systems, but increasingly about dealing with intra-organizational obstacles to quality evaluation. New roles certainly require reconfigured, unorthodox methods and styles of work to effectively meet the needs of emerging learning organizations.
Program Manager as Internal Evaluator: Challenges and Lessons Learned
Presenter(s):
Anthony Kim, University of California, Berkeley, tonykim1@gmail.com
Abstract: In real-world settings, program managers often have to play a dual role as internal evaluator. This situation arises for a couple of reasons. To begin with, many organizations do not have the resources necessary to assign independent evaluators to evaluate programs. Secondly, a growing number of organizations have been cutting or eliminating their budgets for independent internal or external evaluators due to the current economic climate. This paper outlines the author's experience as a program manager at an education nonprofit as he was "forced" to conduct an internal evaluation of his program. The author faced conflicts of interest, since his objectives as a program manager often did not coincide with his objectives as an internal evaluator. Reflecting on these experiences, the author questions the increasing reliance on program managers for evaluation. Program managers, by the nature of their positions, are limited in their ability to perform honest and objective evaluations of their own programs. The author's reflections can serve as a cautionary tale for organizations against relying solely on program staff for evaluation.

Session Title: Applications and Issues With Regression Discontinuity Designs
Multipaper Session 838 to be held in INDEPENDENCE on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Karen Larwin, University of Akron, Wayne, drklarwin@yahoo.com
Replicating Experimental Impact Estimates Using a Regression Discontinuity Design
Presenter(s):
Jillian Berk, Mathematica Policy Research, jberk@mathematica-mpr.com
Phillip Gleason, Mathematica Policy Research, pgleason@mathematica-mpr.com
Alexandra Resch, Mathematica Policy Research, aresch@mathematica-mpr.com
Abstract: This study will generate RD estimates of program impacts using data from two experimental education evaluations for which experimental impact estimates already exist. For each of the two existing evaluations, one of educational technology products and one of Teach for America, we construct an analysis file from the original experimental data file that could have arisen from an RD design. In particular, we choose a baseline characteristic that could plausibly be used to assign individuals to treatment and construct a new sample consisting of treatment students from one side of an arbitrary threshold and control students from the other side of the threshold. With the newly constructed analysis file, we estimate impacts using RD methods and compare these results with the experimental estimate. The study provides evidence on the validity of RD designs and examines the extent to which this validity depends on the specifics of the design.
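As a rough illustration of the construction described above (not the authors' actual code or data), the sketch below turns an experimental file into a pseudo-RD sample by keeping treatment students on one side of an arbitrary cutoff on a baseline score and control students on the other, then fits a local linear model on each side. All column names, the cutoff, and the bandwidth are hypothetical.

```python
# Hedged sketch of constructing a pseudo-RD sample from experimental data
# (hypothetical column names; not the authors' code).
import numpy as np
import pandas as pd

def rd_estimate(df: pd.DataFrame, cutoff: float, bandwidth: float) -> float:
    """Local linear RD estimate of the impact at the cutoff.
    df columns: treated (0/1), baseline (running variable), outcome."""
    # Keep treated units above the cutoff and controls below it,
    # as if the baseline score had determined assignment.
    rd = df[((df.treated == 1) & (df.baseline >= cutoff)) |
            ((df.treated == 0) & (df.baseline < cutoff))].copy()
    rd = rd[(rd.baseline - cutoff).abs() <= bandwidth]

    # Regress the outcome on treatment, the centered running variable, and
    # their interaction, allowing separate slopes on each side of the cutoff.
    r = rd.baseline - cutoff
    X = np.column_stack([np.ones(len(rd)), rd.treated, r, r * rd.treated])
    beta, *_ = np.linalg.lstsq(X, rd.outcome, rcond=None)
    return beta[1]  # jump at the cutoff = RD impact estimate

# Usage with a hypothetical experimental file (columns: treated, baseline, outcome):
# df = pd.read_csv("experiment.csv")
# print(rd_estimate(df, cutoff=df.baseline.median(), bandwidth=10))
```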
An Evaluation of a State-Funded Prekindergarten Program Utilizing a Regression Discontinuity Design
Presenter(s):
Jamie Coburn, Tennessee Technological University, jamie.coburn@tn.gov
George Chitiyo, Tennessee Technological University, gchitiyo@tntech.edu
Abstract: The relationship between attendance in public prekindergarten programs and school readiness skills in Tennessee's schools was examined using both factorial ANOVA and regression discontinuity, taking into account students' socioeconomic status as measured by their eligibility for free/reduced lunch. Both the regression discontinuity and ANOVA results indicated a significant impact of prekindergarten participation on school readiness skills. Kindergarten students who were from low socioeconomic backgrounds and attended prekindergarten had greater gains in school readiness skills than kindergarten students who were not from low socioeconomic backgrounds. Also, students who were eligible for free/reduced lunch and attended prekindergarten performed better than kindergarteners who were eligible for free/reduced lunch and did not attend prekindergarten.

Session Title: Overcoming the Limitations of the Educational Context to Increase Rigor
Multipaper Session 839 to be held in PRESIDIO A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Kathleen Haynie, Haynie Research and Evaluation, kchaynie@stanfordalumni.org
Using an Interrupted Time Series Design to Evaluate the Impact of a Professional Development Program for Teachers
Presenter(s):
Frederic Glantz, Kokopelli Associates LLC, fred@kokopelliassociates.com
Abstract: Evaluators typically use comparison groups of students in other schools or in other classrooms within the same school to evaluate the impact of interventions designed to improve student outcomes. Neither design is appropriate in the case of voluntary teacher professional development programs. First, it is almost impossible to find truly comparable schools. Second, in voluntary programs for teachers, self-selection into the intervention results in biased impact estimates. An interrupted time series design avoids these problems by using participating teachers as their own comparison group. This paper discusses an evaluation of a professional development program designed to improve teaching skills in mathematics. The primary outcome measure is individual students' placement on annual statewide standards-based assessments (SBA). The evaluation compared SBA scores of participating teachers' students for several years prior to participation in the intervention to the SBA scores of the same teachers' students for several years following their participation in the intervention.
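As a hedged illustration only (hypothetical column names, not the author's actual model), an interrupted time series of this kind can be expressed as a segmented regression in which years are aligned to each teacher's own participation date:

```python
# Hedged sketch of a segmented (interrupted time series) regression;
# hypothetical column names, not the paper's actual model or data.
import numpy as np
import pandas as pd

def its_estimate(df: pd.DataFrame) -> dict:
    """df has one row per teacher-year with columns:
       rel_year = year minus that teacher's own participation year,
       post     = 1 for years after participation, else 0,
       mean_sba = mean standards-based assessment score of the teacher's students."""
    since = np.where(df.post == 1, df.rel_year, 0)   # years since participation
    X = np.column_stack([np.ones(len(df)), df.rel_year, df.post, since])
    beta, *_ = np.linalg.lstsq(X, df.mean_sba, rcond=None)
    # beta[2]: immediate level change at participation; beta[3]: change in trend.
    return {"level_change": beta[2], "trend_change": beta[3]}

# Usage with a hypothetical teacher-year file:
# print(its_estimate(pd.read_csv("teacher_year_scores.csv")))
```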
Collaborating With Clients to Develop Psychometric Parallel Teacher and Student Evaluation Measures
Presenter(s):
Kathryn Race, Race & Associates Ltd, race_associates@msn.com
Abstract: The benefits of using evaluation measures that have demonstrated reliability and validity are well documented in the evaluation and social science literature. Yet it can be quite challenging for evaluators working with small programs, especially given limited resources, to develop evaluation measures that are sensitive and relevant to local programs while at the same time supported by demonstrated psychometric standards. The purpose of this presentation is to describe how, as an external evaluator, we worked cooperatively with a client to create such measures: one focused on teacher attitudes toward teaching science and the other on students' assessment of the teaching methods experienced in their science classrooms. Each measure was sensitive to the local program's model, and the reliability and validity of each measure were investigated. How we negotiated issues such as the use of shared resources and contractual matters such as intellectual property, as well as lessons learned, will also be highlighted.

Session Title: Using Logic Models to Build Evaluation Capacity at the Community Level: Enhancing Program Effectiveness by Building Evaluation Skills Among Community Coalitions
Demonstration Session 840 to be held in PRESIDIO B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Tiffany Comer Cook, University of Wyoming, tcomer@uwyo.edu
Laura Feldman, University of Wyoming, lfeldman@uwyo.edu
Abstract: In this demonstration, presenters will illustrate a step-by-step process for successfully building the evaluation capacity of community coalitions. The presenters will offer examples from a pilot project conducted with tobacco prevention and control programs. Evaluators worked with local coalitions to help them transform their strategic plans into logic models and then to use their logic models as guides for identifying ways to improve and direct program impact and, consequently, their progress in achieving project goals. The coalitions learned how to use logic models to link activities, outputs, and outcomes and to inform the documentation of the short- and long-term impacts of day-to-day activities. They also learned to collect data purposefully, to track informative outputs, and to gather easily available evidence of program impact. The presentation will also focus on the lessons learned in implementing this approach to improve both evaluation capacity and evaluation utility.

Session Title: Evaluating Mental Health Peer Support and Peer Specialist Programs
Multipaper Session 841 to be held in PRESIDIO C on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Gitanjali Shrestha, Washington State University, gitanjali.shrestha@email.wsu.edu
Evaluation of Peer Support Programs: Implications for Utility and Accuracy
Presenter(s):
Glenn Landers, Georgia State University, glanders@gsu.edu
Mei Zhou, Georgia State University, mzhou1@gsu.edu
Abstract: In 2007, the Centers for Medicare and Medicaid Services issued a letter to states providing guidance for the development of Medicaid billable peer support services. Because of the likelihood that peer support programs will continue to expand nationwide with support of the CMS guidance, it is important, from a public policy perspective, to better understand the mental health service delivery costs associated with peer support. This study investigated whether or not peer support impacted crisis stabilization costs, psychiatric hospitalization costs, and total Medicaid costs within one state’s Medicaid system using a retrospective quasi-experimental design. Peer support was associated with significantly higher total Medicaid cost, significantly lower facility cost, and significantly higher prescription drug and professional services costs. Short-term increases in state Medicaid spending for peer support programs may support community integration, which, in turn, may lead to lower Medicaid spending in the long-term.
Mixed Methods Evaluation of the Massachusetts Peer Specialist Training and Certification Program
Presenter(s):
Heather Strother, University of Massachusetts, heather.strother@umassmed.edu
Linda Cabral, University of Massachusetts, linda.cabral@umassmed.edu
Kathy Muhr, University of Massachusetts, kathy.muhr@umassmed.edu
Laura Sefton, University of Massachusetts, laura.sefton@umassmed.edu
Christine Clements, University of Massachusetts, christine.clements@umassmed.edu
Abstract: A growing trend in mental health systems is for individuals with mental illness and experience with mental health services to work as Peer Specialists. In this role, they serve as role models and provide support, education and advocacy to clients using mental health services. The authors recently completed an evaluation of a training program that prepares Peer Specialists for this work in the Massachusetts mental health delivery system. Using a mixed methods approach, the evaluation explored a) strengths and opportunities to enhance the training program and b) the degree to which the training program is establishing a competent workforce of certified peer specialists throughout Massachusetts. This paper will describe the overall evaluation aims and questions, rationale for using a mixed methods approach, and details of the study design. It will also present integrated findings regarding the training’s impact on its participants and a description of factors that explain this impact.

Roundtable: Negotiating Multiple Challenges While Maintaining Quality: Lessons From Urban:Rural Alaska
Roundtable Presentation 842 to be held in BONHAM A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Rosyland Frazier, University of Alaska, Anchorage, anrrf@uaa.alaska.edu
Alexandra Hill, University of Alaska, Anchorage, anarh1@uaa.alaska.edu
Abstract: We discuss the challenges of evaluating a student exchange program between urban:rural Alaska schools. These include limitations on evaluation design stemming from the self-selected nature of the group; the fact that participants are minors; and the difficulty of measuring changes in attitude. These challenges are multiplied by the need to build relationships with new program staff who were not part of the evaluation design and are not familiar with empowerment and evaluation principles. Each of these affects the ability to collect the adequate and appropriate data that are essential to evaluation quality.

Session Title: Evaluation of Underrepresented Student College and Career Choice Programs
Multipaper Session 843 to be held in BONHAM B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the College Access Programs TIG
Chair(s):
Kurt Burkum, ACT, kurt.burkum@act.org
Evaluation Strategies for Educational Career Pathway Programs: Targeting Underrepresented High-Performing Youth
Presenter(s):
Sandra Eames, St Edward's University, seames@austin.rr.com
Fred Estrello, St Edward's University, alfrede@stedwards.edu
Abstract: In the age of teacher accountability, school report cards, and No Child Left Behind, evaluation becomes a piece of the puzzle that helps shine light on a program's effectiveness through the use of data. The purpose of this evaluation study was fivefold: accountability, improvement, understanding, and dissemination of effective services for target beneficiaries, as well as empowering the program for better sustainability and transportability, the ultimate goal being to increase access to higher education for underrepresented populations. The evaluation of Project Jumpstart, operated at St. Edward's University in Austin, Texas, was designed as a utilization-focused approach to examine the effectiveness of a teacher recruitment program in attracting high-performing, underrepresented high school students from the Austin Independent School District to earn a college degree with teacher certification at St. Edward's University through an innovative 2+2+2 pathway in career and technical education.
Evaluation of Summer Enrichment Programs as a Gateway for Low-Income Students to Experience and Choose College
Presenter(s):
Mehmet Öztürk, Arizona State University, ozturk@asu.edu
Brian Garbarini, Arizona State University, brian.garbarini@asu.edu
Kerry Lawton, Arizona State University, klawton@asu.edu
Abstract: This paper will demonstrate the importance of embedded evaluation in summer enrichment programs and illustrate methods that may be used to provide quality evaluations of such programs. For this purpose, the methodology for an ongoing evaluation of ASYouth will be presented. ASYouth is a program developed to provide an overall support system so that needy and deserving kids may participate in summer activities. The goals of the ASYouth evaluation have been to increase (1) understanding of the relationship between parental and child knowledge and understanding of college, (2) awareness of the perceptions low-income children have toward college, and (3) exploration of how children's perceptions of college and related matters may change upon completion of a university-based summer enrichment program. Given that the knowledge base regarding quality evaluation of summer programs is limited, a discussion of evaluation design in this area will be greatly appreciated by evaluation researchers, practitioners, and policy-makers.

Session Title: Fidelity Instruments and School Burden
Panel Session 844 to be held in BONHAM C on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
David Merves, Evergreen Educational Consulting LLC, david.merves@gmail.com
Abstract: One of the challenges to many educational interventions is the development, testing, and use of quality fidelity instruments. Over the last ten years, the U.S. Department of Education and other federal and state agencies have promoted the implementation of Response to Intervention models. One of the more established models is Positive Behavioral Interventions and Supports (PBIS). While not a model that provides a scripted curriculum, a number of strategies and data collection tools are recommended. In this session, presenters will report on an analysis of four of the recommended PBIS instruments, including preliminary findings of a school survey to determine (1) what fidelity instruments are used on an annual basis and (2) how schools are using the data collected via these instruments. Participants will discuss implications of these survey data on fidelity measures and the burden to schools.
Positive Behavior Intervention Support Model Overview and Instrumentation
Patricia Mueller, Evergreen Educational Consulting LLC, eec@gmavt.net
This session will briefly describe the Positive Behavior Intervention Support (PBIS) intervention model and how it aligns with a Response to Intervention framework. An overview of efficacy research of the PBIS model will be presented, with emphasis on the array of instruments available for PBIS schools to evaluate the program’s success. The presenter, Dr. Mueller, is the lead evaluator on several large-scale federally funded grant projects in 3 states that utilize PBIS as a key intervention.
A Survey of Positive Behavior Intervention Support Schools: Using Fidelity Measures to Inform Decisions
Brent Garrett, Pacific Institute for Research and Evaluation (PIRE), bgarrett@pire.org
In this session, Dr. Garrett, lead evaluator for PBIS initiatives in four states, will present the results of a survey of PBIS schools from three states. The purpose of the survey was to assess what fidelity instruments were used on an annual basis and how schools used the data collected via these instruments. Implications of the results will be the focal point of a discussion with participants relative to the use and application of fidelity measures for data-based decision making.

Session Title: Assessing the Use of Test Score Data to Inform Decisions About Student Achievement
Multipaper Session 845 to be held in BONHAM D on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Tara Pearsall, Savannah College of Art and Design, tpearsal@scad.edu
Discussant(s):
Susan Henderson, WestEd, shender@wested.org
Data Mining Electronically Linked Grade Three Standardized Assessment Scores From Kindergarten Assessments to Identify Performance Patterns
Presenter(s):
Deborah Carran, Johns Hopkins University, dtcarran@jhu.edu
Jacqueline Nunn, Johns Hopkins University, jnunn@jhu.edu
Tamara Otto, Johns Hopkins University, tamaraotto@jhu.edu
Abstract: Linkage of unique student identifiers across grade levels has generated renewed interest in predicting high-stakes test scores at early ages. Data mining, an iterative process using large extant data warehouses to discover meaningful patterns in data, examined the relationship between kindergarten assessments and grade 3 high-stakes reading and math assessments. A total of 152,105 kindergarten students were identified as receiving a kindergarten assessment between 2002 and 2005. Of these students, 100,957 were matched with their grade 3 standardized math score and 100,978 with their grade 3 reading score, representing a 66% match rate. Using Classification and Regression Tree modeling, analysis results are presented in tree-like figures with branches representing the splitting of cases based on values of predictor attributes. Results indicated that the kindergarten assessment is a moderately successful predictor of later high-stakes testing performance; math performance was predicted better than reading.
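As a hedged illustration of the kind of classification-tree analysis described (synthetic data and hypothetical feature names, not the study's data or code), a decision tree can recover simple splitting rules that predict grade 3 proficiency from kindergarten assessment scores:

```python
# Hedged CART-style sketch (invented data and column names; not the study's code).
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a linked kindergarten -> grade 3 file.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "k_literacy": rng.normal(100, 15, n),   # kindergarten literacy score
    "k_math": rng.normal(100, 15, n),       # kindergarten math score
    "frl": rng.integers(0, 2, n),           # free/reduced lunch flag
})
# Grade 3 proficiency loosely driven by kindergarten performance in this toy example.
df["g3_proficient"] = ((df.k_literacy + df.k_math) / 2
                       + rng.normal(0, 20, n) > 100).astype(int)

features = ["k_literacy", "k_math", "frl"]
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200).fit(
    df[features], df["g3_proficient"])

# The fitted branches are the "splits on predictor attributes" the abstract describes.
print(export_text(tree, feature_names=features))
print("Training accuracy:", round(tree.score(df[features], df["g3_proficient"]), 3))
```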
Using Student Test Scores to Evaluate Performance
Presenter(s):
Steven Glazerman, Mathematica Policy Research, sglazerman@mathematica-mpr.com
Liz Potamites, Mathematica Policy Research, lpotamites@mathematica-mpr.com
Abstract: There are many ways to use student test scores to evaluate the effectiveness of teachers or schools. This paper compares regression-based “value added” indicators to alternative estimators that are potentially simpler and cheaper. Such alternatives include those based on changes in average test scores for a given cohort in successive grades (average gains) and those based on changes in successive cohorts’ average scores in the same grade (cohort changes). We argue that while average gain indicators can potentially provide useful information, they have important limitations that must be taken into account. Cohort change indicators, however, are misleading and should be avoided.
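As a toy illustration of the two simpler indicators named in the abstract (not the authors' analysis), the following sketch contrasts an average gain with a cohort change using made-up school-level means:

import pandas as pd

# Hypothetical school-level mean scores by year and grade
scores = pd.DataFrame({
    "year":       [2008, 2008, 2009, 2009],
    "grade":      [4, 5, 4, 5],
    "mean_score": [240.0, 252.0, 243.0, 255.0],
})

def mean_score(year, grade):
    return scores.query("year == @year and grade == @grade")["mean_score"].iloc[0]

# Average gain: the same cohort followed from grade 4 in 2008 to grade 5 in 2009
average_gain = mean_score(2009, 5) - mean_score(2008, 4)

# Cohort change: successive cohorts compared in the same grade (grade 4, 2009 vs 2008)
cohort_change = mean_score(2009, 4) - mean_score(2008, 4)

print(f"average gain = {average_gain:.1f}, cohort change = {cohort_change:.1f}")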

Session Title: Challenges and Recommendations From Evaluating Autobody Shop Environmental Compliance Programs
Multipaper Session 846 to be held in BONHAM E on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Environmental Program Evaluation TIG and the Business and Industry TIG
Chair(s):
Dale Pahl,  United States Environmental Protection Agency, pahl.dale@epa.gov
Use of Mixed Methods to Evaluate Three States' Environmental Results Programs (ERPs)
Presenter(s):
John Heffelfinger, United States Environmental Protection Agency, heffelfinger.john@epa.gov
Scott Bowles, United States Environmental Protection Agency, bowles.scott@epa.gov
Abstract: This evaluation examined the experience of three states, Delaware, Maine, and Rhode Island, which established Environmental Results Programs (ERPs) for the auto body repair sector. Auto body shops can pose a range of cross-media environmental and/or health concerns, including air emissions, water discharges, hazardous materials handling and waste management, and worker health and safety concerns. Each of these states' ERPs incorporates voluntary self-certification by auto body shops, compliance assistance/training workshops, inspection of a random sample of facilities before and after program implementation, and use of statistical analyses to estimate overall compliance. While these states are similar in the types of programs they implemented, they differ in several circumstances that could affect ERP implementation, participation of auto body shops, and outcomes. The evaluation describes each state's individual experience with its ERP, focusing primarily on outcomes, changes in facility management practices, and costs.
The Statistically Valid Pilot: Taking Advantage of Unique Opportunities to Design and Implement Rigorous Program Evaluations
Presenter(s):
Tracy Dyke Redmond, Industrial Economics Inc, tdr@indecon.com
Terell Lasane, United States Environmental Protection Agency, lasane.terell@epamail.epa.gov
Abstract: Ascertaining a program's definitive effects is difficult in field settings where multiple factors pose rival explanations for the program's causal impacts. In advance of a forthcoming regulation affecting small businesses, EPA planned to offer compliance assistance to regulated entities. To assess its results, EPA convened a group of the program's managers, evaluation experts, and other stakeholders to design a statistically valid program evaluation. The evaluation design includes random assignment to treatment and control groups, random selection from an identifiable universe, and a difference-in-differences approach to analyze the two comparison groups over time. This innovative approach to evaluation design may yield impact evaluation results, something that is rarely feasible in federal programs because of multiple rival causal explanations, limited fiscal resources, and other institutional restrictions. The lead evaluator will present the evaluation's history, design, implementation, initial results, and potential applications of the design.
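A minimal difference-in-differences sketch under assumed variable and file names (illustrative only, not the EPA evaluation's actual code): 'compliant' is an outcome observed before and after the assistance rollout for randomly assigned treatment and control facilities.

import pandas as pd
import statsmodels.formula.api as smf

# Assumed facility-level panel: facility_id, treated (0/1), post (0/1), compliant (0/1 or score)
df = pd.read_csv("facility_panel.csv")

# The coefficient on treated:post is the difference-in-differences estimate of the program
# effect; clustering the standard errors by facility accounts for repeated observations.
model = smf.ols("compliant ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["facility_id"]}
)
print(model.summary())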

Session Title: The Use of Program Progress Reports in Federal Government Program Evaluations
Think Tank Session 847 to be held in Texas A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Government Evaluation TIG
Presenter(s):
Rose Ann M Renteria, Academy for Educational Development, rrenteria@aed.org
Abstract: The session will focus on the U.S. government's movement toward program progress reports to systematically compile outcome data for grantees in the field of community economic development. Dr. Rose Ann Renteria will orient attendees to the issue and relevant context (e.g., defining the reports and how they will be used in the future). Attendees will break into small groups to explore and answer three guiding questions and reconvene to share their enhanced understanding. The questions are: 1) How does the use of these reports improve the outcome data collected for program evaluations and for the client? 2) What are the anticipated improvements and challenges to data collection (e.g., use in the field by grantees, streamlining collection, evaluation 101 with grantees and the client)? and 3) What can evaluators do to plan for the use of program progress reports in the future? Each breakout group will address the three questions, examining them from a particular viewpoint: a) evaluation firm (e.g., private or non-profit), b) consultant, and/or c) program manager for programs and/or interventions.

Session Title: Evaluation in Community Colleges
Multipaper Session 848 to be held in Texas B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
William Rickards,  Alverno College, william.rickards@alverno.edu
Discussant(s):
George Reinhart,  University of Maryland, greinhart@cals.umd.edu
Perspectives From the Field: Evaluation Approach of and Lessons Learned From an Evaluation of a Community College Learning Community
Presenter(s):
Charyl Staci Yarbrough, Rutgers University, cyarbrou@rci.rutgers.edu
Bill Mabe, Rutgers University, billmabe@rci.rutgers.edu
Abstract: Community colleges play a vital role in America's educational system, offering millions of low-income and disadvantaged students the skills they need for economic success. Unfortunately, most of the students enrolled never graduate. Hundreds of colleges have implemented learning communities (LCs) to address this problem. LC students enroll as a cohort in two or more common classes. Surprisingly, very few studies have rigorously evaluated the implementation and effectiveness of LCs. Our paper presents findings and lessons learned from a two-year process evaluation of the implementation of an LC in math and science at an urban community college with single-digit graduation rates. This paper is a resource for evaluators who are looking to establish research partnerships with community colleges. It outlines lessons learned about the culture and unique needs of urban community colleges and reports the successes, failures, and obstacles encountered in using our mixed-methods approach.
Institutional and Programmatic Self-Evaluation in Higher Education Development: Telling Their Own Stories in a Consortium of Two-Year Institutions
Presenter(s):
William Rickards, Alverno College, william.rickards@alverno.edu
Abstract: In the current climate of higher education assessment, evaluation is entangled with assessment practice, accreditation, and accountability. Public rhetoric has focused deeply on accountability in language and intent, with little effort to deal with the complexity of teaching and learning across post-secondary options. While some recent assessments, such as the Collegiate Learning Assessment (CLA) and AAC&U's VALUE project, are focused on student learning and performance, their overwhelming purposes and uses seem more oriented to different forms of accountability. While potentially serving as forms of self-assessment, such efforts seem to lack the power of evaluation, as Cronbach might have suggested, to illumine these educational programs. This presentation uses the experience of a consortium of two-year colleges to examine how the participants used analysis of their learning initiatives (reporting on progress, giving and getting feedback to one another) to develop and strengthen evaluative narratives for their own efforts.

Session Title: Establishing a Monitoring, Evaluation and Learning (MEL) System for Policy Reform: Lessons From Oxfam America’s Advocacy on More Country Ownership of United States Foreign Aid
Expert Lecture Session 849 to be held in  Texas C on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Omar Ortez, Oxfam America, oortez@oxfamamerica.org
Abstract: Thanks to a Gates Foundation grant, Oxfam America is innovating with practical methodologies to establish a Monitoring, Evaluation and Learning (MEL) system within its Aid Effectiveness Team to measure the effects of policy advocacy on making US foreign aid more country-led. We have organized policy reform asks along a three-dimensional ownership framework: information (donors informing recipient countries of what they are funding); capacity (helping countries manage their own development and supporting citizens to hold them accountable); and control (letting countries lead their own development agendas). The MEL system tracks how these concepts are entering the discourse of US government policy makers and influential actors in the international development community, and how their positions shift over time. It also tracks how specific policy asks are influencing legislation and the operational processes where aid reform actually takes place. It ultimately aims at closing the feedback loop between new policies adopted and their actual implementation.

Session Title: Informing Portfolio Management Using Tracking Systems and Bibliometrics
Multipaper Session 850 to be held in Texas D on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Juan Rogers,  School of Public Policy Georgia Institute of Technology, jdrogers@gatech.edu
The Impact of the United States-China Collaboration on China's Research Performance: Evidence From Nanotechnology Publication
Presenter(s):
Li Tang, Georgia Institute of Technology, tang006@gmail.com
Abstract: The impact of international collaboration on research performance has been extensively explored in prior research. Despite the rich volume of work, the findings remain controversial. Analyzing the CVs of 77 Chinese nanotechnology scientists and their longitudinal publication records, this study found that Sino-US research collaboration has a positive impact on China's research performance. This impact increases proportionately over time and is insensitive to subject category.
The Benefits and Challenges of Participatory Tracking Systems for Monitoring Institutional Change
Presenter(s):
Marc Brodersen, University of Colorado, Denver, marc.brodersen@ucdenver.edu
Kathryn Nearing, University of Colorado, Denver, kathryn.nearing@ucdenver.edu
Susan Connors, University of Colorado, Denver, susan.connors@ucdenver.edu
Bonnie Walters, University of Colorado, Denver, bonnie.walters@ucdenver.edu
Abstract: In this paper we discuss best practices in setting up and using program monitoring systems to track the progress of organizational change initiatives in a way that also promotes participatory evaluation practice. Effective and efficient use of these systems can help evaluators and other stakeholders systematically track progress toward a large number of specific organizational goals, while maintaining the flexibility to respond to changing situations and emerging issues. Evaluation professionals are often called upon to assist organizations as they implement complex structural and systemic changes. Assisting with the monitoring of these organizational changes can be difficult and time consuming. However, when done properly, it can promote deeper thought about program goals, theories of change, and achievable outcomes. Working collaboratively with clients to establish and continually refine organizational benchmarks and measurable outcomes (indicators) not only fosters accuracy in the monitoring system, but also promotes stakeholder buy-in and collaboration.

Session Title: Deeper Implementation of the Student Success Learning to Eighteen Strategy Through Developmental Evaluation
Multipaper Session 851 to be held in Texas E on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Evaluation Use TIG , the Organizational Learning and Evaluation Capacity Building TIG, and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Michael Quinn Patton,  Utilization-focused Evaluation, mqpatton@prodigy.net
The Use of Student Success Indicators in the Student Success Learning to 18 Strategy: The Ontario Experience
Presenter(s):
David Euale, Ontario Ministry of Education, david.euale@ontario.ca
Abstract: The Student Success Learning to 18 Strategy is a province-wide strategy designed to ensure that all students successfully complete their secondary schooling with the knowledge and dispositions required to pursue the work and learning opportunities available to them following graduation. The strategy encourages innovative and flexible educational opportunities that reflect regional, social, and cultural differences affecting students' learning experiences and outcomes. Beginning in 2004-05, new accountability requirements were introduced, including annual report-backs by boards/school authorities on the original nine indicators of success to assist in local and provincial monitoring of the initiative's impact on student outcomes. The presentation will explore how the Student Success Indicators are used to measure outcomes and to inform secondary schools, boards, and the province of progress and areas in need of improvement.

Session Title: Research on Evaluator Competencies
Multipaper Session 852 to be held in Texas F on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
John LaVelle,  Claremont Graduate University, john.lavelle@cgu.edu
Discussant(s):
John LaVelle,  Claremont Graduate University, john.lavelle@cgu.edu
The Search for Evaluator Competency Inventories
Presenter(s):
Jeanette Gurrola, Claremont Graduate University, jeanette.gurrola@cgu.edu
Abstract: Using Stevahn, King, Ghere, and Minnema's (2005) Essential Competencies for Program Evaluators, a content analysis of articles and chapters representing three theoretical evaluation approaches was conducted. The goal of the study is to examine the key competencies required to design and implement different evaluation approaches. The data collected will be analyzed by evaluation approach to determine how the approaches compare and differ in their emphasis on specific evaluator competencies. The results of this study have numerous implications that can strengthen the quality of future evaluations through avenues such as evaluator training and additional research on evaluation specific to particular approaches. Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43-59.
Constructing a Measure for Evaluator Competencies: Exploratory and Confirmatory Factor Analyses Approach
Presenter(s):
Jie Zhang, Syracuse University, jzhang08@syr.edu
Abstract: In a practice-based field like evaluation, evaluators should be equipped with the essential knowledge, skills, and dispositions defined as evaluator competencies (King, 2005) in order to perform their required tasks. Evaluator competencies are what separate evaluation from other professions. Though they are important, there is no specific and comprehensive set of competencies that evaluators and evaluation training programs can follow. The purpose of this study is to fill that void by creating a scale of evaluator competencies and testing its reliability and validity within a factor-analytic framework. The research process is guided by Wilson's (2005) four steps of constructing measures: creating construct maps, designing items, identifying the outcome space, and establishing the measurement model. An exploratory factor analysis is first conducted to extract factors from the set of created items; a subsequent confirmatory factor analysis examines the validity (divergent and convergent) of the scale and forms a foundation for future research.
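A rough sketch of the exploratory step only, assuming a hypothetical matrix of item responses (rows are respondents, columns are competency items); a confirmatory factor analysis of the retained structure would typically be fit in a dedicated SEM package rather than scikit-learn:

import pandas as pd
from sklearn.decomposition import FactorAnalysis

items = pd.read_csv("competency_items.csv")     # hypothetical item-response matrix
efa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
efa.fit(items)

# Loadings show which items cluster on which factor (components_ is factors x items, so transpose)
loadings = pd.DataFrame(
    efa.components_.T,
    index=items.columns,
    columns=[f"factor_{i + 1}" for i in range(4)],
)
print(loadings.round(2))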

Session Title: Using Observational Assessments to Measure and Improve Youth Program Quality
Multipaper Session 853 to be held in CROCKETT A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Kate Walker,  University of Minnesota, kcwalker@umn.edu
Matching Method to Purpose: Using Standardized Observational Assessment to Enhance Point of Service Quality
Presenter(s):
Tom Devaney, Weikart Center for Youth Program Quality, tom@cypq.org
Abstract: This paper will describe the Youth Program Quality Assessment (Youth PQA; HighScope, 2005), a standardized, research-validated observational assessment tool designed for out-of-school-time settings. Participants will learn how various out-of-school networks are using performance data produced by the Youth PQA to engage program managers and front-line staff in continuous quality improvement initiatives. They will also discover how to use the Youth PQA to produce performance data aligned with local improvement objectives and purposes. Specifically, this paper will describe how the Youth PQA can be deployed for program self-assessment (appropriate for low stakes, non-normative learning purposes), external assessment (appropriate for higher stakes, normative comparisons and performance accountability), as well as various hybrids that combine elements from each approach.

Session Title: The Thinking Corporation
Expert Lecture Session 854 to be held in  CROCKETT B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Business and Industry TIG
Presenter(s):
David Frood, Independent Consultant, davidfrood@hotmail.com
Abstract: Having consulted to big business for sixteen years, I have spoken with many people whose innovative ideas for products, services, new markets, and complete innovations were never commercialized. Over the past three years, while helping entrepreneurs and inventors find seed capital, I witnessed many good ideas that are far too difficult to get off the ground: processing condensate to produce cleaner fuel, an add-on to car engines that increases fuel efficiency by up to 35% while producing nearly clean emissions, and a new energy device capable of supplying energy to households at negligible pollution levels. All of these, at last look, were still struggling to get off the ground. Big business has a role to play in implementing employee-generated ideas and providing a route to market for entrepreneurs and inventors. To accomplish this, businesses need to change their culture and behavior.

Session Title: Reading Sociograms
Expert Lecture Session 855 to be held in  CROCKETT C on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the
Presenter(s):
Maryann Durland, Durland Consulting, mdurland@durlandconsulting.com
Abstract: The application of social network analysis (SNA) requires data, data analysis, and sociograms (maps of the network) in order to establish findings and results. How do the three go together, and how do you read a sociogram and connect the analysis to it? This is the main question the author gets from newcomers to SNA. This expert lecture will illustrate how using sociograms alone, or the results of data analysis alone, is not sufficient to establish findings and results about networks. The lecture will include both a PowerPoint presentation and handouts. It will begin with definitions and illustrations of social network analysis data, sociograms, and data analysis, followed by several examples showing how the analysis and the sociograms are intertwined in the findings and results. The lecture is built on a review of the research on how the sociogram and the data analysis have historically been used together.
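A small illustration (not the presenter's materials) of pairing a sociogram with its underlying analysis, using a made-up advice network; it draws the map with networkx and reports degree centrality so the picture and the numbers can be read together:

import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical "who seeks advice from whom" ties among six staff members
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F")]
G = nx.Graph(edges)

centrality = nx.degree_centrality(G)           # the analysis result
pos = nx.spring_layout(G, seed=1)              # layout coordinates for the sociogram

# Node size encodes centrality, so the map and the scores tell the same story:
# C bridges the triangle (A, B, C) and the chain (C, D, E, F) and has the most ties.
nx.draw_networkx(G, pos, node_size=[3000 * centrality[n] for n in G.nodes()])
plt.axis("off")
plt.show()

print(sorted(centrality.items(), key=lambda kv: -kv[1]))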

Session Title: Evaluating Government-Sponsored Education Programs
Multipaper Session 856 to be held in CROCKETT D on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Nicole Vicinanza,  JBS International Inc, nvicinanza@jbsinternational.com
Improving Outcomes in Learning at the Defense Language Institute: How Program Evaluation is Contributing to the Increased Language Proficiency of Military Linguists
Presenter(s):
Melody Wall, Defense Language Institute, melody.wall@us.army.mil
Abstract: The Defense Language Institute Foreign Language Center annually hosts thousands of military members from all service branches learning more than 20 languages. The Institute has over the last five years implemented a robust plan consisting of numerous initiatives designed to improve the quality of learning, with $362 million in funding provided to support them. An effort of this size demands a vigorous evaluation plan to ensure that resources are being well spent. A team of evaluators is currently working at the Institute to accomplish this goal. The team has successfully used the participatory evaluation approach to enlist the interest and support of stakeholders, and has seen both formal and informal successes. This presentation will provide an overview of the participatory process at DLIFLC, the reactions to this approach from work groups, and examples of how outcomes and behaviors have changed and improved as a result of the evaluation process.
The National Aeronautics and Space Administration's (NASA) Informal Education Program Evaluation: The Role of Context in Evaluation
Presenter(s):
Alyssa Rulf Fountain, Abt Associates Inc, alyssa_rulf_fountain@abtassoc.com
Hilary Rhodes, Abt Associates Inc, hilary_rhodes@abtassoc.com
Abigail Jurist Levy, Education Development Center, alevy@edc.org
Abstract: NASA’s informal education projects are designed to inspire and engage individuals of all ages in NASA’s mission. The evaluation of NASA’s Informal Education Program was designed to document a range of activities in the informal education portfolio as a means to better understand the effectiveness of the projects funded by the Office of Education. Given the existing contextual program factors, the evaluation employed qualitative methods to gather descriptive information about NASA’s informal education projects. First, the evaluation team conducted a comprehensive review of NASA’s investments in informal education by creating profiles of the projects serving the informal education community. The evaluation will also provide NASA’s Office of Education with information about selected projects’ sustainability, reach into their respective communities, use of NASA resources and materials, progress toward stated goals, and development of strategic partnerships. The results of this evaluation are intended to ultimately inform future funding decisions.

Session Title: Designing Surveys for Use in Needs Assessments
Multipaper Session 857 to be held in SEGUIN B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Needs Assessment TIG
Chair(s):
Lora Warner,  University of Wisconsin, Green Bay, warnerl@uwgb.edu
Discussant(s):
Ann Del Vecchio,  Alpha Assessment Associates, delvecchio.nm@comcast.net
The Effects of Scale Forms on Perceived Needs: An Example From a Study of Identifying Essential Competencies for Program Evaluators
Presenter(s):
Yi-Fang Lee, National Chi Nan University, ivanalee@ncnu.edu.tw
James Altschuld, The Ohio State University, altschuld.1@osu.edu
Abstract: While traditional needs assessment (NA) is based on measuring the discrepancy between desired and current states, using only one scale (desired) is common practice. A literature review indicated that there has been little empirical study comparing single and double formats. The intent of this presentation is to explore whether single-, double-, or triple-scaled instruments influence the way in which respondents rate importance or desired states. The surveys were developed for a study of the essential competencies for program evaluators. The sample consisted of 150 evaluators who participated in university evaluation programs organized by the Higher Education Evaluation and Accreditation Council in Taiwan. The content of the forms consisted of the competencies noted above, rated in relation to respondents' perceived needs. Recommendations are drawn from the findings.
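As a toy illustration of the discrepancy logic that a double-scaled format makes possible (made-up ratings, not the study's data):

import pandas as pd

ratings = pd.DataFrame({
    "competency":    ["negotiation", "data analysis", "reporting"],
    "desired_state": [4.6, 4.8, 4.2],   # a single-scale (desired-only) form stops here
    "current_state": [3.1, 4.1, 3.9],   # the second scale enables a discrepancy score
})

# Classic needs-assessment discrepancy: need = desired state minus current state
ratings["need"] = ratings["desired_state"] - ratings["current_state"]
print(ratings.sort_values("need", ascending=False))   # rank needs by gap size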
Using Online Surveys to Assess Information Needs of Healthcare Professionals in Low Resource Settings: How is Data Quality Ensured?
Presenter(s):
Saori Ohkubo, Johns Hopkins University, sohkubo@jhuccp.org
Tara Sullivan, Johns Hopkins University, tsulliva@jhuccp.org
Abstract: Conducting online surveys offers many benefits to program evaluators in a wide range of disciplines. However, the method is neither widespread nor proven as a way to reach highly diverse populations in areas without reliable Internet connections. K4Health, a global knowledge management project, has overcome these limitations by instituting numerous quality control mechanisms in the design and implementation of online surveys to systematically assess the information needs of healthcare professionals working in resource-poor settings. K4Health's global online needs assessment was conducted in three languages (English, French, and Spanish) over a one-month period in spring 2009, and collected 925 responses from various professionals in 110 predominantly low- and middle-income countries. Survey results were triangulated with findings from other components of a broader needs assessment and effectively informed the design of a global knowledge management project aiming to bring credible, relevant, and usable information to key audiences, including community health workers in isolated locations.

Session Title: Evaluating Peace Building and Conflict Prevention in Fragile States
Multipaper Session 858 to be held in REPUBLIC A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Krishna Kumar,  United States Department of State, kumark@state.gov
Building the Architecture for Assessing the Performance of Peace Building
Presenter(s):
Tjip Walker, United States Agency for International Development, stwalker@usaid.gov
Abstract: One of the stiffest challenges facing those committed to preventing violent conflict or promoting post-conflict recovery is answering the simple question, "what works?" At present there is a raft of competing proposals, ranging from changing attitudes to delivering social services more equitably to sustaining processes of negotiation and reconciliation, and more. With few exceptions, these proposals are justified more by conviction than by evidence that they work. To address this problem, USAID's Office of Conflict Management and Mitigation is undertaking a project focused on the Theories of Change underlying conflict programs. The assumption is that, superficial diversity notwithstanding, conflict programs are based on a limited number of theories. If we distill these theories and develop appropriate performance measures for each, we then have the architecture to systematically compare programs and determine what works. This presentation will describe progress to date in building this architecture and introducing it into USAID-supported programs.

Session Title: Successes and Lessons Learned From Evaluations of Long-Term HIV/AIDS Programs
Multipaper Session 859 to be held in REPUBLIC B on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Miles McNall,  Michigan State University, mcnall@msu.edu
Using Scorecards to Increase Compliance Among Medical Care Providers Working in AIDS Care
Presenter(s):
Jennifer Catrambone, Ruth M Rothstein CORE Center, jcamacho@corecenter.org
Abstract: The Ruth M. Rothstein CORE Center (CORE) of Chicago is a large free-standing ambulatory care center that serves over 5,000 patients infected with or affected by HIV/AIDS. This paper details one of our ongoing quality improvement efforts currently receiving national attention: the Provider-Specific Scorecard, in which a random 10% of patients' charts are selected and evaluated by members of CORE's Quality Improvement Committee on 42 quality-of-care and quality-of-record indicators. The results are then grouped by medical care provider, and each provider receives a scorecard detailing the findings for their patients' charts and comparing those data with aggregate data from the rest of CORE's providers. The providers' reactions to this effort, as well as the dramatic improvements resulting from it over the past six years, will be discussed.
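A simplified sketch of the scorecard roll-up described above, using assumed file and column names (one row per sampled chart, one 0/1 column per indicator); it is illustrative rather than CORE's actual reporting code:

import pandas as pd

charts = pd.read_csv("sampled_charts.csv")      # hypothetical 10% chart sample
indicator_cols = [c for c in charts.columns if c.startswith("ind_")]   # e.g. ind_01 .. ind_42

# Per-provider rate of charts meeting each indicator, plus the clinic-wide benchmark
provider_scores = charts.groupby("provider")[indicator_cols].mean()
clinic_average = charts[indicator_cols].mean()

# Each provider's scorecard: own rate minus the aggregate rate, indicator by indicator
scorecards = provider_scores.subtract(clinic_average, axis="columns")
print(scorecards.round(2))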
Evaluating the Value of Capacity Building in Enhancing Aid Effectiveness: Key Findings and Lessons From the Largest President’s Emergency Plan for AIDS Relief (PEPFAR) Grants Management Program in Africa
Presenter(s):
Rita Sonko-Najjemba, Pact Inc, rsonko@pactworld.org
Lynn McCoy, Pact Inc, lmccoy@pactworld.org
Abstract: A lack of evidence from the field has fueled continuing debate within the aid community about the link between capacity building and aid effectiveness. This external evaluation of Pact South Africa's grants management program, the largest PEPFAR-funded program, presents important lessons on combining aid effectiveness and sustainability through a comprehensive capacity building program for civil society organisations. It provides abundant evidence of the program's effectiveness in strengthening the organizational capacity of grantee organisations and enhancing their ability to use donor funding to achieve rapid growth in the scale, reach, and quality of HIV and AIDS services across South Africa. The results are indicative of the essential role of capacity building as a key component in enhancing aid effectiveness and maximizing beneficiary outcomes.

Session Title: Let Quality Guide Evaluation Quality: Recent Trends Implemented in the Middle East
Multipaper Session 860 to be held in REPUBLIC C on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Theories of Evaluation TIG and the Assessment in Higher Education TIG
Chair(s):
Eqbal Darandari,  King Saud University, e_darandari@hotmail.com
Discussant(s):
Tahira Hoke,  Prince Sultan University, tahirahoke@gmail.com
An Eclectic Approach to Evaluation for Higher Education Institutions: Lessons Learned in Saudi Arabia
Presenter(s):
Eqbal Darandari, King Saud University, e_darandari@hotmail.com
Tahira Hoke, Prince Sultan University, tahirahoke@gmail.com
Abstract: From empowerment to black box evaluation, different approaches are required to increase the likelihood that higher education institutions in Saudi Arabia will achieve their accreditation goals. Depending on the stage of accreditation, quality assurance directors should employ the evaluation methods most appropriate for the size, type, and stage of development of a higher education institution. This paper features lessons learned by quality assurance directors in Saudi Arabia who are actively engaged in capacity building and outcome-based research to ensure standards of quality are met. In order to avoid extremes with any single approach, theoretical propositions for future research projects are recommended.
