Session Title: What Constitutes Quality Evaluation of Development and Social Change: Values, Standards, Tradeoffs, and Consequences
Panel Session 202 to be held in Lone Star A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Presidential Strand
Chair(s):
Indran Naidoo, Office of the South African Public Service Commission, indrann@opsc.gov.za
Discussant(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Abstract: How we view the processes of social change and development and what we consider quality evaluation of interventions to achieve these are inextricably linked. There is a risk that efforts to improve the quality of development evaluations will only support the evaluation of standardized, simple interventions. This may divert attention and resources from innovations that are more complex, i.e., emergent and unpredictable, and therefore inherently risky, yet critical if development is to be sustained and respond to urgent issues that require innovative responses. We need to move beyond the ongoing paradigmatic ‘tug-of-war’ about methodology to a deeper understanding of how change happens, better ways of evaluating ‘the complex’ without compromising quality standards, and a better understanding of the ways in which evaluation itself has an impact on the processes of social change. This issue has implications for other interventions that seek to bring about sustainable structural change.
Session Title: Implementation From the Ground Up: Defining, Promoting, and Sustaining Fidelity at All Levels of a State Program
Panel Session 203 to be held in Lone Star B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Elizabeth Oyer, EvalSolutions Inc, eoyer@evalsolutions.com
Abstract: Fidelity is the extent to which the intervention, as realized, is “faithful” to the pre-stated model. Measuring implementation fidelity provides data for understanding the overall impact of the program. Presenters will discuss the issues around developing a state framework for evaluating the Illinois Mathematics and Science Program (IMSP) and the policies and resources that are needed to sustain and scale up the initiative. Site evaluators will discuss tools and analyses for formative and summative evaluation of progress toward state goals for IMSP; the evaluation employs a comprehensive site visit protocol to create profiles of the local grants and to develop themes across grants for understanding implementation of the program. Evaluators will discuss the tools for the site visit as well as the results from the 2007-2008 and 2008-2009 program evaluations. Finally, the George Williams College of Aurora University MSP project evaluator will discuss the local evaluation of implementation for the IMSP.
Session Title: The Essential Features of Collaborative, Participatory, and Empowerment Evaluation
Multipaper Session 204 to be held in Lone Star C on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Session Title: Nonprofit Rating Systems and Implications for Evaluation
Think Tank Session 205 to be held in Lone Star D on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Debra Natenshon, The Center for What Works, debra@whatworks.org
Johanna Morariu, Innovation Network, jmorariu@innonet.org
Abstract: In the past few years, efforts to use common measures to assess and compare nonprofit performance seem to have multiplied. Interest in comparing nonprofit performance is in a dramatic upswing, and new or different sets of common measures seem to emerge frequently. Some sets of measures have been developed for niche fields, while others seek to compare across the entire sector. As evaluators, we should be aware of these efforts and of their possible implications. This think tank seeks to explore a number of questions related to nonprofit rating systems and common measures, e.g.: Is it possible to develop meaningful common measures for a field as diverse as the nonprofit sector? What can we learn from the experiences of fairly well-known, sector-wide approaches such as Charity Navigator and GreatNonprofits? Considering what we know about existing approaches, what is the effect on traditional program evaluation?
Session Title: Comparative Effectiveness Research in Program Evaluation
Panel Session 206 to be held in Lone Star E on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
James Michael Menke, University of Arizona, menke@email.arizona.edu
Abstract: Interest in comparative effectiveness research (CER) is increasing rapidly. Although the main focus of interest is in medicine, pressure toward, perhaps even demand for, CER in other areas will almost certainly follow, probably led by health services research and quickly followed by education and then other policy areas. CER poses particular problems in all research, and program evaluation will be no exception. It will eventually not be sufficient simply to conclude that some intervention has positive effects; it will be required that those effects be shown to be as good as or better than those of alternative interventions or even alternative policy strategies. CER poses special challenges with respect to conceptual and design issues and for appropriate statistical analysis and interpretation of findings. Some of these challenges are described, along with proposed solutions, and illustrations of their applications are presented.
Session Title: Serving Two Masters: Local Evaluators Trying to Maintain Evaluation Quality and Use While Participating in a National Multi-site Evaluation
Panel Session 207 to be held in Lone Star F on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Tom Kelly, Annie E Casey Foundation, tkelly@aecf.org
Discussant(s):
Mary Achatz, Westat, maryachatz@westat.com
Abstract: Multi-site initiatives are complex programs to implement, and evaluating them can be an even more complex undertaking. Making Connections is a community change initiative (CCI) in 10 urban neighborhoods across the U.S. A key component of its national cross-site evaluation has been the implementation of local evaluations that are relevant to and integrated in the neighborhood work on the ground. These local evaluations have been responsible not only for collecting data on the implementation and outcomes of the initiative but also for building community capacity to understand and use data for learning and accountability, while also contributing to the cross-site evaluation. The local evaluators have had to navigate the multiple challenges, demands, and differential capacities of local evaluation clients, the national funder, and the cross-site evaluators. This panel will identify lessons learned in strengthening local evaluation quality and relevance.
Roundtable Rotation I: Truth, Beauty, And Justice For All: A Conversation With Graduate Students Examining Issues of Power, Control, and Evaluation Quality Within the Realm of Graduate Assistantships
Roundtable Presentation 208 to be held in MISSION A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Ayesha Boyce, University of Illinois at Urbana-Champaign, boyce3@illinois.edu
Maria Jimenez, University of Illinois at Urbana-Champaign, mjimene2@illinois.edu
Jeehae Ahn, University of Illinois at Urbana-Champaign, jahn1@illinois.edu
Holly Downs, University of Illinois at Urbana-Champaign, hadowns@illinois.edu
Abstract: Graduate students are often limited in their capacity to create effective change due to role limitations and boundaries. How, then, do graduate students conduct high quality evaluations within the power constraints of their assistantships? This roundtable seeks to begin a conversation, starting with four evaluation graduate students who differ in ethnicity (Black, Latina, Asian, and White), school status, evaluation experience, and perspectives on the matter. With the understanding that most graduate students lack power and control over the design and implementation of evaluations, we invite other graduate students, novice evaluators, and experts to join the conversation. The roundtable participants will attempt to clarify how to navigate various graduate student roles and values, all while being responsive to stakeholder and audience needs in order to conduct evaluations of high quality.
Roundtable Rotation II: The Role of Evaluation and Research Support in Ensuring Evaluation Quality
Roundtable Presentation 208 to be held in MISSION A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Laricia Longworth-Reed, University of Denver, laricia.longworth-reed@du.edu
Kathryn Schroeder, University of Denver, kathryn.schroeder@du.edu
Anna de Guzman, University of Denver, anna.deguzman@du.edu
Abstract: The impact of research and evaluation assistants on evaluation quality is an important topic for the evaluation field. The purpose of the current presentation is to explore how research and evaluation assistants contribute to quality evaluation through their skill sets, to discuss how those skills can be developed to mutually benefit assistants and evaluators, to examine the role of research and evaluation assistants as new evaluators, and to further explore the roles of support staff in guaranteeing quality evaluation.
Session Title: Evaluating the Intervention: A Look at Clinical Treatments and Client Implications
Multipaper Session 209 to be held in MISSION B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Social Work TIG
Chair(s):
Derrick Gervin, The Evaluation Group, derrick@evaluationgroup.com
Jenny Jones, Virginia Commonwealth University, jljones@vcu.edu
Session Title: Using a Multi-stage, Mixed Methods Approach to Improve the Design of System Change Evaluations
Multipaper Session 210 to be held in BOWIE A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Beth Stevens, Mathematica Policy Research, bstevens@mathematica-mpr.com
Session Title: Third Annual Asa G. Hilliard III Think Tank on Culture and Evaluation
Think Tank Session 211 to be held in BOWIE B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Cindy A Crusto, Yale University, cindy.crusto@yale.edu
Discussant(s):
Julie Nielsen, NorthPoint Health and Wellness Center Inc, niels048@umn.edu
Katherine A Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Joanne Farley, Human Development Institute, joanne.farley@uky.edu
Abstract: This annual session recognizes the important and relevant contributions of Dr. Asa G. Hilliard III, an African American professor of educational psychology and African history, to the field of evaluation generally, and to culturally competent and responsive evaluation specifically. We will first provide a brief overview of Dr. Hilliard's life's work and then describe an evaluation that elevated and exemplified his understanding of the importance of “carrying oneself with a deep historical consciousness” and of cultural values and socialization. We will then work in small groups to analyze evaluation scenarios to explore how Afrocentric approaches to research and evaluation (truth, commitment, justice, community, and harmony) might guide evaluation from the beginning to the end of the process. Finally, we will reconvene as a large group for a facilitated discussion to translate the learning derived across all of the small groups and to explore how these constructs impact evaluation quality in theory and practice.
Session Title: Maintaining Quality in Challenging Contexts
Multipaper Session 212 to be held in BOWIE C on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Julia Coffman, Center for Evaluation Innovation, jcoffman@evaluationexchange.org
Roundtable Rotation I: Engaging Social Justice in a Graduate Course on Program Evaluation
Roundtable Presentation 213 to be held in GOLIAD on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Teaching of Evaluation TIG and the Multiethnic Issues in Evaluation TIG
Presenter(s):
Leanne Kallemeyn, Loyola University, Chicago, lkallemeyn@luc.edu
Abstract: The purpose of this roundtable will be to share how issues of social justice are intentionally woven into a graduate course on program evaluation. The substance of what is taught, as well as the pedagogy of how it is taught, will aim to expose students to social justice as it relates to evaluation practice. I will share a syllabus that incorporates readings from evaluation theorists who address social justice, as well as course assignments designed to engage these readings. I will also consider how a course project involving an actual evaluation may serve as an experiential component for learning about social justice and evaluation.
Roundtable Rotation II: Issues of Quality: Guiding Principles for Culturally Competent Teaching and Practice
Roundtable Presentation 213 to be held in GOLIAD on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Teaching of Evaluation TIG and the Multiethnic Issues in Evaluation TIG
Presenter(s):
Arthur Hernandez, Texas A&M University, art.hernandez@tamucc.edu
JoAnn Yuen, University of Hawaii, Manoa, joyuen@hawaii.edu
Abstract: This roundtable will provide an opportunity for individuals interested in promoting and teaching about cultural competence, as part of a general preparation in evaluation methods, to share philosophy, practices, and challenges. The presenters will discuss evaluation theory and cultural competence from the perspectives of teaching (Hernandez) and practice (Yuen), providing general guidelines and suggested practices drawn from both successful and not-so-successful experience. Participants will discuss, elaborate, and offer suggestions related to their own interests and expertise, and the presenters will collect and organize the proceedings and email the results to interested participants in an effort to facilitate self-examination and further the development of evaluation skills related to teaching and practice for all involved.
Roundtable Rotation I: Integrating Website Use Analytics into a Mixed Method Evaluation of a Professional Development Website
Roundtable Presentation 214 to be held in SAN JACINTO on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Randahl Kirkendall, Carleton College, rkirkend@carleton.edu
Ellen Iverson, Carleton College, eiverson@carleton.edu
Monica Bruckner, Carleton College, mbruckne@carleton.edu
Abstract: On the Cutting Edge is a comprehensive program of workshops and related web-based resources that support professional development for geoscience faculty at all stages of their careers. The collection of online resources is referenced by those who attend the workshops, provides a venue for sharing teaching materials, and extends the reach of the program beyond those attending workshops. The evaluation of the program’s first five years involved integrating data from various surveys (large and small), interviews, focus groups, and web use statistics into a comprehensive report of the progress and outcomes of the program. The authors will describe how their team analyzed and integrated Google Analytics, server-based website statistics, and web page visit logs into the report. Additionally, they will pose several questions to the group regarding their experiences and ideas for using web analytics in program evaluation.
Roundtable Rotation II: Using Technology for Efficiency in Evaluation
Roundtable Presentation 214 to be held in SAN JACINTO on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Marycruz Diaz, WestEd, mdiaz@wested.org
Donna Winston, WestEd, dwinsto@wested.org
Abstract: This roundtable explores WestEd evaluators’ experiences using technology solutions to evaluate educational programs in California and examines potential technology platforms that could improve our work while saving time and money. In a time of economic constraints, when cuts to education funding have severely impacted how educators do their work, we have adapted to clients’ strategies for making do with fewer resources. One such strategy is the use of technology to replace in-person meetings and overcome travel freezes affecting California school districts. Technology solutions have cut travel costs and introduced new ways of working efficiently. As advances are made in technology, we must advance our arsenal of evaluation tools. We will discuss the ways we have used technology solutions in evaluating our clients’ work and the promising practices these solutions have unveiled for evaluating programs. These technologies have also raised questions about how we improve our evaluations.
Session Title: Project Management Software: An Important Multi-Purpose Tool in an Evaluation Unit’s Toolbox
Demonstration Session 216 to be held in TRAVIS B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Stacey Farber, Cincinnati Children's Hospital Medical Center, stacey.farber@cchmc.org
Tracy Gnadinger, Cincinnati Children's Hospital Medical Center, tracy.gnadinger@cchmc.org
Janet Matulis, Cincinnati Children's Hospital Medical Center, janet.matulis@cchmc.org
Abstract: Managers of evaluation units or complex, multi-resourced projects require data to properly run their business, ensure its success, and clearly articulate its value. Project management (PM) software can be an extremely valuable tool in a manager’s or staff person’s toolbox. The long-term benefits of using PM software may include improved resource allocation and distribution, overall service quality, team communication, and articulation of business value. However, success in utilizing and maximizing the benefits of PM software is grounded in an evaluation unit’s ability to operationalize its work and establish management norms. Through this demonstration presentation, one evaluation unit will share (a) the business needs that led to the adoption of a PM tool (specifically @task), (b) the implementation, use, and functions of the tool, and (c) lessons learned. This demonstration will be ideal for managers and staff who are considering, implementing, or looking to improve their use of a software solution for business management.
Session Title: Assessing Evaluation Capacity: Using the Evaluation Capacity Diagnostic Tool
Demonstration Session 217 to be held in TRAVIS C on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Lande Ajose, BTW Informing Change, lajose@informingchange.com
Kristi Kimball, William and Flora Hewlett Foundation, kkimball@hewlett.org
Abstract: This workshop will help evaluators determine a nonprofit’s readiness for evaluation. After presenting several stories of working with nonprofits with vastly different capacities, this session will share how to use the recently developed Evaluation Capacity Diagnostic Tool. Intended for nonprofits, this tool is designed to help organizations assess their readiness to take on many types of evaluation activities. It captures information on organizational context and the evaluation experience of staff, and can be used in various ways. The tool can pinpoint particularly strong areas of capacity as well as areas for improvement, and can calibrate changes over time in an organization’s evaluation capacity. In addition, this diagnostic can encourage staff to brainstorm about how their organization can enhance evaluation capacity by building on existing evaluation experience and skills. Finally, the tool can serve as a precursor to evaluation activities with an external evaluation consultant. This workshop is designed as a practicum that builds on the conference session Measuring the Immeasurable: Lessons for Building Grantee Capacity to Evaluate Hard-To-Assess Efforts.
Session Title: Improving Evaluation Quality Through and In the Arts
Multipaper Session 218 to be held in TRAVIS D on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Ashlee Lewis, University of South Carolina, ashleealewis@hotmail.com
Discussant(s):
Debra Smith, Lesley University, dsmith22@lesley.edu
Session Title: Mixed-Method Evaluation in Human Services Settings
Multipaper Session 219 to be held in INDEPENDENCE on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Cheryl Meyer, Wright State University, cheryl.meyer@wright.edu
Session Title: Back to the Basics and Beyond
Multipaper Session 220 to be held in PRESIDIO A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Raymond Hart, Georgia State University, rhart@gsu.edu
Session Title: The Power of Metaphor: Using Images for Organizational Analysis
Skill-Building Workshop 221 to be held in PRESIDIO B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Maggie Huff-Rousselle, Social Sectors Development Strategies, mhuffrousselle@ssds.net
Bonnie Shepard, Social Sectors Development Strategies, bshepard@ssds.net
Abstract: Using the American Evaluation Association as a practical unit of analysis, participants will practice an imaging technique that can be used as a qualitative method for highly participatory organizational evaluations. This technique has been used over the past 12 years with very different organizations working in Africa, Asia, and North America, and examples of the purposes served by the technique and the insights gained will be provided as part of the workshop. The imaging exercise provides rapid insights, via contrasting or similar themes, and can be an ideal icebreaker to launch an organizational self-analysis and strategic planning process, where the evaluation approaches and techniques merge into the design of interventions. The technique was inspired by Gareth Morgan’s classic book, “Images of Organization,” considered a breakthrough in thinking because of the ways in which it used metaphors to analyze and explain organizations.
Session Title: Evaluation of Youth Mental Health and Substance Abuse Interventions
Multipaper Session 222 to be held in PRESIDIO C on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Melissa Rivera, National Center for Prevention and Research Solutions, mrivera@ncprs.org
Roundtable Rotation I: Learning About Educational Reform From a Seven-Year Math-Science Partnership
Roundtable Presentation 223 to be held in BONHAM A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG and the Independent Consulting TIG
Presenter(s):
Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
Cara Ciminillo, University of Pittsburgh, ciminill@pitt.edu
Tracy Pelkowski, University of Pittsburgh, ceac@pitt.edu
Keith Trahan, University of Pittsburgh, ceac@pitt.edu
Yuanyuan Wang, University of Pittsburgh, ceac@pitt.edu
Gail Yamnitzky, University of Pittsburgh, ceac@pitt.edu
Rebecca Price, University of Pittsburgh, ceac@pitt.edu
Abstract: Bring together 53 school districts, four IHEs, four intermediate units (local agencies of the state department of education), three evaluation groups, and thousands of teachers and administrators; focus on changing culture, professional development, teaching, and learning; add millions of dollars, and what do you get? The NSF and the Education Department have funded Math-Science Partnerships (MSP) for a number of years, collectively designed to impact the math-science pipeline of qualified students through the PK-16 system while simultaneously increasing the quality of the math-science teacher workforce. This presentation and discussion summarize the extensive, collaborative research and evaluation efforts in the Southwest Pennsylvania MSP across seven years: what we have learned, how we learned it, and, importantly, what we were unable to learn from the evaluation and project. The session focuses on the findings of the evaluation but also offers insights about conducting longer-term, collaborative evaluation in the area of educational reform across complex and evolving systems.
Roundtable Rotation II: When Quality and Policy Collide in Evaluating Math-Science Partnership Programs: Strategies for Resolution
Roundtable Presentation 223 to be held in BONHAM A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG and the Independent Consulting TIG
Presenter(s):
MaryLynn Quartaroli, Professional Evaluation & Assessment Consultants, marylynn.quartaroli@nau.edu
Hollace Bristol, Coconino County Education Services Agency, hbristol@coconino.az.gov
Abstract: The US Department of Education’s Mathematics and Science Partnership (MSP) competitive grant programs encourage partnerships between local school districts and universities to collaboratively engage in professional development activities aimed at increasing teachers’ content knowledge and improving pedagogical practices. However, determining the quality of these programs can be a contested area in terms of what constitutes meaningful evidence for the stakeholders: the federal agency, state departments of education, local educational agencies, higher education instructional teams, and participating teachers and administrators. This conflict most often arises because the context for the evaluation differs so much across stakeholders: for the source(s) of funding, “quality-as-measured” is likely sufficient; for the local agency, instructional team, and teachers, “quality-as-experienced” may be more important and useful. In these circumstances, the independent evaluator can find it problematic to provide high quality evaluations and to maintain high quality relationships with both levels of funding agencies. This session will examine these critical issues.
Session Title: How Did You Do It? Implementing Performance Measurement and Monitoring Systems
Multipaper Session 224 to be held in BONHAM B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org
Session Title: Evaluating K-8 Literacy Programs: Methods and Models
Multipaper Session 225 to be held in BONHAM C on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
Discussant(s):
Lizanne Destefano, University of Illinois at Urbana-Champaign, destefan@illinois.edu
Session Title: Methods Leading to Higher Quality Evaluations in Education Evaluation
Multipaper Session 226 to be held in BONHAM D on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Tom McKlin, The Findings Group LLC, tom.mcklin@gmail.com
Discussant(s):
Anane Olatunji, Fairfax County Public Schools, aolatunji@fcps.edu
Session Title: Enhancing the Quality of Evaluation Through Collaboration Among Funders, Programs, and Evaluators: The Example of the New York City Health Bucks Program Evaluation
Panel Session 227 to be held in BONHAM E on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Jan Jernigan, Centers for Disease Control and Prevention, ddq8@cdc.gov
Abstract: In response to the growing public health crisis of childhood obesity, the Division of Nutrition, Physical Activity, and Obesity at the Centers for Disease Control and Prevention (CDC) is working to identify promising local obesity prevention and control interventions. In support of this activity, CDC has funded Abt Associates Inc. to conduct a process and outcome evaluation of one such program, New York City Health Bucks, an innovative financial incentive program operated by the New York City Department of Health and Mental Hygiene (DOHMH) to increase access to and purchase of fresh fruits and vegetables in three high-need, underserved NYC neighborhoods. Panel presenters from CDC, DOHMH, and Abt will discuss how collaboration among these key entities has led to the design and implementation of a high-quality, methodologically sound evaluation, which builds on DOHMH’s prior evaluation efforts and will inform both CDC and other localities interested in implementing similar initiatives.
Session Title: Methodological Choices in Assessing the Quality and Strength of Evidence on Effectiveness
Panel Session 228 to be held in Texas A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov
Abstract: This panel aims to explore the methodological choices evaluators face in attempting to review a body of evaluation evidence to learn “what works”, i.e., which interventions or approaches are effective in achieving a given outcome. When asked to assess a new initiative to identify effective social interventions, GAO discovered that six federally supported efforts with the same basic purpose had been operating in diverse content areas for several years. While all seven evaluation reviews assess evaluation quality against similar social science research standards, some reviews included additional criteria or gave greater emphasis to some issues than others. They also differed prominently in the approaches they took to the next step: synthesizing credible evaluation evidence to draw conclusions about whether an intervention was effective or not. This panel will explore the methodological choices such efforts face and what features of the evaluations or context influenced their decisions.
Session Title: Evaluating With Validity: Truth, Justice, and the Beautiful Way
Panel Session 229 to be held in Texas B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Theories of Evaluation TIG
Chair(s):
James Griffith, Claremont Graduate University, james.griffith@cgu.edu
Discussant(s):
Ernest House, University of Colorado, ernie.house@colorado.edu
Abstract: In Evaluating with Validity (1980), House proposed three standards for evaluation: truth, justice, and beauty. He argued that, if one must choose between these standards, justice always comes first, then truth, and finally beauty. In discussing these standards, House draws on contemporary social scientists as well as on philosophers both contemporary and classical. Three evaluators with differing perspectives will consider how the contemporary historical context and current theoretical notions support or undermine House’s perspective.
Session Title: Dealing With Technical Challenges in Mixed Methods Evaluation
Multipaper Session 230 to be held in Texas C on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the
Chair(s):
Virginia Dick, University of Georgia, vdick@cviog.uga.edu
Discussant(s):
Susan Labin, Independent Consultant, susan@susanlabin.com
Session Title: Analysis and Evaluation of Research Portfolios Using Quantitative Science Metrics: Practice
Panel Session 231 to be held in Texas D on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Laurel Haak, Discovery Logic, laurel.haak@thomsonreuters.com
Abstract: Increasingly, organizations involved in research and technology development are interested in applying quantitative approaches to evaluate research program impact on participants and to assess whether programs are achieving their stated mission. Science metrics can be leveraged to complement qualitative evaluation methodologies; they include bibliometrics, the use of publication and citation information to derive measures of performance and quality, as well as other direct measures such as funding amounts and public health impact. In this panel, practical applications of metrics to the evaluation of research programs will be discussed. In particular, we will discuss the use of bibliometrics in different evaluation settings, the development of novel metrics to address evaluation goals, and the use of metrics that accommodate differences in the temporal aspect of research portfolio outcomes.
Session Title: Why Evaluators Need Graphic Design Skills
Demonstration Session 232 to be held in Texas E on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Use TIG
Presenter(s):
Stephanie Evergreen, Western Michigan University, stephanie.evergreen@wmich.edu
Abstract: Evaluators need graphic design skills so that findings can be made into more than doorstops or dust collectors. Building on Jane Davidson’s report-formatting suggestions and bringing in best practices from the field of graphic design, the demonstration will illustrate the power of presenting findings in ways that are specifically intended to resonate with the audience. Page layout, use of high quality images, and decluttered slideshows will help evaluators move out of the “death by PowerPoint” business as usual and into presentations of findings and recommendations that make a lasting impression. Both presentations and written reports will be addressed, particularly how their tandem, non-redundant use can strengthen both pieces of communication. The demonstration platform will allow for ample examples of before/after work, orientation to web tools that support this endeavor, and comparisons to existing guidelines for increasing use of evaluation reports.
Session Title: New Directions for Research on Evaluation
Panel Session 233 to be held in Texas F on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
John LaVelle, Claremont Graduate University, john.lavelle@cgu.edu
Abstract: In recent years the practice of evaluation has grown substantially, as evidenced by a rise in the number of professional evaluation organizations, AEA membership, the demand for evaluation services, and the number of universities offering training in evaluation (LaVelle & Donaldson, 2010). In tandem, a growing emphasis has been placed on the importance of research on evaluation. Mark (2007) proposed an organizing taxonomy that suggests four major areas that researchers of evaluation might explore: Context, Activities, Consequences, and Professional Issues. This panel will provide an introduction to Mark’s taxonomy, highlight current research in each section of the taxonomy, and provide an opportunity for the audience to brainstorm with the presenters to generate specific research ideas to guide future efforts.
Session Title: Respecting and Protecting Boundaries: Social Evaluation Competencies
Think Tank Session 234 to be held in CROCKETT A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the
Presenter(s):
Phyllis Clay, Albuquerque Public Schools, phyllis.clay@aps.edu
Discussant(s):
Ranjana Damle, Albuquerque Public Schools, damle@aps.edu
River Dunavin, Albuquerque Public Schools, dunivan_r@aps.edu
Debra Heath, Albuquerque Public Schools, heath_d@aps.edu
Nancy Carillo, Albuquerque Public Schools, carrillo_n@aps.edu
Abstract: Have you ever wondered if you’ve over-extended your boundaries and crossed the line into the evaluand’s territory? On the other hand, have you felt taken advantage of by an evaluee who expects you to do just one more small thing at the last minute? The purpose of this think tank is to provide participants with an opportunity to become more aware of their own approach(es) to boundary setting within their evaluation responsibilities and to explore alternatives in a collegial setting. Facilitators will briefly introduce the topic by highlighting situations in their own work in which they have struggled to keep boundaries clear. Groups will form for participants to discuss personal evaluation boundary situations and potential alternatives for protecting our own boundaries and respecting the boundaries of the programs we evaluate as well as those of the people within those programs. Highlights of the group discussions will be reported.
Session Title: Evaluation Utilization and the Story of the Federal Railroad Administration’s 10 Year Research and Development Effort to Change Safety Culture in the United States Rail Industry
Panel Session 235 to be held in CROCKETT B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Business and Industry TIG
Chair(s):
Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Discussant(s):
Deborah Bonnet, Fulcrum Corporation, dbonnet@fulcrum-corp.com
Abstract: Over the last 10 years, the Human Factors group within the FRA’s R&D Division has been implementing and evaluating a series of innovative projects to change safety culture in the railroad industry from one focused on blame to one focused on cooperative problem solving. The motivation was to improve safety beyond the limits of what could be done via technological and rule-based procedural changes alone. Each of six programs approached the challenge in a different manner, but they all focused on safety culture and the engagement process to support cooperative problem solving and root cause analysis. The collective finding was that safety culture can be changed, that cooperative problem solving can be done, that root causes can be identified, and ultimately, that safety can be improved. As a result of these findings, the FRA has embarked on a deliberate effort to promote these kinds of programs in the railroad industry.
Session Title: Assessing Impacts in Real World Evaluations: Alternatives to the Conventional Statistical Counterfactual
Think Tank Session 236 to be held in CROCKETT C on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
Discussant(s):
Jim Rugh, Independent Consultant, jimrugh@mindspring.com
Megan Steinke, Save the Children, msteinke@savechildren.org
J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
Abstract: Only a small fraction of program evaluations can estimate impacts using a statistically defined counterfactual. However, it is widely recognized that the absence of a methodology for defining and testing alternative possible explanations (rival hypotheses) of the observed changes in the project population increases the risk of biased or unreliable estimates of project effects. So what advice can we offer evaluators on alternatives to the conventional statistical counterfactual? The proposed session is a follow-up to a 2009 AEA think tank, attended by over 50 participants, in which a range of quantitative, mixed methods, and theory-based approaches to defining alternative counterfactuals, grounded in participants’ own experience in the field, were identified. There has been active follow-up resulting in the documentation of these alternative approaches. The 2010 think tank will build on the approaches and challenges identified in Orlando and will explore methodological questions relating to these innovative approaches.
Session Title: Evaluation and Quality: Examples From Government
Multipaper Session 237 to be held in CROCKETT D on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Sam Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
Session Title: Navigating the Intricacies of Culture and Context in International Program Evaluation
Multipaper Session 238 to be held in REPUBLIC A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Mary Crave, University of Wisconsin, crave@conted.uwex.edu
Session Title: Evaluating Health Program Sustainability: Improving the Quality of Methods and Measures
Think Tank Session 239 to be held in REPUBLIC B on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Health Evaluation TIG and the Non-profit and Foundations Evaluation TIG
Presenter(s):
Mary Ann Scheirer, Scheirer Consulting, maryann@scheirerconsulting.com
Abstract: This think tank will discuss methods and measures to evaluate health program sustainability in the context of program life cycles. What happens to programs funded by foundations and governmental entities after their initial funding has ended? We will briefly review the state of recent evaluations addressing sustainability, then facilitate discussion among participants around key topics concerning the quality of sustainability evaluation. What methods are most appropriate and feasible for collecting data about sustainability? What sustainability outcomes should be measured, and how can we assess the predictors or facilitators of sustainability? This session will help to further develop the paradigms for addressing the quality of evaluation in this relatively new content area.
Session Title: Using a Practical Lens to Develop National-Level Participatory Projects
Demonstration Session 240 to be held in REPUBLIC C on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Tobi Lippin, New Perspectives Consulting Group, tobi@newperspectivesinc.org
Thomas McQuiston, Tony Mazzocchi Center for Health, Safety and Environment, tmcquiston@uswtmc.org
Kristin Bradley-Bull, New Perspectives Consulting Group, kristin@newperspectivesinc.org
Abstract: How can participatory evaluators successfully facilitate high-quality evaluation of programs that are national in scope, including ensuring meaningful participation by frontline program staff and program participants? Join this session for a practical look at some of the key strategies we have developed over a decade of facilitating these kinds of evaluations and assessments with teams composed of labor union staff, rank-and-file workers, and external consultants. We will discuss some of the broader participatory strategies we use and the types of evaluation projects we conduct (national in scope and designed to leverage change at the worksite, industry, and national policy levels). Then, we will walk through the specifics of how we cultivate and maintain a “representative” team from various areas of the country; often rely on an organizational unit of analysis; and tap additional opportunities to gain broader input during the evaluation process.