Session Title: Improving the Quality of Evaluation Practice by Attending to Context
Panel Session 102 to be held in Lone Star A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Discussant(s):
Eleanor Chelimsky, Independent Consultant, eleanor.chelimsky@gmail.com
Abstract: This panel, composed of the three individuals involved in developing the 2009 AEA Conference Presidential Strand on context, will draw on publications developed from the strand to explore the ways in which attending to context can improve the quality of evaluation practice. Context influences how we as evaluators approach and design our studies, how we carry them out, and how we report our findings. Using the five aspects of context covered by Rog in her Presidential address (the nature of the problem being addressed, the context of the intervention being examined, the setting and broader environment in which the intervention is being studied, the parameters of the evaluation itself, and the broader decision-making context), the panel will explore the ways in which attending to these areas and the dimensions within them (physical, organizational, social, cultural, traditional, and historical) may heighten the quality of evaluation practice.
Session Title: Waawiyeyaa (Circular) Evaluation Tool
Skill-Building Workshop 103 to be held in Lone Star B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Presenter(s):
Andrea LK Johnston, Johnston Research Inc, andrea@johnstonresearch.ca
Abstract: Developed by Johnston Research Inc., this holistic evaluation tool, grounded in Anishnawbe traditional knowledge, was created for program providers. It is a self-evaluation tool that allows programs to document both meaningful processes and outcomes over time, and a learning tool that promotes growth and self-development among program participants. By applying the tool at various program milestones, a full picture of participants' personal journeys can be documented in a systematic manner. The traditional knowledge tool provides a framework to which program participants can easily relate, and participants like the tool because the storytelling is driven by them, through their own eyes and at their own pace. We will review the manual, watch the 20-minute video, complete the paper-and-crayon exercise, and incorporate our stories into an evaluation report. You will take home your story, as well as a copy of the DVD and manual. We offer additional training.
Session Title: Building a Learning Culture Within Community Initiatives and Organizations
Panel Session 104 to be held in Lone Star C on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
David Scheie, Touchstone Center for Collaborative Inquiry, dscheie@touchstone-center.com
Discussant(s):
Prudence Brown, Independent Consultant, pruebrown@aol.com
Abstract: This session explores principles and strategies for establishing a learning culture within initiatives and organizations, particularly in an urban community development context. Three case examples will be presented, each with aspirations of creating a lively, participatory learning and evaluation culture within a community organization or initiative. Ways to engage staff and project participants in planning, data collection, analysis, and reflection will be examined, along with tensions and pitfalls encountered in the three cases. Challenges in navigating race and class issues (e.g., multicultural and lower-income contexts, professional-citizen differentials in authority and credibility, and divergent interests among staff, participants, and sponsoring funders) will be given special attention. Timelines required and stages encountered in the effort to develop an effective participatory learning culture will be explored. A discussant with experience in many other learning and community change initiatives will put lessons from these three cases into broader national perspective.
Session Title: Successful Outcome Measurement in Nonprofits: Overcoming Challenges
Think Tank Session 105 to be held in Lone Star D on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Lora Warner, University of Wisconsin, Green Bay, warnerl@uwgb.edu
Abstract: How can a nonprofit organization overcome common challenges and successfully implement an effective outcome measurement system? We will present an actual case study of a local nonprofit organization attempting to implement outcome measurement. The case study will illustrate common challenges faced by nonprofit organizations as they develop an outcome measurement system. Small group breakouts will each discuss a typical challenge, including: (1) gaining the commitment of the board and top leaders; (2) increasing the organization's evaluation capacity (staff time and expertise); (3) identifying which outcomes to measure and developing simple, useful tools; (4) collecting and managing data efficiently; (5) using data to learn and improve the program; and (6) integrating outcome data with existing management systems. Through this think tank, participants will explore strategies to overcome each challenge and will take away new ideas on how to implement successful outcome measurement in nonprofits.
Session Title: Place Randomized Trials and Alternatives in a Field Setting: Examples From Psychotherapy Research
Panel Session 106 to be held in Lone Star E on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Lee Sechrest, University of Arizona, sechrest@u.arizona.edu
Discussant(s):
Frederick Newman, Florida International University, newmanf@fiu.edu
Abstract: Two large-scale program evaluation studies researching the effects of feedback systems about patients' progress in ambulatory psychotherapy have been conducted in Germany. One study used a longitudinal place randomized design in which randomization took place at the therapist level; the intervention group had access to a new feedback system, while the control group provided treatment as usual. The other study used a simple follow-up design without a control group: all therapists had access to the same feedback system and received feedback on how their patients stood compared with other patients.
Session Title: Balancing Autonomy and Uniformity in a Multi-site Evaluation: Evaluation of Program Integration Efforts at the Centers for Disease Control and Prevention (CDC)
Multipaper Session 107 to be held in Lone Star F on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Thomas Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov
Roundtable Rotation I: Using Technological Pedagogical Content Knowledge and an Evaluation Framework to Evaluate Online Courses and Tools
Roundtable Presentation 108 to be held in MISSION A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Presenter(s):
Andrea Velasquez, Brigham Young University, andrea_velasquez@byu.net
David D Williams, Brigham Young University, david_williams@byu.edu
Abstract: Evaluating the quality of online courses has become a complex challenge as designers and evaluators face multiple dimensions of the online context. This presentation will examine three frameworks, an evaluation framework (Williams & Graham, 2010), Technological Pedagogical Content Knowledge (Mishra & Koehler, 2006), and ADDIE, that can be used together to help evaluators of online courses identify specific criteria and consider important questions during formative and summative evaluation. The three frameworks are presented in the context of a design case to help evaluators identify and consider relevant criteria in online course evaluations. Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A new framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054. Williams, D. D., & Graham, C. R. (2010). Evaluating e-learning. In B. McGaw, E. Baker, & P. P. Peterson (Eds.), International Encyclopedia of Education (3rd ed.). Oxford, UK: Elsevier.
Roundtable Rotation II: Regional Education Master's Online Training in Evaluation (REMOTE) Messages: The Job Value of a Distance Learning Graduate Program for the Pacific Region
Roundtable Presentation 108 to be held in MISSION A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Presenter(s):
Charles Giuli, Pacific Resources for Education and Learning (PREL), giulic@prel.org
Abstract: During 2007-2009, the University of Hawaii and Pacific Resources for Education and Learning (PREL) offered an NSF-sponsored online master's degree in evaluation practice called the Regional Education Master's Online Training in Evaluation (REMOTE) program. Previous presentations at AEA have described the challenges of implementation and the lessons learned. The purpose of this session is to present and discuss results from a follow-up study of the relevance of the program to on-the-job improvement for REMOTE participants. The study was conducted about a year after the program ended so that graduates would have time to apply the skills and knowledge they acquired in the course to their work. Graduates were interviewed and completed a survey about how the program helped them understand their work and improve their decision-making.
Roundtable Rotation I: Increasing Nonprofit Sustainability Activities With Effective Requests for Proposals (RFPs): A Mixed Methods Evaluation of RFPs as an Instrument for Success
Roundtable Presentation 109 to be held in MISSION B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Nakia James, Western Michigan University, nakia.s.james@wmich.edu
Abstract: A nonprofit organization's primary purpose is to provide programs and/or services, and nonprofits tend to rely heavily on grant funding to sustain and deliver those programs and services. Accordingly, NPOs often issue Requests for Proposals (RFPs) to procure needed services. Since most grants require an organizational assessment or program evaluation as part of their annual report, an RFP is often developed to retain the services of an external evaluator. However, though the grant may include this requirement, no additional information may be offered to assist the NPO in formulating an appropriate RFP. Subsequently, RFPs often fail to include pertinent and appropriate information about the evaluation services requested, and potential external consultants are often ill-equipped to develop an appropriate proposal because of these deficiencies. Clarity and inclusion of key elements are necessary; their absence can lead to substandard proposals and even unfulfilled services.
Roundtable Rotation II: Improving the Process of Reviewing Research Proposals: Reflections of a Research Review Committee
Roundtable Presentation 109 to be held in MISSION B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
River Dunavin, Albuquerque Public Schools, dunavin_r@aps.edu
Nancy Carrillo, Albuquerque Public Schools, carrillo_n@aps.edu
Ranjana Damle, Albuquerque Public Schools, damle@aps.edu
Abstract: Annually, dozens of evaluation and educational research proposals are submitted to Albuquerque Public Schools (APS) by universities, agencies, and individuals aspiring to conduct a remarkable range of projects. A Research Review Committee (RRC) convenes to examine submitted research and evaluation proposals for approval to conduct research at APS. Beyond ensuring projects are ethically sound, members of the RRC must balance the interests of the District with the research needs of applicants, grant requirements, and the burden on the District and schools. Some of the questions we have considered include: What does 'of interest and benefit to the District' mean? Should we ensure results are made available to the District? Should research quality be a key consideration? Few guidelines are available. The purpose of this roundtable is to consider these and other questions in order to make the research review process more systematic and efficient while providing a collaborative forum for colleagues in a research review role.
Session Title: A Tool for Designing Evaluations of Paradigm Shifts in Complex System Interventions
Skill-Building Workshop 110 to be held in BOWIE A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Beverly A Parsons, InSites, bparsons@insites.org
Pat Jessup, InSites, pjessup@insites.org
Marah Moore, i2i Institute Inc, marah@i2i-institute.com
Abstract: The session begins with an overview of a one-page framework for displaying a theory of change in complex systems. It focuses on patterns of change over time across multiple levels of a social system. It incorporates attention to system boundaries, relationships, and perspectives as well as system dynamics. Working in pairs or triads, the participants will practice constructing a similar visual framework based on a situation that one of them is evaluating that involves a fundamental paradigm shift (e.g., a shift from a focus on deficits to a focus on assets; from delivery of information to engaged learning; from waste to recycling). Then they will practice using their framework as the basis for designing an evaluation that helps evaluation users leverage complex dynamics in the social systems involved to achieve the desired shift with attention to sustainability and scalability.
Roundtable: Maximizing Our Collective Talent: Conversations With Senior Evaluators
Roundtable Presentation 111 to be held in BOWIE B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Tamara Bertrand Jones, Florida State University, tbertrand@fsu.edu
Pamela Frazier-Anderson, Lincoln University, pfanderson@lincoln.edu
Abstract: In many cultures, community elders are respected and revered for their wisdom, their years of personal and professional experience, and the guidance they provide to younger generations. As the field of evaluation continues to grow and evaluation training programs become available to increasingly diverse populations, the relationships with and experiences of senior evaluators become a valuable resource. It is rare that senior evaluators, considered experts in their respective fields, are readily accessible to those who would like to learn from them. Participants in Bertrand (2006) identified participation in professional association meetings and conferences as a major influence on the development of professional relationships. This roundtable session brings together national and international senior evaluation leaders and novice and mid-career evaluators to engage in evaluation discourse. The expectation is that these conversations will assist participants with improving the quality of their own evaluation activities, as well as lay the foundation for lasting professional relationships.
Session Title: Understanding, Building, and Evaluating Advocacy Capacity
Multipaper Session 112 to be held in BOWIE C on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Brian Quinn, Robert Wood Johnson Foundation, bquinn@rwjf.org
Roundtable Rotation I: A Conceptual Framework to Assess the Sustainability of Community Coalitions Post-Federal Funding
Roundtable Presentation 113 to be held in GOLIAD on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG and the Health Evaluation TIG
Presenter(s):
Alycia Infante, University of Chicago, infante-alycia@norc.org
Jennifer Benz, University of Chicago, benz-jennifer@norc.org
Hilary Scherer, University of Chicago, scherer-hilary@norc.org
Caitlin Oppenheimer, University of Chicago, oppenheimer-caitlin@norc.org
Wilma Tilson, United States Department of Health and Human Services, wilma.tilson@hhs.gov
Abstract: We present a framework to assist evaluators with defining and measuring the sustainability of community coalitions after initial funding has ended. The federal government increasingly uses community coalitions as a programmatic approach to address emerging community health issues. The presumption is that successful community coalitions will be able to identify new resources to continue their activities and to sustain their impact in the community beyond the initial grant period. In defining sustainability, the framework considers coalition structure, goals, and activities. The framework includes a number of enabling characteristics affecting sustainability, intermediate outcomes (e.g., the expansion or retraction of the coalition, its goals or activities), and the coalition's long-term impact on the community. Finally, the framework includes the influence of contextual factors (e.g., the economy) on the sustainability of the coalition and its ability to generate outcomes. This framework is applied to a sustainability assessment of the Healthy Communities Access Program grantees.
Roundtable Rotation II: Using Community Partnerships to Reach Hard-to-Reach Populations in Health-Related Evaluations
Roundtable Presentation 113 to be held in GOLIAD on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG and the Health Evaluation TIG
Presenter(s):
Julia Alvarez, JVA Consulting LLC, julia@jvaconsulting.com
Nancy Zuercher, JVA Consulting LLC, nancy@jvaconsulting.com
Brian O'Connell, JVA Consulting LLC, brian@jvaconsulting.com
Abstract: What is the best way to reach out to hard-to-reach populations? How can you conduct a culturally responsive evaluation and get the response rates you need? Are you looking for tips and advice on how to conduct a health or healthcare program evaluation that enables you to collaborate more with the communities you serve? Come share your knowledge and ask questions in this roundtable discussion, where participants will have opportunities to talk through the sensitive underpinnings of health-related evaluations and brainstorm with colleagues on effective ways of reaching out to hard-to-reach populations. Session facilitators will share three unique experiences of using community partnerships to reach hard-to-reach people in health-related evaluations. Facilitators will guide conversations about the advantages and disadvantages of partnering, strategies for gathering data from these sensitive groups, and the potential impacts partnerships can have on an evaluation.
Roundtable Rotation I: Non-response Bias as a Limitation: Practical Perspectives on Evaluation Quality Using Survey and Questionnaire Data
Roundtable Presentation 114 to be held in SAN JACINTO on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Michelle Bakerson, Indiana University South Bend, mmbakerson@yahoo.com
Abstract: Evaluation quality, from a practical standpoint, depends on the quality of the data gathered. Surveys and questionnaires are commonly used tools for gathering data; however, this type of data gathering comes with certain limitations and biases. One major limitation is non-response bias, which exists when the interpretation of results would differ between those who respond and those who do not. The bias created by non-response is a function of both the level of non-response and the extent to which non-respondents differ from respondents (Kano, Franke, Afifi & Bourque, 2008). This session examines what occurs within survey and questionnaire data and presents alternative interpretations that take non-response into account, checking that the data are valid and not distorted by non-response bias. Taking this extra step when examining data will help ensure quality in evaluation findings.
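As an illustration of the decomposition cited above, the short Python sketch below computes the approximate bias in a respondent mean as the non-respondent share multiplied by the difference between respondent and non-respondent means. All numbers are hypothetical and are not drawn from the presentation.

    # Hypothetical illustration of non-response bias in a respondent mean:
    # bias is roughly the non-respondent share times the respondent/non-respondent
    # difference in means. None of these numbers come from the session.
    n_invited = 1000            # invited sample (hypothetical)
    n_respondents = 600         # completed questionnaires (hypothetical)
    mean_respondents = 3.8      # mean rating among respondents (hypothetical)
    mean_nonrespondents = 3.2   # mean rating among non-respondents, if it were known

    nonrespondent_share = (n_invited - n_respondents) / n_invited
    bias = nonrespondent_share * (mean_respondents - mean_nonrespondents)
    print(f"Approximate bias in the respondent mean: {bias:.2f}")  # 0.4 * 0.6 = 0.24

The same arithmetic shows why high response rates alone do not guarantee unbiased estimates: even a small non-respondent share matters when non-respondents differ substantially from respondents.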
Roundtable Rotation II: Coding Open-Ended Survey Items: A Discussion of Codebook Development and Coding Procedures
Roundtable Presentation 114 to be held in SAN JACINTO on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Heather Bennett, University of South Carolina, bennethl@mailbox.sc.edu
Joanna Gilmore, University of South Carolina, jagilmor@mailbox.sc.edu
Grant Morgan, University of South Carolina, morgang@mailbox.sc.edu
Abstract: Responses to open-ended items are generally analyzed inductively through the examination of themes. Unfortunately, key decisions in this process (such as how to segment open-ended responses and how many codes to include in a codebook) are often glossed over in published research articles (Draugalis, Coons, & Plaza, 1998; Lupia, 2008). To promote greater transparency, this roundtable presentation will describe the decision-making process researchers from the Office of Program Evaluation (OPE) used to code open-ended items. OPE researchers will also share lessons learned about facilitating the coding of open-ended items among a team of researchers and ways to present findings to clients. This roundtable will be useful for introducing coding procedures to novice qualitative researchers. Additionally, researchers will encourage a discussion among advanced researchers concerning key decisions in analyzing and reporting data from open-ended survey items.
Session Title: Assessing Student Learning Outcomes I: Incorporating Feedback
Multipaper Session 115 to be held in TRAVIS A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Jean-Marc Wise, Florida State University, jwise@fsu.edu
Session Title: Viabilities of Technologies in Evaluation Research
Multipaper Session 116 to be held in TRAVIS B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Margaret Lubke, Utah State University, mlubke@gmail.com
Session Title: Planning Programs: Allocating Scarce Resources Based on Needs Assessment
Multipaper Session 117 to be held in TRAVIS C on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Needs Assessment TIG
Chair(s):
Ann Del Vecchio, Alpha Assessment Associates, delvecchio.nm@comcast.net
Sue Hamann, National Institutes of Health, sue.hamann@nih.gov
Discussant(s):
James Altschuld, The Ohio State University, altschuld.1@osu.edu
Sue Hamann, National Institutes of Health, sue.hamann@nih.gov
Session Title: Emerging Strategies and Tools for Evaluating Environmental and Policy Change Approaches to Chronic Disease Prevention
Multipaper Session 118 to be held in TRAVIS D on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Nicola Dawkins, ICF Macro, nicola.u.dawkins@macrointernational.com
Discussant(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Session Title: Are Universal School-based Prevention Programs Effective? It Depends on the Students and Outcomes Targeted
Multipaper Session 119 to be held in INDEPENDENCE on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Wendy Garrard, University of Michigan, wgarrard@umich.edu
Discussant(s):
Ann Doucette, George Washington University, doucette@gwu.edu
Session Title: Innovative Applications of Propensity Scores and Propensity Score Methodology Adjustments to Address Data Constraints
Panel Session 120 to be held in PRESIDIO A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Vajeera Dorabawila, New York State Office of Children and Family Services, vajeera.dorabawila@ocfs.state.ny.us
Discussant(s):
MH Clark, Southern Illinois University, mhclark@siu.edu
Abstract: The objective of this presentation is to illustrate innovative applications of propensity scores and methodology adjustments that can be made to address data constraints. This session is of particular interest to the Quantitative Methods Topical Interest Group, as it covers propensity scores in a way that will appeal to both novices and experts. It will be of interest and accessible to novices because the first presentation will discuss various propensity score matching techniques and share computer programs. Both experts and novices will find it of interest because the applications outline how data constraints and evaluation needs can be addressed through the use of propensity scores. In doing so, the presenters will describe novel applications and methods of addressing data issues.
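For readers new to these techniques, the sketch below shows one common propensity-score matching workflow: scores estimated with a logistic regression, then each treated case paired with the comparison case whose score is closest. It is a generic illustration on simulated data using scikit-learn, offered only as background; it is not the presenters' computer programs.

    # Minimal propensity-score matching sketch on simulated data (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                          # observed covariates
    treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # assignment related to X

    # Step 1: estimate each case's propensity score from the covariates.
    scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    # Step 2: pair each treated case with the comparison case closest in score.
    treated_idx = np.where(treated == 1)[0]
    control_idx = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(scores[control_idx].reshape(-1, 1))
    _, matches = nn.kneighbors(scores[treated_idx].reshape(-1, 1))
    matched_controls = control_idx[matches.ravel()]
    print(f"Matched {len(treated_idx)} treated cases to comparison cases.")

Variations discussed in the literature (caliper matching, matching with replacement, stratification, and weighting) alter Step 2; the session's presentations address how such choices interact with data constraints.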
Session Title: Organizational Learning and Evaluation Capacity Building TIG Business Meeting and Presentations: Advancing Quality in Evaluation Capacity Building
Business Meeting and Multipaper Session 121 to be held in PRESIDIO B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
TIG Leader(s):
Michelle Baron, The Evaluation Baron LLC, michelle@evaluationbaron.com
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Stephen J Ruffini, WestEd, sruffin@wested.org
Megan Bennett, Training Evaluation and Metrics, megan_bennett@cable.comcast.com
Chair(s):
Michelle Baron, The Evaluation Baron LLC, michelle@evaluationbaron.com
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Session Title: Integrating Realist Evaluation Strategies in a Substance Abuse and Mental Health Services Administration (SAMHSA) System of Care Local Evaluation
Skill-Building Workshop 122 to be held in PRESIDIO C on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Mansoor Kazi, State University of New York at Buffalo, mkazi@buffalo.edu
Connie Maples, ICF Macro, connie.j.maples@macrointernational.com
Rachel Ludwig, Chautauqua Tapestry, mesmerr@co.chautauqua.ny.us
Abstract: This demonstration will illustrate how realist evaluation strategies can be applied in the evaluation of 100% natural samples in agencies that provide mental health and other services to youth and families. Mental health agencies routinely collect data that are typically not used for evaluation purposes. This demonstration will include new data analysis tools, drawn from both the efficacy and epidemiology traditions, to investigate patterns in these data in relation to outcomes, interventions, and the contexts of practice. For example, binary logistic regression can be used repeatedly with whole-school databases at every marking period to investigate the effectiveness of school-based interventions and their impact on school outcomes. The demonstration will include practice examples drawn from the SAMHSA-funded System of Care community mental health services for children with serious emotional disturbance and their families in Chautauqua County, New York State.
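To make the repeated-regression idea above concrete, the sketch below refits a binary logistic regression on simulated whole-school data at each marking period to estimate whether receiving a school-based intervention predicts a passing outcome. The data, column names, and effect size are hypothetical and are not drawn from the Chautauqua County evaluation.

    # Illustrative only: repeated binary logistic regression, one fit per marking
    # period, on simulated data. Column names and the simulated effect are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 400
    df = pd.DataFrame({
        "marking_period": rng.integers(1, 5, size=n),         # four marking periods
        "received_intervention": rng.integers(0, 2, size=n),
    })
    # Simulate a modest positive effect of the intervention on passing.
    df["passed"] = rng.binomial(1, 0.5 + 0.15 * df["received_intervention"])

    for period, group in df.groupby("marking_period"):
        model = LogisticRegression().fit(group[["received_intervention"]], group["passed"])
        odds_ratio = float(np.exp(model.coef_[0][0]))
        print(f"Marking period {period}: estimated odds ratio = {odds_ratio:.2f}")

Tracking how the estimated odds ratio moves from one marking period to the next is the kind of pattern-over-context analysis the realist approach described above uses routinely collected agency data for.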
Roundtable Rotation I: Evaluation Quality in Measuring Teacher Quality: The Impact of the Targeted Assistance Coaching Model on Local Education Agency (LEA) Improvement
Roundtable Presentation 123 to be held in BONHAM A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Bruce Yelton, Praxis Research Inc, praxisresearchinc@gmail.com
MaryBeth Gilbert, Praxis Research Inc, marybethgilbert@bellsouth.net
Paula Plonski, Praxis Research Inc, pplonski@windstream.net
Abstract: This evaluation focused on the impact of the Targeted Assistance (TA) coaching model, implemented by a large southeastern school system, on instructional quality and student learning as required by No Child Left Behind legislation. The TA coaching model provides intensive coaching assistance to teachers within schools that are selected based on a data-driven greatest-need rubric. Data were collected on teacher performance, classroom instructional behaviors, the intensity of coaching services provided, and student achievement. Evaluation quality was enhanced by a theory-driven program model constructed with stakeholder input, school administrative involvement, use of a growth model that included sub-group differences to measure student academic progress, and training in the use of a classroom observation instrument that was analyzed for inter-rater reliability. Results to date show significant positive changes from pre- to post-coaching classroom observations in teacher instructional behavior.
Roundtable Rotation II: Evaluating the Effects of Year-Long Professional Development on Teachers: Final Refinement of the University of California, Los Angeles (UCLA) Center X and Social Research Methodology (SRM) Evaluation Group Teacher Pre/Post Survey
Roundtable Presentation 123 to be held in BONHAM A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Nicole Gerardi, University of California, Los Angeles, gerardi_nicole@yahoo.com
Janet Lee, University of California, Los Angeles, janet.lee@ucla.edu
Abstract: The U.S. Department of Education, state governments, school districts, universities, and private educational programs are forming partnerships to deliver large-scale and individualized professional development (PD) to lower-performing school districts and schools to raise the quality of teachers. In this roundtable we share an instrument, under development for five years with UCLA Center X and the SRM Evaluation Group, geared toward evaluating year-long PD efforts. The survey is informed by multiple survey sources, evaluators, educational researchers, PD program directors, expert subject matter professionals, professional learning partners, and teachers. It has been piloted in three different PD programs, at four separate administrations, in a combined total of 33 different schools. We now believe it is time to receive feedback from the larger evaluation community. We hope to further refine the instrument in this roundtable and share the resource with other evaluators, thereby improving the practice of evaluation.
Session Title: The Intersection of Strategy and Evaluation: What Are We Learning?
Panel Session 124 to be held in BONHAM B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Sarah Stachowiak, Organizational Research Services, sarahs@organizationalresearch.com
Discussant(s):
Julia Coffman, Center for Evaluation Innovation, jcoffman@evaluationexchange.org
Abstract: Increasingly, philanthropic organizations want to look beyond individual grant-level evaluations to see what they can learn across their portfolios to make strategic decisions. For evaluators, this raises important questions: What is the intersection between strategy development and evaluation? What levels of evidence are necessary? What approaches to data collection are a good fit and match strategy and budget cycles? What kinds of products are created? How do the needs of philanthropic organizations differ from public sector or non-profit organizations? What are unique dynamics or contexts in collecting information from grantees (e.g., sun-setting funding, balancing expectations among grantees)? This session will explore these questions from the perspectives of evaluation consultants with experience working on strategy-level evaluation and foundation evaluation staff. Using a "fishbowl" approach, panelists will engage in a dialogue with each other, providing real-world examples of how they wrestle with the unique opportunities and challenges associated with this work.
Session Title: Managing Quality Through the Stages of Science, Technology, Engineering and Mathematics (STEM) Educational Evaluation
Panel Session 125 to be held in BONHAM C on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
Abstract: The Program Evaluation Standards and the AEA Guiding Principles represent overarching goals for fostering quality in evaluation studies. Nonetheless, different contexts shape the concept of quality during the stages of a program evaluation, and throughout evaluation studies, evaluation managers make methodological choices to resolve challenges and maintain quality. This panel will discuss a collection of challenges faced by education program evaluators at different evaluation stages: planning, data collection, data analysis, and reporting. Real-world challenges, such as sensitivity to educational settings (formal, informal, and afterschool), balancing partner needs, multisite logistics, data burden on participants, and use in reporting, are situated within The Program Evaluation Standards and illustrated using STEM education evaluation case examples. Challenges presented will be balanced with successful strategies and lessons learned from practice.
Session Title: Evaluating Supplementary Programs in Educational Settings
Multipaper Session 126 to be held in BONHAM D on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Sheila A Arens, Mid-Continent Research for Education and Learning, sarens@mcrel.org
Discussant(s):
Javan Ridge, Colorado Springs School District 11, ridgejb@d11.org
Session Title: Environmental and Energy Evaluations: Strategies for Pursuing Innovative Approaches That Demonstrate Impact, Promote Continuous Improvement and Foster Organizational Learning
Multipaper Session 127 to be held in BONHAM E on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Mary McEathron, University of Minnesota, mceat001@umn.edu
Session Title: Standards of Evidence for Evaluating Extension Programs: A Changing Picture?
Panel Session 128 to be held in CROCKETT A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Nicki King, University of California, Davis, njking@ucdavis.edu
Abstract: This panel session will examine a variety of perspectives regarding what types of evidence may be best suited to demonstrate program effectiveness within the Extension system. The question is particularly challenging for Extension because of its numerous partners at the federal and state levels, as well as its varied funding streams for program development and evaluation. The three presentations will each cover a distinct viewpoint, including the use of randomized trials, the challenges presented by Extension's broad diversity of programs, and the contributions of program logic models to decisions about acceptable evidence. The evolution of Extension at the federal and state levels, in terms of organizational structure and funding patterns, creates the need to examine how determinations are made regarding program outcomes and impacts. Implications for organizational learning and evaluation capacity building will be discussed.
Session Title: Using an Interest-Driven Project to Teach Program Planning and Evaluation
Demonstration Session 129 to be held in CROCKETT B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
David Diehl, University of Florida, dcdiehl@ufl.edu
Abstract: This workshop provides an overview of an interest-driven approach to teaching undergraduate and graduate program planning and evaluation. The approach optimizes student learning by connecting the course project to a social policy interest of the student's choosing. Using each student's proposed social program as a foundation, the project focuses on the following components: 1) Situation Statement, 2) Key Informant Interview, 3) "What Works" (evidence-based programs), 4) Program Logic Model, and 5) Evaluation Plan. In a course where students sometimes struggle with the core content, the interest-driven project approach engages the students in the key issues related to program planning and evaluation. An overview of the approach, student samples, key challenges, and lessons learned will be presented. Discussion will focus on the ways in which the approach can be adapted for different audiences. Attendees will learn practical strategies for teaching program planning and evaluation that engage students and optimize learning.
Session Title: Networking and Getting Involved With the American Evaluation Association
Skill-Building Workshop 130 to be held in CROCKETT C on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Nicole Cundiff, University of Alaska, Fairbanks, karim@siu.edu
Cady Berkel, Arizona State University, cady.berkel@asu.edu
Abstract: This session will help new evaluators make the best use of their time at the conference by emphasizing the development of professional connections that are essential for successful evaluation careers. First, a theoretical model linking networking to career success will be presented. Then, we will provide information about networking at the AEA conference (including concurrent sessions, TIG meetings, social events, volunteering, and hospitality suites) and specific strategies for connecting with colleagues. For example, we will give an overview of AEA and TIG structures, addressing how to get involved in leadership positions and what to expect. Further, we will explain the conference program and use examples from the audience to demonstrate how to search for relevant sessions. Participants will leave this skill-building workshop with a strategy for making connections with colleagues, supporting their career success.
Session Title: Informing Government Policy Through Evaluation: A Cross-site Evaluation of the Self-Employment Initiative (Start-Up USA)
Multipaper Session 131 to be held in CROCKETT D on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Teserach Ketema, United States Department of Labor, teserachk@yahoo.com
Discussant(s):
Richard Horne, United States Department of Labor, horne.richard@dol.gov
Session Title: Evaluation and the Complexities of International Financial Assistance Programs
Multipaper Session 132 to be held in REPUBLIC A on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Session Title: Evaluation of the Clinical and Translational Science Awards (CTSA) Programs: A Focus on Quality
Panel Session 133 to be held in REPUBLIC B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
D Paul Moberg, University of Wisconsin, dpmoberg@wisc.edu
Discussant(s):
William M Trochim, Cornell University, wmt1@cornell.edu
Abstract: This panel addresses evaluation quality in a complex organizational environment implementing health research infrastructure interventions, specifically the 46 academic institutions receiving Clinical and Translational Science Awards (CTSAs). Program evaluation happens at multiple levels as required by funders. CTSA evaluators with broad disciplinary backgrounds apply a range of approaches and mechanisms to evaluating these interventions. The settings and context raise many questions regarding the very concept and definition of evaluation, the necessary level of rigor, the range of purposes, and the level of independence versus integration, leading us to a constant need to "evaluate our evaluation". Our presentations explore: 1) applying evaluation standards to improve programs; 2) integrating external evaluative input into quality improvement; 3) using qualitative data to enhance evaluation utility; 4) linking program needs to evaluation quality; and 5) examining the utility of publication data as a key metric measuring the quality of biomedical research in the context of the CTSA program.
Session Title: Round Robin Focus Groups: Participatory Inquiry - From Data Gathering to Reporting in an Hour
Skill-Building Workshop 134 to be held in REPUBLIC C on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
Cara Ciminillo, University of Pittsburgh, ciminill@pitt.edu
Abstract: The Round-Robin Focus Group technique was developed with colleagues to facilitate gathering data in nested small groups of up to 10 (within larger groups of even 100) through focused questioning that then involves participants in actively summarizing, analyzing and interpreting, and reporting out findings to the larger group, all within an hour of participative inquiry. The technique not only serves as a superb self-contained evaluation activity from data gathering to reporting, but also offers a small-budget evaluation alternative for professional evaluators and program staff alike.