| Session Title: Teaching About Evaluation: Methods With an Admixture of Epistemology and Ontology |
| Expert Lecture Session 701 to be held in International Ballroom A on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Presidential Strand |
| Presenter(s): |
| Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca |
| Abstract: Teaching evaluation, in university programs or workshops, focuses largely on how to do evaluation: the methods used to define, conduct, and report evaluation take center stage. The emphasis on methods reflects evaluation as a professional practice that supports the development of craft knowledge for those who would call themselves evaluators. Implicitly, though, in teaching how to do evaluation one hears refrains of foundational conceptions: what counts as appropriate and useful evidence, and in what form, as well as the desirable end goal of evaluation and how evaluative knowledge can and should be used to serve that end goal. This approach to teaching evaluation results in differentiated and specialized cadres of evaluators, who share a tenuous sense of solidarity in their work. The implications of teaching about evaluation by focusing on valuing, with a secondary focus on methods, will be considered. |
| Session Title: Evaluation in the Context of High Stakes Assessments |
| Multipaper Session 702 to be held in International Ballroom B on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| M David Miller, University of Florida, dmiller@coe.ufl.edu |
| Abstract: Funding for many educational programs is often tied to an evaluation of the effectiveness of those programs. With the increased emphasis on accountability and high stakes assessments required by the No Child Left Behind legislation and similar state initiatives, policy makers often define program effectiveness in terms of growth on the required high stakes assessments. At the same time, there is a growing emphasis on experimental design for measuring program effectiveness, which has resulted in a push for large-scale experimental designs with random assignment of units and well-established control groups. This presentation will focus on the use of high stakes accountability data to examine the effectiveness of education programs, particularly the issues of interpreting results of experimental studies that use high stakes accountability testing. |
| Design Alternatives to Measure Effectiveness of Programs With High Stakes Assessments |
| M David Miller, University of Florida, dmiller@coe.ufl.edu |
| This paper will consider issues related to data collection when adequate control groups are not possible because high stakes assessments are the required outcome measure and all schools are working to increase their scores. Designs include measurement over time and looking at growth models; regression discontinuity designs; and experimental designs with alternative treatment options. Data are reported for the Florida Reading Initiative that has been tracking 53 schools for the last five years in a state testing program that spans more than a decade. Other examples will be discussed. |
| Interpreting High Stakes Test Data: Consequential Evidence and Multiple Stakeholders |
| Jenny Bergeron, University of Florida, jennybr@ufl.edu |
| This paper focuses on the consequential evidence for valid interpretation of high stakes test scores. Within the context of evaluation, it is important to examine the interpretation of testing data, particularly in high stakes testing environments. This paper reports the effects of high stakes assessments in Florida, combining results from three studies. In the first study, principals were interviewed to determine the effects of the high stakes tests on test interpretation. In the second and third studies, teachers and students were given surveys to determine the effects on instruction and on student psychological variables (e.g., feelings of control over their ability to do well on tests). |
| Session Title: What Have We Learned About Evaluation Principles and Practice in International Non-governmental Organizations? |
| Panel Session 703 to be held in International Ballroom C on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Michael Scriven, Western Michigan University, scriven@aol.com |
| Discussant(s): |
| Jim Rugh, CARE International, jimrugh@mindspring.com |
| Abstract: This session will discuss the findings of a major study conducted as part of a PhD dissertation on evaluation principles and practice in International Non-Governmental Organizations (INGOs). The study had three main objectives: (i) assess the evaluation practice and principles adopted by international development agencies, with special focus on INGOs, and how they relate to the evaluation practice and expectations of other development institutions; (ii) assess the evaluation standards being developed by the American Council for Voluntary International Action (InterAction) and explore the extent to which they have been adopted by (or reflect the practice of) its membership; and (iii) propose specific improvements to the InterAction standards, eventually arriving at specific methodological frameworks tailored to evaluating initiatives in a few areas of international development. The study included in-depth analysis of the current literature, consultation with experts in the field, and surveys and in-depth interviews of a sample of INGOs that are members of InterAction. |
| |||
|
| Session Title: Stakeholder Identification and Assessment in Nonprofit Organizations and Public Agencies |
| Demonstration Session 704 to be held in Liberty Ballroom Section A on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Presenter(s): |
| Barbara Wygant, Western Michigan University, barbara.wygant@wmich.edu |
| Abstract: Broadly defined, stakeholders are the people and groups that affect or are affected by an organization. A key stakeholder assessment for an organization may be conducted with relatively minimal resources, especially in comparison to the costs and problems that may arise if key stakeholders are not identified or are ignored. This demonstration is based on an extensive literature review of stakeholder groups related to nonprofit organizations and public agencies. The various definitions and key concepts related to this area will be discussed. Various techniques of stakeholder identification and analysis will be presented. The facilitator will demonstrate the steps in conducting a complete stakeholder assessment through the use of a public administration graduate-level classroom assignment. Stakeholder diagrams, issue networks, political environment, and power rankings will also be discussed. Evaluation tools for stakeholder assessment will be provided and challenges and future trends in evaluation-related stakeholder management and measurement will be discussed. |
| Session Title: Identifying Critical Processes and Outcomes Across Evaluation Approaches: Empowerment, Practical Participatory, Transformative, and Utilization-focused |
| Expert Lecture Session 705 to be held in Liberty Ballroom Section B on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Theories of Evaluation TIG |
| Chair(s): |
| Tanner LeBaron Wallace, University of California, Los Angeles, twallace@ucla.edu |
| Presenter(s): |
| Marvin Alkin, University of California, Los Angeles, alkin@gseis.ucla.edu |
| Discussant(s): |
| J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca |
| David Fetterman, Stanford University, profdavidf@yahoo.com |
| Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu |
| Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net |
| Abstract: Inspired by the recent American Journal of Evaluation article by Robin Miller and Rebecca Campbell (2006), this session proposes a set of identifiable processes and outcomes for four particular evaluation approaches: Empowerment, Practical Participatory, Transformative, and Utilization-Focused. The four evaluation theorists responsible for these approaches will serve as discussants to critique our proposed set of evaluation principles. This session seeks to answer two questions for each approach: What process criteria would identify each specific evaluation approach in practice? And what observed outcomes are necessary in order to judge that the evaluation was "successful" with regard to the particular evaluation approach? Providing answers to these questions through both the presentation and the discussion among the theorists will yield comparative insights into common and distinct elements among the approaches. Our ultimate aim is to advance the discipline of evaluation by increasing conceptual clarity. |
| Session Title: Thinking About Systems Thinking |
| Multipaper Session 706 to be held in Mencken Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Systems in Evaluation TIG |
| Chair(s): |
| Derek A Cabrera, Cornell University, dac66@cornell.edu |
| Session Title: Engaging Communities in Disaster and Emergency Management Planning, Education, and Evaluation |
| Multipaper Session 707 to be held in Edgar Allen Poe Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Disaster and Emergency Management Evaluation TIG |
| Chair(s): |
| Liesel Ritchie, Western Michigan University, liesel.ritchie@wmich.edu |
| Session Title: Story Bank: Learning through Story-telling |
| Demonstration Session 708 to be held in Carroll Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Qualitative Methods TIG |
| Presenter(s): |
| Cassie Bryant, Cassandra Drennon & Associates, cassie@drennonassoc.net |
| Diane Monaghan, Cassandra Drennon & Associates, diane@drennonassoc.net |
| Abstract: The evaluators used an intranet website to gather data for an external evaluation of a statewide adult education program. Teachers at five pilot sites submitted weekly stories about their project-related experiences in the classroom. Using overhead projection, presenters will navigate attendees through the features of the website, showing how teachers used the site and how evaluators organized and analyzed resulting data. Handouts will include basic steps for creating a story bank, issues to consider, and examples of stories and logic models that emerged from the process. The concept, based on the change model of evaluation, became an action research project for teachers. Strengths included richer data collection and a deliberate mechanism for teachers to reflect on and advance their strategies. Weaknesses were teacher resistance to participating in the discussion board and lack of support from the state program coordinator. |
| Session Title: Approaches to Evaluation in Social Work Settings |
| Multipaper Session 709 to be held in Pratt Room, Section A on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Social Work TIG |
| Chair(s): |
| Sarita Davis, Clark Atlanta University, sdavis@cau.edu |
| Session Title: Retention in a Longitudinal Outcomes Study: Exploring Two Sides of the Same Coin, Who Asks and Who Answers |
| Multipaper Session 710 to be held in Pratt Room, Section B on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Human Services Evaluation TIG |
| Chair(s): |
| Stacy Johnson, Macro International Inc, stacy.f.johnson@orcmacro.com |
| Discussant(s): |
| Christine Walrath, Macro International Inc, christine.m.walrath-greene@orcmacro.com |
| Abstract: This session examines two factors that impact retention in longitudinal research targeting marginalized or distressed populations: the internal and external variables surrounding the interviewer and the interviewee. The two papers draw on analysis of data from a large sample of youth and caregivers from multiple communities who have participated in the National Evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program from 2002 to 2007. Youth with serious emotional disturbance and their caregivers provided quantitative data at 6-month intervals for up to 3 years. Local evaluation teams collecting the data provided qualitative and quantitative information on their staffing models, incentives, training, support, and the life experiences of data collectors. This session will provide an analysis of these conditions relative to retention rates across a wide range of program and client characteristics. Lessons learned for informing future evaluation practice will be shared. |
| Retention in a Longitudinal Outcomes Study: Impact of Staffing Structure, Agency Policies and Staff Characteristics on Participants |
| Stacy Johnson, Macro International Inc, stacy.f.johnson@orcmacro.com |
| Connie Maples, Macro International Inc, connie.j.maples@orcmacro.com |
| The most critical aspect of longitudinal research is the ability to maintain long-term contact with participants and to track their outcomes over extended periods of time (van Wheel et al., 2006). Attrition in longitudinal studies reduces their statistical power, introduces bias, and threatens internal and external validity (Crom et al., 2006). This paper focuses on data collection staffing models and their impact on participant retention in a 3-year longitudinal outcomes study of the Comprehensive Community Mental Health Services for Children and Their Families Program. Qualitative and quantitative analytical methods are used to explore how staffing variables, such as the demographic characteristics of data collectors and their role in the community of study, impact participant retention. Policies, procedures, and staff structures that support data collection activities are also analyzed. Finally, we will share lessons learned about overcoming obstacles to retaining participants longitudinally to inform future evaluation practice. |
| Retention in a Longitudinal Outcomes Study: An Exploration of the Effects of Respondent Characteristics, Roles and Consistency |
| Tisha Tucker, Macro International Inc, alyce.l.tucker@orcmacro.com |
| Laura Whalen, Macro International Inc, laura.g.whalen@orcmacro.com |
| Longitudinal studies are often challenged by participant reluctance to take part in the study, family changes and crises, competing priorities, and situational stressors (Ryan & Hayman, 1996). Hunt and White (1998) have noted that, along with study design, the study population of interest may impact retention. Though there is extensive research on the respondent characteristics that affect retention, the field lacks consensus on which characteristics have the greatest impact. This paper explores how the characteristics and roles of research respondents who have participated in the national evaluation of the Comprehensive Community Mental Health Services for Children and Their Families Program impact retention. By examining respondent characteristics by gender and race/ethnicity, respondent role by caregiver type, and respondent consistency by change in respondents, we identify which variables are most strongly correlated with high retention. Findings identify needs for strategic development to maximize retention rates among certain respondent types and structures. |
| Roundtable: Evaluation and the Institutional Review Board (IRB): A Tale of Two Cities |
| Roundtable Presentation 711 to be held in Douglas Boardroom on Saturday, November 10, 9:35 AM to 10:20 AM |
| Presenter(s): |
| Oliver Massey, University of South Florida, massey@fmhi.usf.edu |
| Abstract: Evaluation straddles the gap between research and service, concerning itself with both the methodological issues of traditional research studies and the practical issues of program management and improvement. These roles lead evaluators to collect and analyze a broad variety of information to better inform program managers about the effectiveness and functioning of their agencies and programs. At times these activities clearly involve research; at other times they are clearly consultative. Unfortunately, a huge grey area exists regarding what constitutes research and what constitutes adequate protection for individuals about whom information is collected. This roundtable is proposed as an opportunity for evaluators, whether university-based or not, to discuss the business and activities of evaluation, the concerns for adequate and appropriate protection of human subjects, and the interface with Institutional Review Boards (IRBs) that are charged with ensuring the rights of research participants in university-linked or federally funded sites. |
| Session Title: Theory-Based Models as a Guide to Stakeholder Collaboration, Ownership, and Engagement |
| Multipaper Session 712 to be held in Hopkins Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Program Theory and Theory-driven Evaluation TIG |
| Chair(s): |
| Dustin Duncan, Harvard University, dduncan@hsph.harvard.edu |
| Session Title: Fighting Poverty: What Works? Running Randomized Evaluations of Poverty Programs in Developing Countries |
| Expert Lecture Session 713 to be held in Peale Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG |
| Presenter(s): |
| Rachel Glennerster, Massachusetts Institute of Technology, rglenner@mit.edu |
| Abstract: Should limited education budgets in the developing world be spent on textbooks, teachers, or smaller classes? Should scarce health resources be spent on more doctors or basic sanitation? Policymakers lack the evidence they need to tackle these dilemmas. Researchers at the Abdul Latif Jameel Poverty Action Lab at MIT (J-PAL) are seeking to improve the effectiveness of poverty programs by implementing randomized evaluations around the world that provide rigorous evidence on what works in reducing poverty. Dr. Rachel Glennerster, Executive Director of J-PAL, will talk about how to overcome the challenges of running rigorous randomized evaluations in developing countries. She will discuss ways to introduce elements of randomization into programs in ways that fit naturally with the work of those running poverty programs on the ground. She will also discuss the techniques used by J-PAL researchers to rigorously measure issues such as women's empowerment, social capital, and trust. |
| Session Title: Starting Out Right: How to Begin Evaluating Community Organizing, Advocacy, and Policy Change Efforts Using a Prospective Approach |
| Demonstration Session 714 to be held in Adams Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Advocacy and Policy Change TIG |
| Presenter(s): |
| Justin Louie, Blueprint Research & Design Inc, justin@blueprintrd.com |
| Catherine Crystal Foster, Blueprint Research & Design Inc, catherine@blueprintrd.com |
| Abstract: How can we overcome the challenges of evaluating advocacy: the long time frames for change, the complex political environments, the constantly shifting strategies? How can we ensure that our evaluations remain relevant to community organizers, advocates, and their funders? Over the last three years, Blueprint Research & Design, Inc. has developed and refined a prospective approach to evaluating community organizing, advocacy, and policy change efforts that addresses these challenges head-on. This session will delve into the lessons we've learned over our last year of work with advocates, organizers, and funders in many fields (education, the environment, health care, media reform) and at many levels (coalitions, funder collaboratives, foundation-nonprofit partnerships, grantee clusters). We will compare across fields and levels to pull out lessons about what works, for whom, and in what contexts, and we will discuss our process, note obstacles, and describe how we've worked to overcome those obstacles. |
| Roundtable: Overcoming Mountains and Valleys: Examining the Dynamics of Evaluation With Underserved Populations |
| Roundtable Presentation 715 to be held in Jefferson Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Presenter(s): |
| Sylvette La Touche, University of Maryland, College Park, latouche@umd.edu |
| Amy Billing, University of Maryland, College Park, billing@umd.edu |
| Nancy Atkinson, University of Maryland, College Park, atkinson@umd.edu |
| Jing Tian, University of Maryland, tianjing@umd.edu |
| Robert Gold, University of Maryland, rsgold@umd.edu |
| Abstract: The U.S. Department of Health and Human Services, in its policy directive Healthy People 2010, identified home Internet access as a national priority, especially among traditionally excluded populations, namely minority, low-income, and rural groups. To address this need, the University of Maryland, College Park and Maryland Cooperative Extension launched “Eat Smart, Be Fit, Maryland!”. This research effort sought to assess the capacity of technology to promote positive health behaviors among low-literacy audiences, using a web portal as its primary component (http://www.eatsmart.umd.edu). Extensive efforts to assess the project's effectiveness have been made, including the use of both process and impact evaluation tools. A unique evaluation strategy was employed, combining traditional data collection methods with online evaluation techniques. The results will give other web-based efforts an opportunity to identify effective evaluation strategies and will help ensure that health disparities resulting from literacy barriers to e-health materials are addressed. |
| Session Title: Coalitions and Participatory Approaches in Health Partnership Evaluations |
| Multipaper Session 716 to be held in Washington Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Health Evaluation TIG |
| Chair(s): |
| Kathryn Bowen, BECS Inc, drbowen@hotmail.com |
| Session Title: Tying it Together: Developing a Web-based Data Collection System for a Multi-site Tobacco Initiative |
| Demonstration Session 717 to be held in D'Alesandro Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Presenter(s): |
| Stephanie Herbers, Saint Louis University, herberss@slu.edu |
| Nancy Mueller, Saint Louis University, mueller@slu.edu |
| Abstract: In 2004, a Missouri health foundation committed significant funding to establish a nine-year, comprehensive, multi-site initiative to reduce tobacco use in Missouri. Projects funded through the initiative share common goals, but vary in design, structure, and implementation. This can pose challenges when evaluating progress, including capturing data on the overarching goals; detecting broad, cross-site effects; and ensuring valid results. As the initiative evaluators, we will demonstrate the process we took to identify a minimum data set to be collected from each of the funded projects. We will also describe the development of a web-based data collection system that provides a centralized location for submission and access to the minimum data set by grantees, the evaluator, and the foundation. In addition, we will demonstrate the system and discuss lessons learned in creating a flexible and user-friendly data collection and management tool that can be used by multiple stakeholders. |
| Session Title: Contextuality in Needs Assessment: Attention to Divergent Needs |
| Multipaper Session 718 to be held in Calhoun Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Needs Assessment TIG |
| Chair(s): |
| Jeffry L White, Ashland University, jwhite7@ashland.edu |
| Discussant(s): |
| Deborah H Kwon, The Ohio State University, kwon.59@osu.edu |
| Session Title: Online Evaluation Systems: One-stop Shops for Administrators, Managers, and Evaluators? |
| Demonstration Session 719 to be held in McKeldon Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Integrating Technology Into Evaluation TIG |
| Presenter(s): |
| Susanna Kung, Academy for Educational Development, skung@aed.org |
| Paul Bucci, Academy for Educational Development, pbucci@aed.org |
| Abstract: With the No Child Left Behind (NCLB) Act and the public's increasing emphasis on accountability, school just got tougher, that is, for those in the business of education. Consequently, numerous online evaluation systems and tools have been developed in recent years that deliver data integration and data analysis services in real time to simplify data collection, analysis, and reporting across multiple sites; automate completion of annual performance reports (APRs); increase accountability; and facilitate data-driven decision making. This session provides a comprehensive demonstration of one such tool, the GEAR UP Online Evaluation System (GOES), which has been implemented in a number of states to track demographic, program participation, academic performance, and survey results at the individual student, parent, and teacher levels from multiple sites longitudinally. |
| Session Title: Assessing Strategic Alignment of Learning in Organizations Where Profits are Not the Bottom Line |
| Panel Session 720 to be held in Preston Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG |
| Chair(s): |
| Marlaine Lockheed, Independent Consultant, mlockheed@verizon.net |
| Abstract: This panel will discuss the need to assess the strategic alignment of knowledge and learning with broader organizational goals, the challenges of such assessments, and the methodologies for conducting them. Several World Bank evaluations will be discussed to illustrate the case of strategic learning assessments in large decentralized organizations whose results cannot be measured on a bottom line. |
| Session Title: Building Local Evaluation Capacity in K-12 Settings |
| Multipaper Session 721 to be held in Schaefer Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Katherine Ryan, University of Illinois at Urbana-Champaign, k-ryan6@uiuc.edu |
| Discussant(s): |
| Rita O'Sullivan, University of North Carolina, Chapel Hill, ritao@email.unc.edu |
| Session Title: Methodological Challenges and Solutions for Business and Industry Evaluators |
| Multipaper Session 722 to be held in Calvert Ballroom Salon B on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Business and Industry TIG |
| Chair(s): |
| Judith Steed, Center for Creative Leadership, steedj@leaders.ccl.org |
| Session Title: Get Those Data off the Shelf and Into Action: Encouraging Utilization Through Innovative Reporting Strategies |
| Skill-Building Workshop 723 to be held in Calvert Ballroom Salon C on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Evaluation Use TIG |
| Presenter(s): |
| Mindy Hightower King, Indiana University, minking@indiana.edu |
| Courtney Brown, Indiana University, coubrown@indiana.edu |
| Marcey Moss, Indiana University, marmoss@indiana.edu |
| Abstract: This session will provide practical strategies to increase utilization of evaluation results, promote program improvement, and help develop strategic vision among a variety of stakeholders. Innovative formats tailored to the diverse information needs of various stakeholder groups will be shared with participants, providing techniques for determining which format best meets stakeholder needs. Participants will be provided with examples used in external evaluation work with small programs and organizations as well as statewide coalitions and initiatives. In addition, participants will be provided with an opportunity to consider and develop alternative reporting strategies for their current projects. |
| Session Title: Do Serious Design Flaws Compromise the Objectivity and Credibility of the Office of Management and Budget's Program Assessment Rating Tool (PART) Evaluation Process? |
| Expert Lecture Session 724 to be held in Calvert Ballroom Salon E on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Government Evaluation TIG |
| Presenter(s): |
| Eric Bothwell, Independent Consultant, ebothwell@verizon.net |
| Abstract: The Office of Management and Budget's Program Assessment Rating Tool (PART) evaluation process has been used to assess federal programs for the past six budget cycles. Since its inception it has continued to receive both praise and criticism, but it has not been formally evaluated in the context of evaluation standards and has remained relatively stable in its design over the past several years. This session will address whether the PART evaluation process meets its own standard of being free of design flaws, as expressed in its Question 1.4: "Is the program design free of major flaws that would limit the program's effectiveness or efficiency?" |
| Session Title: Linking Smaller Learning Communities to Student Achievement and Related Outcomes Measures |
| Think Tank Session 725 to be held in Fairmont Suite on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Presenter(s): |
| Shana Pribesh, Old Dominion University, spribesh@odu.edu |
| Discussant(s): |
| Linda Bol, Old Dominion University, lbol@odu.edu |
| John Nunnery, Old Dominion University, jnunnery@odu.edu |
| Alexander Koutsares, Old Dominion University, akoutsares@odu.edu |
| Abstract: Creating smaller schools/learning communities (SLCs) has been advocated as a specific reform for improving high school student engagement and graduation rates (NRC, 2002). The linkages of smaller learning communities to student achievement have been found to be promising (Felner, Ginter, and Primavera, 1982; NRC, 2002). However, the research connecting the SLC structure with student performance is tenuous, mostly due to methodological issues. We propose a think tank to discuss strategies to evaluate the effect of smaller learning communities on student achievement. This think tank will use case studies to propose innovative, rigorous designs to yield more valid evaluation findings. In addition, we will identify other constructs (e.g., school climate, student self-concept and motivation) theoretically linked to achievement that could be employed as additional outcome measures. The think tank session will be useful for practitioners faced with evaluating smaller schools within schools in public school districts. |
| Session Title: Higher Education Assessment and Evaluation in a Context of Use and Policy Development |
| Multipaper Session 726 to be held in Federal Hill Suite on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Assessment in Higher Education TIG |
| Chair(s): |
| William Rickards, Alverno College, william.rickards@alverno.edu |
| Discussant(s): |
| Molly Engle, Oregon State University, molly.engle@oregonstate.edu |
| Session Title: Evaluation as an Agent of Program Change: An Example From Austria |
| Panel Session 727 to be held in Royale Board Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Research, Technology, and Development Evaluation TIG |
| Chair(s): |
| Klaus Zinoecker, Vienna Science and Technology Fund, klaus.zinoecker@wwtf.at |
| Abstract: This session demonstrates the use of evaluation as an agent of change in a major Austrian research program. The evaluation study from which it draws bears not only on program management, that is, how the program in question could be managed more effectively, but also on public policy, that is, whether the program should exist and whether it would benefit from restructuring. The program in question is the Austrian Genome Research Program, GEN-AU (GENome Research in AUstria), Austria's first top-down research grant program. The two presentations will treat the background of the evaluation study, its aims, methods, major findings, implications for program management and public policy, and observations about changes subsequently made in response to the findings. |
| Session Title: Leaving No Stone Unturned: Examining the Evaluation of a Statewide Program at the Local Level |
| Think Tank Session 728 to be held in Royale Conference Foyer on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG |
| Presenter(s): |
| Laura Feldman, University of Wyoming, lfeldman@uwyo.edu |
| Discussant(s): |
| Laura Feldman, University of Wyoming, lfeldman@uwyo.edu |
| Tiffany Comer Cook, University of Wyoming, tcomer@uwyo.edu |
| Shannon Williams, University of Wyoming, swilli42@uwyo.edu |
| Abstract: How does one define community readiness for change? How does one assess a program manager's passion, drive, commitment, and wisdom? How does one evaluate unexpected outcomes and incorporate local accomplishments into a statewide comparison? These questions relate to the University of Wyoming Survey and Analysis Center's (WYSAC) evaluation of Wyoming's Tobacco Prevention and Control Program. WYSAC evaluated 21 local programs to assess how well their outcomes correlated with state goals. The evaluation assessed individual community readiness to accept tobacco-related policies by using cultural, environmental, risk, and protective factors. The evaluation also documented community activities, including program manager capability, as well as community-specific outcomes. Attendees may choose to participate in one of three groups that will address each of the evaluation components: community readiness, program manager characteristics, and community outcomes. |
| Session Title: Consumer and Family Member Involvement in Evaluating Federally-Funded Initiatives |
| Multipaper Session 729 to be held in Hanover Suite B on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Chair(s): |
| Cindy Crusto, Yale University, cindy.crusto@yale.edu |
| Abstract: This session will highlight how consumers and family members of children served by three federally-funded system of care initiatives have been integrated into the evaluation of each. The first paper will describe family member participation in all aspects of evaluation decision making, data collection and management, and continuous quality improvement processes. The second paper will compare a consumer-led needs assessment that was conducted in an urban community with an evaluator-led needs assessment that occurred in the same community. The benefits of engaging consumers in the evaluation process, along with some of the struggles encountered, will be discussed. |
| Facilitating Family Member Involvement in the Evaluation of a Children's Mental Health Initiative |
| Cindy Crusto, Yale University, cindy.crusto@yale.edu |
| This paper will present the evaluation plan of a federally funded system of care initiative for children 11 years and younger with severe social, emotional, and behavioral health challenges and their families. A guiding principle of the federal funder and the statewide initiative focuses on family-driven practices and includes the participation and perspectives of family members at all levels of the initiative's development, implementation, and evaluation. The paper will focus on how family members of children with behavioral health challenges have been integrated into the evaluation process, including participation in evaluation decision making, collection and management of data, and involvement in the initiative's continuous quality improvement process. Strategies and lessons learned for increasing meaningful participation of family members and negotiating their roles as evaluation team members will be presented. |
| Comparison of Consumer-Led and Evaluator-Led Needs Assessments |
| Joy Kaufman, Yale University, joy.kaufman@yale.edu |
| This paper will compare a consumer-led needs assessment that was conducted in an urban community with an evaluator-led needs assessment that occurred in the same community. In the first assessment, six parents of children receiving services in the community were trained in all aspects of focus group assessment, including protocol development, facilitation, data coding and analysis, and data feedback. The second assessment was completed by a university-based evaluator. The presentation will highlight aspects of the parent training and review the methodology and results from both needs assessments. The presenter will also discuss the benefits and struggles that were encountered during each assessment. |
| Session Title: Increasing the Value of Items on a Measure: A Practitioner's Guide to Item Response Theory Analysis |
| Demonstration Session 730 to be held in Baltimore Theater on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Presenter(s): |
| Heather Chapman, EndVision Research and Evaluation, hjchapman@cc.usu.edu |
| Catherine Callow-Heusser, EndVision Research and Evaluation, cheusser@endvision.net |
| Abstract: High-stakes testing, increased accountability, and a focus on evidence-based designs and decision making all indicate that evaluators need to pay more attention to the quality of assessment instruments. Traditional statistical methods used in instrument development yield results with many limitations. Item response theory (IRT) offers a robust statistical technique that can be used in conjunction with, or as a replacement for, older and more traditional statistical methods when creating new assessment instruments. IRT has several benefits that often make it a more suitable choice for instrument development. Many evaluators are unaware of how to use IRT techniques or of the benefit of these techniques to evaluation goals. This demonstration session aims to introduce evaluators to IRT through an explanation of the statistical assumptions of IRT, a demonstration of how to use IRT statistical packages effectively, and an explanation of how to interpret and apply the results. |
| Session Title: Summative Confidence: How Accurate are Your Evaluative Conclusions? |
| Expert Lecture Session 731 to be held in International Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Brooks Applegate, Western Michigan University, brooks.applegate@wmich.edu |
| Presenter(s): |
| Cristian Gugiu, Western Michigan University, crisgugiu@yahoo.com |
| Abstract: One of the cornerstones of methodology is that "a weak design yields unreliable conclusions." While this principle is certainly true, the constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. To date, no quantitative or qualitative method exists for estimating the impact of sampling, measurement error, and design on the precision of an evaluative conclusion. Consequently, evaluators formulate recommendations and decision makers implement program and policy changes without full knowledge of the robustness of an evaluative conclusion. In light of the billions of dollars spent annually on evaluations and the countless millions of lives that are affected, the impact of decision error can be disastrous. This paper will introduce an analytical method that can be used to estimate the degree of confidence that can be placed on an evaluative conclusion and discuss the factors that impact the precision of a summative conclusion. |
| Session Title: A Discussion of AEA's Evaluation Policy Initiative |
| Panel Session 733 to be held in Versailles Room on Saturday, November 10, 9:35 AM to 10:20 AM |
| Sponsored by the AEA Conference Committee |
| Abstract: To Be Announced |
| William Trochim, Cornell University, wmt1@cornell.edu |
| Hallie Preskill, Claremont Graduate University, hallie.preskill@cgu.edu |
| George Grob, Center for Public Program Evaluation, georgeandsuegrob@cs.com |