Session Title: Supporting Value Judgments in Evaluations in the Public Interest
Panel Session 901 to be held in Pacific A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Government Evaluation TIG and the Presidential Strand
Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Discussant(s):
Michael Morris, University of New Haven, mmorris@newhaven.edu
Stephanie Shipman, United States Government Accountability Office, shipmans@gao.gov
Abstract: To better understand valuing in the public interest, it is important to encourage dialogue among evaluators in government and related organizations. This session features presentations by Francois Dumaine describing Canadian evaluations, Martin Alteriis discussing GAO evaluations, and Christina Christie and Anne Vo presenting a model of the role of evaluators in valuing.
Session Title: The Value of Organizational Modeling of Evaluation Protocols and Standards From the State and National Level in Extension
Multipaper Session 902 to be held in Pacific B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Karen Ballard, University of Arkansas, kballard@uaex.edu
Session Title: Fidelity Checks and Interpretation in Producing Evaluation Value
Multipaper Session 903 to be held in Pacific C on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
M H Clark, University of Central Florida, mhclark@mail.ucf.edu
Session Title: Increasing Knowledge and Use of Evaluation in the Nonprofit Sector
Multipaper Session 904 to be held in Pacific D on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Michelle Baron, The Evaluation Baron LLC, michelle@evaluationbaron.com
Roundtable Rotation I: Coalition Evaluation: The Power of Coalitions to Change a Cultural or Organizational Norm
Roundtable Presentation 905 to be held in Conference Room 1 on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Lesli Johnson, Ohio University, johnsol2@ohio.edu
Abstract: Using coalitions to address complex social problems has become a recognized strategy for change. We explore the importance of coalition evaluation by reviewing three initiative evaluations that employed coalitions as vehicles for changing cultural or organizational norms. We discuss the relationship between individual coalition characteristics, including coalition effectiveness, and the ability of multiple coalitions to achieve a change in a cultural or organizational norm. Three initiatives are reviewed in terms of their effectiveness in altering a community or organizational norm: a statewide tobacco use prevention initiative that used community coalitions to change the social acceptability of tobacco products; a statewide initiative that changed the organizational norm of local Adult Basic Literacy Education (ABLE) programs toward encouraging their students to pursue education and training beyond the high school equivalency exam; and a foundation initiative that promoted school-based childhood wellness and obesity prevention through the development of school wellness councils.
Roundtable Rotation II: 'Trickle Down' or 'Bubble Up': Using Evaluation to Build a Useful Model for Implementing a Policy of Collaboration
Roundtable Presentation 905 to be held in Conference Room 1 on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Rachael Lawrence, University of Massachusetts, Amherst, rachaellawrence@ymail.com
Sharon Rallis, University of Massachusetts, Amherst, sharonrallis@earthlink.net
Abstract: Evaluating collaboration between education providers and mental health workers serving institutionalized youth proved challenging due to complexity in, and power differentials across, the system. First, two state agencies mandated a policy that all providers and levels collaborate. Second, these agencies contracted multiple service providers, who hired site-level practitioners. Policy was expected to 'trickle down' through at least three levels to program operations. Further, each agency and level interpreted collaboration differently. What collaboration meant - or whether collaboration existed - was ultimately defined by varying practices at sites. As external evaluators, we quickly realized that simply measuring collaboration was not feasible, nor would it produce useful information for either program leadership or practitioners. Instead, we refocused on documenting practices in operation, from which we generated a theory-based model that proved useful for understanding and supporting a policy 'bubbling up' from practice. This roundtable will discuss how practitioners and leadership used our model.
Roundtable: Evaluation as a Methodology for Understanding and Enabling Interdisciplinary Team Science
Roundtable Presentation 906 to be held in Conference Room 12 on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Deana Pennington, University of Texas at El Paso, ddpennington@utep.edu
Allison Titcomb, ALTA Consulting LLC, altaconsulting@cox.net
Marcia Nation, Arizona State University, marcia.nation@asu.edu
Abstract: In recent years, there has been increasing emphasis in science and technology research on interdisciplinary team efforts, particularly in health, environmental, and global change research. These complex team efforts are fraught with difficulties that have been identified through numerous case studies. In this roundtable we will discuss opportunities for integrating Theory-Driven Developmental Evaluation methods with Design-Based Research methods to inform our understanding of interdisciplinary research teams, assess team effectiveness, and improve team outcomes. References: Coryn et al. (2010). A Systematic Review of Theory-Driven Evaluation Practice From 1990 to 2009. American Journal of Evaluation, online: http://aje.sagepub.com/content/early/2010/11/11/1098214010389321. Sandoval (2004). Developing Learning Theory by Refining Conjectures Embodied in Educational Designs. Educational Psychologist, 39(4), 213-223. Stokols et al. (2008). The Ecology of Team Science. American Journal of Preventive Medicine, 35(2S), S96-S115.
Session Title: Building Evaluation Capacity in International Development Programs: Theory and Practice
Multipaper Session 907 to be held in Conference Room 13 on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Maby Palmisano, ACDI/VOCA, mpalmisano@acdivoca.org
Session Title: Learning From Community Evaluations: Theory and Practice
Multipaper Session 908 to be held in Conference Room 14 on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Connie Walker, University of South Florida, cwalkerpr@yahoo.com
Session Title: Developing Effective Recommendations: From Theory to Action
Multipaper Session 909 to be held in Avila A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Edward McLain, University of Alaska, Anchorage, afeam1@uaa.alaska.edu
Discussant(s):
Edward McLain, University of Alaska, Anchorage, afeam1@uaa.alaska.edu
Session Title: Systems Thinking Evaluation Tools and Approaches for Measuring System Change
Multipaper Session 910 to be held in Avila B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Mary McEathron, University of Minnesota, mceat001@umn.edu
Roundtable Rotation I: Addressing Cultural Validity of Measurement and Evaluation Among Immigrant Youth for the Implementation of Program Development
Roundtable Presentation 911 to be held in Balboa A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Nida Rinthapol, University of California, Santa Barbara, rinthapol@gmail.com
Edwin Hunt, University of California, ehunt@education.ucsb.edu
Richard Duran, University of California, duran@education.ucsb.edu
Abstract: The focus of this study is the analysis and evaluation of validity in goal orientation (GO) measurement among secondary school students from low-income immigrant families participating in a college preparation program in Santa Barbara, CA schools. The notion of culturally appropriate measurement is crucial in the context of program evaluation. The study incorporates psychometric methods to examine the validity and reliability of the GO measure, the Pattern of Adaptive Learning Survey (PALS), among immigrant youth. Verifying the cultural validity of GO measurement helps us learn how students participating in the program learn and process information, how we can further improve the program by tailoring it to the needs of students from historically underrepresented groups, and how we can enhance their access to higher education.
Roundtable Rotation II: Evaluating the Implementation of a Culturally-based Intervention in Hawaii
Roundtable Presentation 911 to be held in Balboa A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Sarah Yuan, University of Hawaii, sarah.yuan@hawaii.edu
Mei-Chih Lai, University of Hawaii, meichih@hawaii.edu
Karen Heusel, University of Hawaii, kheusel@hawaii.edu
Abstract: The Hawaii Department of Health supported evidence-based programs to address underage drinking at the local level. This study focuses on Project Venture, a youth program demonstrated to decrease drinking among American Indian adolescents. Cultural adaptations, including translating the curriculum into the Hawaiian language, were integrated into the program to tailor it to the diverse cultures of Hawaii. This study is the first empirical evaluation of a culturally based prevention intervention to reduce underage drinking in Hawaii. The populations served, participants' characteristics, program design and adaptations, and participants' experience and satisfaction were examined through in-depth interviews, focus groups, and program surveys. Program outcomes were analyzed using data from pre- and post-surveys of participants. A repeated-measures GLM was used to examine 1) program effectiveness, 2) outcome differences among settings, and 3) factors for successful program implementation. Lessons learned are provided to assist future intervention practices in multiethnic communities.
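For orientation, the sketch below shows the general shape of a repeated-measures analysis of pre/post scores of this kind. It is a minimal illustration on simulated data, not the evaluation's actual analysis; all variable names and values are hypothetical.

```python
# Minimal sketch of a repeated-measures GLM on pre/post survey scores.
# The data below are simulated for illustration; they are not the
# Project Venture evaluation data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 40                                    # hypothetical number of participants
pre = rng.normal(50, 10, n)               # simulated pre-test scores
post = pre + rng.normal(5, 5, n)          # simulated post-test shift

long = pd.DataFrame({
    "participant": np.tile(np.arange(n), 2),
    "time": np.repeat(["pre", "post"], n),  # within-subject factor
    "score": np.concatenate([pre, post]),
})

# F test for the within-subject effect of time (pre vs. post)
result = AnovaRM(long, depvar="score", subject="participant", within=["time"]).fit()
print(result)
```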
Session Title: Pitfalls in Reporting Services and Coverage to a Donor Community Hungry for Positive Results
Think Tank Session 912 to be held in Balboa C on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Dale Hill, American Red Cross, hilldal@usa.redcross.org
Discussant(s):
Dale Hill, American Red Cross, hilldal@usa.redcross.org
Scott Chaplowe, International Federation of Red Cross and Red Crescent Societies, scott.chaplowe@ifrc.org
Gregg Friedman, American Red Cross, friedmang@usa.redcross.org
Abstract: Over the last decade, the international humanitarian and development community has given increasing attention to accountability. Program managers have called on analysts to focus on the design and measurement of impact indicators and higher-level outcomes, while assuming that data collection on "lower-level indicators" for outputs and coverage is more straightforward. However, large umbrella organizations face special challenges in guiding both data collection and reporting for key outputs, such as counting those reached. Issues such as double counting and distinguishing between direct and indirect recipients of services arise, particularly when organizations or branches operate in multiple sectors over different time periods in overlapping locations. This think tank will present the challenges experienced by the Red Cross Movement in reporting on programming in complex emergency settings (e.g., the tsunami and Haiti responses), as well as longer-term recovery and development interventions, inviting participants to share their own lessons learned with reporting and aggregation.
Session Title: Transferring Evaluation Experience Across Program Contexts: Discursive Evaluation With Two National Science Foundation ITEST Programs, Carnegie Mellon University's ACTIVATE and ALICE
Multipaper Session 913 to be held in Capistrano A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
Abstract: School reform programs often face an atmosphere of diversity and complexity in which communication and organizational learning are increasingly difficult. As evaluators, our challenge is to create common ground from which to speak about program inputs, implementation, and goals. We have found that discursive evaluation strengthens working relationships with clients and allows knowledge to be transferred and leveraged across projects. This session focuses on discursive evaluation with two National Science Foundation ITEST-funded programs of Carnegie Mellon University's computer science department. Our discursive relationship with the program teams has increased evaluation capacity across both projects, in both content expertise and effective evaluation practices in computing education. By investing in relationships with our clients, not only are the common educational endeavors strengthened but so too are individual capacities. The papers that follow describe the strengths and challenges that accompany a discursive evaluation approach.
The Role of the Activate Workshops in Teachers' Professional Growth and Student Learning: Measuring the Effectiveness of Teachers' Professional Development in Computer Science Within a K-12 Education Context
Yuanyuan Wang, University of Pittsburgh, yuw21@pitt.edu
The ACTIVATE year 1 evaluation included summer workshop surveys, a follow-up survey, and follow-up interviews. The instruments formed an integrated series of evaluation activities for K-12 teacher professional development in computer science. The summer workshop surveys consisted of a baseline survey for all participants, plus pre- and post-workshop surveys and a post-workshop skills assessment for each of the three workshops (Alice, Computational Thinking, and Java). The follow-up survey and interviews focused on teachers' implementation of workshop materials/activities and their impact on students' interest in their computer science course and in future computer-related careers. Findings indicated that the goals of ACTIVATE were substantially achieved. Specifically, teachers made good use of the workshop materials/activities in their classrooms and strengthened their content knowledge and skills in computer science through participation in the workshop(s), and this strengthened knowledge base benefited student learning and contributed to students' increased interest in computer-related careers.
Go Ask Alice: Faculty Mentoring and the Implementation of Alice 3.0 in Community College Contexts
Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
Cara Ciminillo, University of Pittsburgh, ciminill@pitt.edu
Community college faculties are notoriously disconnected. Thus, reform programs designed to change the way faculty teach and students learn must find a way to gain traction. For CMU's Alice program, the solution was the combination of a faculty mentor network and student interest in graphics, animation, and storytelling. The ALICE year 1 evaluation consisted of baseline and end-of-course surveys administered to the students of participating faculty at community colleges in New Jersey, Texas, and Pennsylvania. Courses in which ALICE was implemented included introductory, general, and advanced computer programming courses. At the end of the year, interviews with participating faculty and key program personnel were conducted to collect information on both the implementation of instructional practices and the experience of the ALICE mentoring network. The focus of the ALICE evaluation was threefold: student experience in courses utilizing Alice, faculty experience in those courses, and faculty perspectives on the Alice mentoring network.
Discursive Evaluation: A Process of Capacity Building for Both Evaluators and Program Leaders
Cara Ciminillo, University of Pittsburgh, ciminill@pitt.edu
Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
Having a discursive relationship with the program teams has increased evaluation capacity across both the ACTIVATE and ALICE projects, in both computer programming content expertise and effective evaluation practices in computing education. The transfer and application of knowledge has also helped to inform future funding proposals and has already informed the design of one of our newest projects, Duke Scale Up. By investing in relationships with our clients, not only are the common educational endeavors strengthened but so too are individual capacities. In this paper, we describe the strengths and challenges that accompany a discursive evaluation approach.
Session Title: Establishing Best Practices for Public Health Emergency Preparedness and Response Evaluation
Multipaper Session 914 to be held in Capistrano B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Brandi Gilbert, University of Colorado at Boulder, brandi.gilbert@colorado.edu
Session Title: Identifying and Using Evidence-Based Practices
Multipaper Session 915 to be held in Carmel on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Gintanjali Shrestha, Washington State University, gintanjali.shrestha@email.wsu.edu
Session Title: Evaluating Emerging Educational Technologies in K-12 and Higher Education
Multipaper Session 916 to be held in Coronado on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
S Marshall Perry, Dowling College, perrysm@dowling.edu
Abstract: This multipaper session examines the relative contributions that emerging educational technologies, such as online instruction and web-based assessment tools, might make to student academic growth, teacher practice, and assessment. Researchers will describe methodological challenges, promising research designs, the creation of quantifiable indicators, and the transition process toward effectively leveraging emerging technologies. Paper presentations will also discuss findings from a current evaluation of a K-12 online instructional program and from a study of a college's transition to online assessment and data management. We believe this research should be useful to educational service providers, K-12 school systems, higher education officers, and policymakers. While the findings of the studies themselves are of interest, the researchers hope the session encourages broad participation by attendees in a discussion of the methodological and ethical implications of evaluating educational technologies generally.
Evaluating Effective Technology-Enhanced Instruction Using the International Society for Technology in Education's National Educational Technology Standards
Maria Esposito, Molloy College, mesposito@molloy.edu
Maria Esposito, M.A., will discuss a framework for teacher evaluation using the International Society for Technology in Education's National Educational Technology Standards. She will describe how school systems can potentially operationalize the areas of creativity and innovation; communication and collaboration; research and information fluency; critical thinking, problem solving, and decision making; digital citizenship; and technology operations and concepts. She will also discuss essential conditions for supporting skilled pedagogy that effectively leverages existing technology. Maria currently teaches Instructional Technology at Molloy College on Long Island, New York. She has also served as a K-12 technology administrator and K-12 educator, and was a technology administrator at Cravath, Swaine & Moore LLP, one of the world's leading corporate law firms. Maria received her Master's Degree in Educational Communication and Technology at New York University and is currently in the dissertation phase of her Doctorate in Educational Administration at Dowling College.
Adopting Pass-Port: A Systems Approach to Changing Culture and Practices
Richard Bernato, Dowling College, bernator@dowling.edu
Richard Bernato, Ed.D., will describe the process the Dowling College School of Education undertook in 2009 to maintain its NCATE recognition status. The school recognized the need to formalize its use of a variety of data sources to diagnose and prescribe for program improvement. The process Dowling College used to choose the web-based data-gathering system Pass-Port, and to weave it into professional and leadership practices throughout the school, has many systems-based and high-involvement factors worthy of consideration. Dr. Bernato is the Assistant Dean of the School of Education. He serves the school as its NCATE Coordinator and is a member of the Department of Educational Leadership, Administration, and Technology. He also consults with school districts to facilitate strategic planning, shared decision making, and school improvement reform efforts. Previously, he served in many leadership roles in public education, including Assistant Superintendent for Educational Services.
Supplemental Learning Online for Middle School Students: An Evaluation and Discussion
S Marshall Perry, Dowling College, perrysm@dowling.edu
S. Marshall Perry, Ph.D., will discuss a federally funded evaluation of an online individualized tutoring service. The evaluation focused on supplemental instruction in reading and mathematics at the middle school level. Over two years, more than 700 middle school students were offered 25 hours of programming, including approximately 22 hours of instruction and three hours of assessments. The evaluation included a randomized controlled trial to determine the relationship between academic achievement and involvement in the program. Dr. Perry is an Assistant Professor at the Dowling College School of Education. He holds a Ph.D. in Administration and Policy Analysis from the Stanford University School of Education and a B.A. with distinction in Political Science from Yale University. Previously, Dr. Perry was a Senior Research Associate at Rockman et al, a research, evaluation, and consulting firm. Currently, he consults with public schools involved in restructuring to improve student achievement.
Online Versus Face-to-Face Learning in the College Classroom: A Discussion
Janet Caruso, Nassau Community College, jxc133@dowling.edu
Janet Caruso, M.B.A., will discuss the existing literature on online learning in college courses and the methodological and logistical challenges of conducting rigorous evaluations. For example, when comparing face-to-face to online classes, evaluators might have difficulty distinguishing programmatic effects from teacher effects, student characteristics, teacher orientation, testing effects, or subject matter applicability. Janet is currently the Dean of Business and Professional Education and an adjunct assistant professor at Nassau Community College. She is a member of the academic administrative team that focuses on the development of new programs, policies, and procedures to address the future needs of the College. She has been involved in higher education for over 25 years and has served in various academic and administrative positions at other institutions, including Chair of the Business Administration Department, Director of the Office of Adult Learning, and Dean of Faculty. Janet is currently pursuing her doctoral degree at Dowling College.
Session Title: HIV, Tuberculosis and Pregnancy Prevention: Methods and Strategies for Evaluation
Multipaper Session 917 to be held in El Capitan A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Herb Baum, REDA International Inc, drherb@jhu.edu
Session Title: Valuing Innovation in Democracy Assistance Evaluation
Panel Session 918 to be held in El Capitan B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Georges Fauriol, National Endowment for Democracy, georgesf@ned.org
Abstract: Democracy assistance presents a particular set of challenges to the field of evaluation. By their very nature, these types of projects and programs are extremely difficult to evaluate. Traditional methods are not always feasible given the conditions under which democracy assistance projects and programs take place. This has led democracy assistance organizations to explore innovative methods to monitor and evaluate their work. This panel will explore the evaluation innovations of a grantmaking organization and its four core institutes working in the field of democracy assistance around the world.
Session Title: Using Research Electronic Data Capture (REDCap) for Designing a Data Collection System in the Field
Demonstration Session 920 to be held in Huntington A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Teri Garstka, University of Kansas, garstka@ku.edu
Jared Barton, University of Kansas, jaredlee@ku.edu
Karin Chang-Rios, University of Kansas, kcr@ku.edu
Abstract: Collecting real-time data securely on a mobile device can pose many challenges, particularly when the data require HIPAA-compliance safeguards. Research Electronic Data Capture (REDCap) is a secure, web-based application designed exclusively to support data capture for research and evaluation. This demonstration will show how to use REDCap to design field assessments and collect data on an iPad in naturalistic settings such as homes or the community. This is a particularly useful way to establish rapport with participants and to ensure safe, secure collection of sensitive information. The session will provide an overview of REDCap features and its interface with mobile devices such as iPads and netbooks. REDCap also makes it easy to securely import case-level data from a service agency's Management Information System (MIS). We will demonstrate how this system works for our evaluation of a home visiting program for pregnant teens and their parents.
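For evaluators who script their data flows, imports of the kind described above can also be driven through REDCap's HTTP API rather than the web interface. The sketch below is illustrative only and assumes a REDCap instance with API access enabled for the project; the URL, token, and field names are hypothetical placeholders, not values from this demonstration.

```python
# Hedged sketch: importing case-level records into a REDCap project via
# its HTTP API. The endpoint URL, API token, and field names are
# hypothetical placeholders.
import json
import requests

REDCAP_URL = "https://redcap.example.edu/api/"  # placeholder instance URL
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"        # project-specific API token

# One illustrative case-level record exported from an agency MIS.
records = [
    {"record_id": "1001", "visit_date": "2011-11-05", "assessment_score": "12"},
]

response = requests.post(REDCAP_URL, data={
    "token": API_TOKEN,
    "content": "record",        # operate on the project's records
    "format": "json",
    "type": "flat",             # one row per record
    "data": json.dumps(records),
})
response.raise_for_status()
print("REDCap response:", response.text)  # a count of imported records on success
```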
Session Title: Elephants in the Evaluation Room: Managing Evaluations Amid Clashing Values of Program Staff and Professional Evaluators
Think Tank Session 921 to be held in Huntington B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Michelle Mandolia, United States Environmental Protection Agency, mandolia.michelle@epa.gov
Discussant(s):
Yvonne Watson, United States Environmental Protection Agency, watson.yvonne@epa.gov
Michelle Mandolia, United States Environmental Protection Agency, mandolia.michelle@epa.gov
Matt Keene, United States Environmental Protection Agency, keene.matt@epa.gov
Abstract: Evaluation results have the greatest potential to effect program or policy change when certain criteria have been met: 1) objectivity (quality control against subjective bias); 2) active stakeholder involvement in the evaluation; 3) attention to rigorous design issues appropriate for the evaluation context; and 4) granting the evaluator license to provide nuanced interpretation of complex results. Attention to these non-exhaustive characteristics of good evaluation illustrates that these values sometimes conflict, particularly when program staff members become heavily invested in shaping the final evaluation product. Those with evaluation management oversight must manage this conflict. Evaluation managers will discuss composite case examples in which program staff's vested interests in the evaluation interfere with the objectivity and rigor of the final evaluation. The presenters will solicit feedback and concrete strategies for negotiating this dilemma in breakout groups focusing on managing the evaluation process, evaluation design development, evaluation personnel, and evaluation results.
Session Title: Evaluating the Wicked: Implications of the "Wicked Problem" Concept for Program Evaluation and Organizational Leadership
Panel Session 922 to be held in Huntington C on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Gayle Peterson, Headwaters Group Philanthropic Services, gpeterson@headwatersgroup.com
Abstract: Many of the issues addressed by the philanthropies and public agencies that hire evaluators - issues like poverty, global epidemics, food security, and climate change - are classic "wicked problems": they have multiple root causes and defy clear definition; attempted solutions are a matter of judgment and are likely to generate new, unpredictable problems; and they involve many diverse stakeholders, all of whom have different ideas about the problem and its solutions. Since 1973, when Rittel and Webber introduced the idea, the concept of wicked problems has stimulated a large body of research and practice in fields such as management, planning, and organizational behavior. The presenters suggest that the concept has important implications for the values of the agencies attempting to address such problems, as well as for the design and conduct of program evaluations.
Session Title: Evaluating Technical Assistance Providers: Beyond Effectiveness to Measuring Impact
Panel Session 923 to be held in La Jolla on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
John Bosma, WestEd, jbosma@wested.org
Abstract: WestEd is conducting an evaluation of three comprehensive centers: one that serves a single state, one that serves a region, and one that provides content expertise to the other centers. Although each center operates uniquely, WestEd identified an evaluation framework that cuts across all three technical assistance centers. The panel will present a three-tier evaluation framework focused on 1) the assistance the centers provided, 2) how well the centers provided that assistance, and 3) the overall impact of the assistance. The framework acknowledges the unique features of each center while examining the effectiveness and impact of each center's technical assistance.
Session Title: The Science of Team Science: Advances in the Evaluation of Interdisciplinary Team Science From the National Institutes of Health (NIH) National Evaluation of the Interdisciplinary Research Consortia
Panel Session 924 to be held in Laguna A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Jacob Tebes, Yale University School of Medicine, jacob.tebes@yale.edu
Abstract: Interdisciplinary team science addresses complex public health challenges that cannot be addressed by a single discipline. Its emergence has fostered a new type of evaluation research: the science of team science. This emerging field focuses on understanding the structures, processes, and outcomes of team-based research, its benefits and limitations relative to single-discipline inquiry, and its capacity for achieving accelerated scientific innovation. In recent years, the National Institutes of Health, through its Roadmap for Medical Research, funded nine programs in its Interdisciplinary Research Consortium (IRC) program to conduct interdisciplinary team science. Each IRC included both local and national evaluation components, thus offering a unique opportunity for advancing the science of team science. This panel session describes methods and results of the national evaluation and selected local evaluations, presents essential tools and resources for use by evaluators, and discusses current trends and future directions in the science of team science.
Session Title: Utilizing Values and Context to Build Global Program Evaluation Competency: CDC's Field Epidemiology Training Program (FETP) for Non-Communicable Diseases and Injuries
Think Tank Session 925 to be held in Laguna B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Sue Lin Yee, Centers for Disease Control and Prevention, sby9@cdc.gov
Discussant(s):
Sue Lin Yee, Centers for Disease Control and Prevention, sby9@cdc.gov
Andrea Bader, Centers for Disease Control and Prevention, vbu6@cdc.gov
Abstract: CDC's Center for Global Health is developing training curricula on non-communicable diseases (NCDs) for Field Epidemiology Training Program (FETP) fellows in five countries. Participants may enroll in basic NCD courses such as epidemiology, surveillance, and prevention and control, and in advanced topics that include evaluation of interventions and surveillance systems. Second-year fellows may work with mentors to apply their evaluation knowledge and skills in the field by evaluating an intervention or program using field product guidelines. The National Center for Injury Prevention and Control and the Center for Global Health seek to examine two questions: 1) How can program evaluation modules effectively integrate the values and country context of fellows to enhance learning? and 2) How can field product guidelines better promote experiential and continued learning? Participants will be invited to provide feedback on the recently piloted evaluation module and offer insights on refining the approach or materials to further enhance learning and application.
Roundtable Rotation I: Collaborative Evaluation to Enhance Undergraduate Coursework and Prepare Future Teachers
Roundtable Presentation 926 to be held in Lido A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Leigh D'Amico, University of South Carolina, damico@mailbox.sc.edu
Vasanthi Rao, University of South Carolina, vasanthiji@yahoo.com
Abstract: The University of South Carolina and Midlands Technical College are collaborating to improve undergraduate coursework to better prepare pre-service teachers to effectively educate students with differing needs. Faculty members at both institutions have been evaluating syllabi of core early childhood education courses to examine content and to share strategies and resources for improving course delivery and student assessment. Upon conclusion of the syllabi evaluation and redesign, faculty will evaluate implementation of the enhanced syllabi to refine and further improve pre-service teacher preparation programs. Goals of the project include facilitating communication and collaboration between 2-year and 4-year higher education institutions; understanding the needs, values, and realities of undergraduate students; providing coursework and preparation that promotes young children's growth and development; and preparing pre-service teachers to work effectively in classrooms with diverse students with differing needs.
Roundtable Rotation II: Finding Chemistry With Science Faculty: Engaging Stakeholders in Evaluation of Student Learning
Roundtable Presentation 926 to be held in Lido A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Jennifer Lewis, University of South Florida, jennifer@usf.edu
Abstract: Successful learning in a college-level biochemistry course depends on correct understanding of a number of basic concepts from general chemistry and biology, but there are few existing high-quality measures. To further complicate the situation, college science faculty, who are key stakeholders, often do not value assessment. This talk discusses the collaborative process of instrument design and development undertaken as part of the evaluation of a curriculum reform project in biochemistry involving biology, chemistry, and biochemistry faculty from multiple institutions. The most current results across the project and the implications of this work will be discussed, including the importance of the collaborative development process as professional development for college science faculty and the value of using a pre/post diagnostic instrument to maintain awareness of the need for the project's work to continue.
Session Title: Evaluation Within Complex Health Systems
Multipaper Session 927 to be held in Lido C on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Kim van der Woerd, Reciprocal Consulting, kvanderwoerd@gmail.com
Session Title: Measuring Research Interdisciplinarity and Knowledge Diffusion
Multipaper Session 928 to be held in Malibu on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Alan Porter, Georgia Tech and Search Technology Inc, alan.porter@isye.gatech.edu
Discussant(s):
Jan Youtie, Georgia Tech, jan.youtie@innovate.gatech.edu
Abstract: Interest in the attributes of cross-disciplinary research and in the distribution of research knowledge is strong. This has inspired the introduction of several new measures of interdisciplinarity and research diffusion. This session brings together several explorations of the Integration and Diffusion scores, along with other measures and visualizations, to help understand their behavior. The first paper introduces the Diffusion score and examines its behavior in a substantial benchmarking exercise, augmented by in-depth lab studies. The second paper investigates development of a companion Integration score to gauge the diversity of patent sets. The third paper applies Integration and other scoring in research program assessment, comparing researcher- and proposal-level variations of the metric. We then discuss the opportunities and limitations of applying these measures in research evaluation, e.g., sensitivities to disciplinary citation norms, Web of Science coverage, temporal distributions, and so forth.
A New Measure of Knowledge Diffusion
Stephen Carley, Georgia Tech, stephen.carley@gmail.com
Alan Porter, Georgia Tech and Search Technology Inc, alan.porter@isye.gatech.edu
The Diffusion score is a new interdisciplinarity metric that assesses the degree to which research is cited across disciplines. It is the analog of the Integration score, which measures diversity among a given publication's cited references. Together these metrics enable tracking the transfer of research knowledge across disciplines and citation generations. The two scores share a consistent formulation based on the distribution of citations over Web of Science Subject Categories (SCs): the Integration score measures diversity among cited SCs; the Diffusion score measures diversity among citing SCs. Here we study the behavior of Integration and Diffusion scores for benchmark samples of research publications in six major fields (SCs) spanning the interests of the National Academies Keck Futures Initiative (NAKFI). We also probe their behavior via two laboratory-level analyses. Through long-term observation of these labs, and interviews with their senior investigators, we explore "exactly what" Integration and Diffusion scores are tapping.
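For readers new to these metrics, one common formulation in this family is Rao-Stirling diversity computed over Subject Category proportions; the sketch below illustrates that general form and is not necessarily the authors' exact implementation. The category shares and similarity values are invented for the example.

```python
# Minimal sketch of a Rao-Stirling-style diversity score, the general form
# behind Integration/Diffusion-type metrics: applied to the SCs of cited
# references it acts like an Integration score; applied to the SCs of
# citing papers, like a Diffusion score. All numbers are illustrative.
import numpy as np

def diversity_score(proportions, similarity):
    """Return 1 - sum_ij s_ij * p_i * p_j over Subject Category pairs.

    proportions: p_i = share of cited (or citing) items in category i.
    similarity:  s_ij = similarity between categories i and j, e.g.,
                 cosine similarity of their co-citation profiles.
    """
    p = np.asarray(proportions, dtype=float)
    s = np.asarray(similarity, dtype=float)
    return 1.0 - p @ s @ p

# Three hypothetical Subject Categories with made-up similarities.
p = [0.5, 0.3, 0.2]                 # citation shares across the three SCs
s = np.array([[1.0, 0.8, 0.1],      # SC1 and SC2 are close; SC3 is distant
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
print(f"diversity = {diversity_score(p, s):.3f}")  # higher = more interdisciplinary
```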
Analyzing the Effect of Interdisciplinary Research on Patent Evaluation: Case Studies in NBSs and DSSCs
Wenping Wang, Beijing Institute of Technology, wangwenping1009@gmail.com
Alan Porter, Georgia Tech and Search Technology Inc, alan.porter@isye.gatech.edu
Ismael Rafols, University of Sussex, i.rafols@sussex.ac.uk
Nils Newman, Intelligent Information Services, newman@iisco.com
Yun Liu, Beijing Institute of Technology, liuyun@bit.edu.cn
Policies facilitating interdisciplinary research (IDR) appear to be based more on conventional wisdom than on empirical evidence. This study examines whether IDR leads to higher technological performance. Patents, as major outputs of technological invention, are adopted as the representative measure of technological performance. To look into the relationship between IDR and patent evaluation, we address "patent quality" and "IDR." Disciplinary diversity indicators of patents are developed from the properties of variety, balance, and similarity. Basing the research on patent abstract documents, we evaluate patent quality along three dimensions: technology, market, and legal. We then examine correlations between the diversity and patent quality measures. The case study builds on the emerging domains of nanobiosensors and dye-sensitized solar cells. By devising patent metrics commensurate with publication measures, the comparison also informs the relationship between research (publication) and patent activity.
Measuring Interdisciplinarity: A Unique Comparison Between the Researcher and Research Proposal
Asha Balakrishnan, IDA Science & Technology Policy Institute, abalakri@ida.org
Vanessa Pena, IDA Science & Technology Policy Institute, vpena@ida.org
Bhavya Lal, IDA Science & Technology Policy Institute, blal@ida.org
Measuring the interdisciplinarity of researchers and research teams is of major interest to agencies funding interdisciplinary research programs. One particular program funding potentially transformative research through interdisciplinary team science at a key R&D agency asked the Science and Technology Policy Institute (STPI) to conduct an assessment of the program's awards funded between FY2007 and FY2009. In this talk, we present a unique analysis that compares the interdisciplinarity of individual researchers with the interdisciplinarity of their awarded proposals funded by this program. We will describe the methodology for measuring both the interdisciplinarity of the principal investigator's publication history and the interdisciplinarity of the proposals, and compare how a PI's interdisciplinarity maps to his/her proposal's interdisciplinarity.
Session Title: Juxtapositions in Evaluations With Explicit Human Rights and Social Justice Values
Panel Session 929 to be held in Manhattan on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the AEA Conference Committee
Chair(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Abstract: When evaluators position themselves within an explicit social justice and human rights value system, this has implications for how the evaluation is planned, implemented, and used. Tensions frequently arise when these values are made explicit, rooted in stakeholder groups' differing perceptions of the need for evaluations to be value-free; the purpose of doing an evaluation when the project is “over”; the purpose of the program; and the effects of including culturally appropriate rituals and manners of interaction on the rigor of the evaluation. Each of these sources of tension will be illustrated by juxtaposing the differences in viewpoints, followed by presentations addressing each of them through role plays, use of data to stimulate action, visual displays, and re-enactment of appropriate cultural rituals and manners of interaction. The focus will be on constructive strategies for addressing the tensions in explicitly value-laden evaluations with ethnic/racial minorities, deaf people, people with disabilities, women in developing countries, and indigenous peoples.
Session Title: Tools and Ideas for Mainstreaming Evaluation
Demonstration Session 930 to be held in Monterey on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Amy Gullickson, Western Michigan University, amy.m.gullickson@wmich.edu
Abstract: As an evaluator, helping an organization or program mainstream evaluation into its culture, thinking, practices, and systems enables you and the staff to get and use information to (i) create strategies and program designs that meet the needs of your stakeholders, (ii) improve processes and programs, and (iii) understand outputs, outcomes, and impact. In my dissertation research, I studied four organizations that were mainstreaming evaluation. I witnessed how integrating evaluation into the daily life of the staff benefited everyone through a reduced evaluation workload and through increases in the accessibility of good data, staff's appreciation of evaluation processes, and their use of findings. Attendees of this demonstration session will learn about a variety of tools and ideas found in my research that they can adapt, integrate, and use in their own evaluation efforts to realize these benefits.
Session Title: Rights and Poverty Assessments: Using New Impact Assessment Tools for Community-Company Engagement and Accountability
Panel Session 931 to be held in Oceanside on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Business and Industry TIG
Chair(s):
Gabrielle Watson, Oxfam America, gwatson@oxfamamerica.org
Abstract: We present two experiences using new approaches to assess the impacts of private sector actors on local communities, in order to foster greater understanding and an improved basis for engagement and better development outcomes. The first, Human Rights Impact Assessments (HRIAs), have emerged in recent years as a tool for corporate accountability. Oxfam America has used a community-based methodology, developed by Rights & Democracy of Canada, to put knowledge and power in the hands of communities and the organizations working with them. The second, Oxfam's Poverty Footprint methodology, builds on concepts like 'food miles' and 'environmental footprints' to assess how businesses affect communities touched by their value chains. Panelists present Oxfam's interest in pursuing these assessment methodologies and the experiences of organizations applying them in actual cases.
Session Title: What Works - and for Whom? Value and Relevance in Arts and Culture Evaluation
Multipaper Session 932 to be held in Palisades on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
Session Title: Using Evaluation to Support Innovation and Strategic Planning in Large Scale National and International Environmental Programs
Multipaper Session 933 to be held in Palos Verdes A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Kara Crohn, Research Into Action, karac@researchintoaction.com
Session Title: Appreciative Inquiry and Evaluation: The "What?" and the "How?" in Building Evaluation Capacity
Skill-Building Workshop 934 to be held in Palos Verdes B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Anna Dor, Claremont Graduate University, annador@hotmail.com
Abstract: Appreciative Inquiry (AI) is an approach that looks at what is working well in organizations by engaging people to examine the best of their past experiences in order to imagine the future they want and find the capacity to move into that future. In AI, language that describes deficiencies is replaced by positive questions and approaches. While AI originated as an organizational behavior intervention, its application as an evaluation tool builds evaluation capacity and enables an organization to become a learning organization. In AI, all stakeholders are actively involved in the evaluation process, which eliminates the "us" vs. "them" perception. This workshop is designed to help participants learn about Appreciative Inquiry and how they can facilitate an AI in their organizations as internal/external evaluators, consultants, and leaders.
Session Title: Making Contribution Analysis Work: The Benefits of Using Contribution Analysis in Public Sector Settings
Multipaper Session 935 to be held in Redondo on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Steve Montague, Performance Management Network, steve.montague@pmn.net
Abstract: John Mayne's introduction of contribution analysis (CA) has attracted widespread attention within the global evaluation community. Yet despite this attention, as Mayne himself notes (2011), few published studies involve the systematic application of contribution analysis. This raises a number of questions: How can we bridge the divide between the sustained interest in CA and its actual application? How can we make CA work? What are the actual benefits of CA? The aim of this session is twofold. First, contributors to this debate will convene and discuss the similarities and differences in their practical implementations of contribution analysis - how to make contribution analysis work. Second, they will discuss the benefits of using contribution analysis.
Contribution Analysis: What is it?
Sebastian Lemire, Ramboll Management Consulting, setl@r-m.com
Steve Montague, Performance Management Network, steve.montague@pmn.net
Contribution analysis appears to have strong potential in the dynamic and complex environments typically faced in public enterprise. The first presentation briefly outlines the main steps and methodological tenets of contribution analysis. How is it done? How is it different from other approaches in theory-driven evaluation? What types of causal claims are supported by CA? The presentation will introduce some of the key challenges to making CA work in public sector settings.
The Participatory Approach to Contribution Analysis
Steve Montague, Performance Management Network, steve.montague@pmn.net
The use of a participatory approach to contribution analysis, in the form of 'Results Planning', transforms CA into a 'generative' group learning (and at least partially inductive) process. The presentation will showcase a means of telling the performance story and an 'alternative' approach to judging the evidence for attribution. The notion developed in this paper is that evidence for the attribution of impacts and effects is assessed as one would assess the evidence in a court case (see Patton 2008 for the suggested use of this concept in advocacy). In situations of high complexity, the 'court' would be more akin to a civil trial, judging on the balance of evidence, as opposed to a criminal court, rendering a judgment beyond a reasonable doubt.
Engaging the Client Through Contribution Analysis: Reflections on Practical Experiences and Benefits
Line Dybdal, Ramboll Management Consulting, lind@r-m.com
Sebastian Lemire, Ramboll Management Consulting, setl@r-m.com
This presentation outlines the presenters' current use of and experiences with CA in the Danish public sector. In his concept of the "embedded theory of change" (2011), Mayne addresses the underlying assumptions and risks, external factors, and principal competing explanations embedded in the program being evaluated. This is a central step in CA, as one can only infer credible and, we would argue, useful contribution stories if the embedded theory of change accounts for other influencing factors and rules out alternative explanations. However, CA in its current manifestation is not very prescriptive about how to do this in practice. First, the presenters will discuss their experiences engaging clients in the "embedded theory of change" by involving them in assembling and assessing the contribution story. Second, the presenters will discuss the client benefits of engaging in contribution analysis.
Session Title: Evaluation of STEM Programs
Multipaper Session 936 to be held in Salinas on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Howard Mzumara, Indiana University Purdue University Indianapolis, hmzumara@iupui.edu
Session Title: Before It's Too Late: Lessons Learned From Strategic Learning Evaluations of Advocacy Efforts
Panel Session 937 to be held in San Clemente on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Sarah Stachowiak, Organizational Research Services, sarahs@organizationalresearch.com
Discussant(s):
Sarah Stachowiak, Organizational Research Services, sarahs@organizationalresearch.com
Huilan Krenn, WK Kellogg Foundation, hyk@wkkf.org
Abstract: Advocates for policy change - in this country and around the world - are impassioned and focused. They may resist advice about strategy from "outsiders" like evaluators. Effective formative evaluation tools and approaches can help them see the value of evaluation and improve their strategies. This session will share methods, tools, and approaches for working with clients to use evaluative data and lessons learned to adjust advocacy strategy. We will also examine how we as evaluators - in turn - adjust our approach in response to our clients' changes in strategy and to changes in the political or social environment. Speakers will draw on experiences in the United States, Europe, and Africa helping clients and partners make mid-course corrections to their policy change efforts.
Session Title: Participatory Methodologies: Innovations and Effectiveness in Impact Evaluation
Think Tank Session 938 to be held in San Simeon A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Cheryl Francisconi, Institute of International Education, Ethiopia, cfrancisconi@iie-ethiopia.org
Amparo Hoffman-Pinilla, New York University, ahp1@nyu.edu
Discussant(s):
Judith Kallick Russell, Independent Consultant, jkallickrussell@yahoo.com
| Abstract: How can Action Research contribute to evaluation methodologies? What is the relevance of innovative and traditional methodologies for assessing the impact of programs in a particular field? In this session we will introduce the discussion topic by briefly sharing the innovative methodology of a Participatory Action evaluation conducted by New York University's Research Center for Leadership in Action of the Institute of International Education's Leadership Development for Mobilizing Reproductive Health Program based in Ethiopia, India, Nigeria, Pakistan and the Philippines. Group discussions will focus on AEA participants' experiences with: a) the advantages and challenges of a participatory approach to evaluation; and b) the task of evaluating programs to measure their influence or impact. The session will close with a group reflection articulating lessons learned for using participatory methodologies in program evaluation. |
| Session Title: Evaluating Intra- and Inter-institutional Collaboration to Enhance Student Learning |
| Multipaper Session 939 to be held in San Simeon B on Saturday, Nov 5, 12:35 PM to 2:05 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Tara Shepperson, Eastern Kentucky University, tara.shepperson@eku.edu |
| Discussant(s): |
| Chad Green, Loudoun County Public Schools, chad.green@loudoun.k12.va.us |
| Roundtable Rotation I: The 'Roadmap to Effectiveness': A Discussion on Assessing Evaluation Readiness, Program Development & Valuing Multiple Voices |
| Roundtable Presentation 940 to be held in Santa Barbara on Saturday, Nov 5, 12:35 PM to 2:05 PM |
| Sponsored by the Internal Evaluation TIG |
| Presenter(s): |
| Lisa M Chauveron, The Leadership Program, lisa@tlpnyc.com |
| Amanda C Thompkins, The Leadership Program, athompkins@tlpnyc.com |
| Abstract: The Leadership Program is an urban youth development organization that serves more than 18,000 youth, 6,000 parents, and 500 teachers annually. The Leadership Program encourages staff to innovate within current programs and to develop new ones to best meet participants' needs. As a result, the internal evaluation team reviews programs that run the gamut of program development and evaluation readiness, yet grants and contracts require outcomes for every program. To help our staff understand the range of evaluation options available, we created an internal tool called the Roadmap to Effectiveness, which gives voice to multiple stakeholders and identifies seven stages of program development ranging from exploratory (for pilot programs) to boxing (for programs that have been tested at scale). This roundtable session will review the Roadmap tool and discuss the process and politics of evaluating multiple programs at different points of development, including how to reconcile clashing values. |
| Roundtable Rotation II: Ensuring Objectivity in Evaluation: Merging Continuous Quality Improvement and Outcome Evaluation Using a Triad Approach |
| Roundtable Presentation 940 to be held in Santa Barbara on Saturday, Nov 5, 12:35 PM to 2:05 PM |
| Sponsored by the Internal Evaluation TIG |
| Presenter(s): |
| Tamika Howell, Harlem United Community AIDS Center Inc, thowell@harlemunited.org |
| Abstract: Harlem United Community AIDS Center, Inc (HU) routinely incorporates a tri-modal approach (the Triad) to organizing and focusing on the most critical information needed to ensure objective and effective decision-making, problem solving, and program management in a setting where data is crucial to the longevity of programs and service to clients. This roundtable will present a general overview of the Triad, focusing specifically on the integration of continuous quality improvement (CQI) and outcome evaluation. The discussion will offer members of the evaluation community the opportunity to share their experiences and to participate in a dialogue about strategies for ensuring an objective approach to internal evaluation. Participants will also be asked to provide feedback on the Triad and offer suggestions for strengthening the link between CQI and outcome evaluation. |
| Session Title: Making Sense of Measures |
| Panel Session 941 to be held in Santa Monica on Saturday, Nov 5, 12:35 PM to 2:05 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Mende Davis, University of Arizona, mfd@u.arizona.edu |
| Abstract: In every evaluation, there is an overabundance of measures to choose from. Which instrument should we choose? What can we use to guide our decisions? Evaluators often select instruments based on their previous experience, ready access to an instrument, its cost, whether the evaluation staff can administer the measure, and the time and effort required to collect the data. Every evaluation has a budget, in time, money, and respondent burden, and measures must fit within that budget. When inundated with measures, it is hard to see the differences among them. We suggest that a taxonomy can be used to categorize social science methods and guide a more effective selection of study measures. Some methods require considerable effort to collect; others are simple and inexpensive. Low-cost alternatives are often overlooked. In this panel, we will provide an overview of method characteristics and how to take advantage of them in evaluation designs. |
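As an illustration only (not material from the panel itself), one way such a taxonomy might be represented is as structured records filtered against an evaluation's budget. The dimensions below (cost, respondent burden, staff effort) follow the abstract, but every concrete measure, number, and name is invented for this sketch:

```python
# Hypothetical sketch only; the panel's actual taxonomy is not reproduced here.
# Every measure, cost, and burden value below is invented for illustration.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    cost_usd: float       # estimated cost per administration
    burden_minutes: int   # respondent time required
    staff_effort: str     # "low", "medium", or "high"

CATALOG = [
    Measure("validated survey scale", 2.0, 15, "low"),
    Measure("structured classroom observation", 40.0, 0, "high"),
    Measure("administrative records extract", 5.0, 0, "medium"),
]

def within_budget(catalog, max_cost_usd, max_burden_minutes):
    """Filter the catalog to measures that fit the evaluation's budget,
    surfacing the low-cost, low-burden alternatives that are easy to overlook."""
    return [
        m for m in catalog
        if m.cost_usd <= max_cost_usd and m.burden_minutes <= max_burden_minutes
    ]

if __name__ == "__main__":
    for m in within_budget(CATALOG, max_cost_usd=10.0, max_burden_minutes=20):
        print(m.name)
```

Representing measures as comparable records is what makes the abstract's point operational: once cost and burden are explicit attributes, the inexpensive options stop being invisible.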
| |||||
| |||||
| |||||
|
| Session Title: Integrating Realist Evaluation Strategies to Achieve 100% Evaluation of all Education, Social Work, Health, Youth Justice and Other Human Services: Example of Chautauqua County, NY and Moray Council, Scotland |
| Demonstration Session 942 to be held in Sunset on Saturday, Nov 5, 12:35 PM to 2:05 PM |
| Sponsored by the Human Services Evaluation TIG and the Social Work TIG |
| Presenter(s): |
| Mansoor Kazi, State University of New York, Buffalo, mkazi@buffalo.edu |
| Rachel Ludwig, Chautauqua County Department of Mental Hygiene, mesmerr@co.chautauqua.ny.us |
| Jeremy Akehurst, The Moray Council, Scotland, jeremy.akehurst@moray.gov.uk |
| Anne Bartone, University at Buffalo, bartonea@hotmail.com |
| Abstract: This demonstration will illustrate how realist evaluation strategies can be applied in the evaluation of 100% natural samples in schools, health, youth justice and other human service agencies for youth and families. These agencies routinely collect data that is typically not used for evaluation purposes. This demonstration will include new data analysis tools drawn from both the efficacy and epidemiology traditions to investigate patterns in this data in relation to outcomes, interventions and the contexts of practice. For example, binary logistic regression can be used repeatedly with whole school databases at every marking period to investigate the effectiveness of school-based interventions and their impact on school outcomes. The demonstration will include practice examples drawn from the SAMHSA funded System of Care that has enabled a 100% evaluation of over 40 agencies in Chautauqua County, New York State; and education, social work and youth justice services in Moray Council, Scotland. |
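As an illustration only, the following is a minimal Python sketch of the kind of repeated binary logistic regression the abstract describes, assuming a hypothetical pandas DataFrame of per-student records; it is not the presenters' actual tooling, and all column names are invented:

```python
# Illustrative sketch only, not the presenters' actual analysis tools: one
# binary logistic regression per marking period over a whole-school (100%)
# dataset. Column names ("marking_period", "received_intervention",
# "passed_all_courses", "prior_gpa") are invented for this example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def intervention_effect_by_period(df: pd.DataFrame) -> pd.DataFrame:
    """Refit the model at each marking period and report the odds ratio and
    p-value for a binary school-based intervention flag."""
    rows = []
    for period, subset in df.groupby("marking_period"):
        model = smf.logit(
            "passed_all_courses ~ received_intervention + prior_gpa",
            data=subset,
        ).fit(disp=False)
        rows.append({
            "marking_period": period,
            "odds_ratio": np.exp(model.params["received_intervention"]),
            "p_value": model.pvalues["received_intervention"],
            "n_students": int(model.nobs),
        })
    return pd.DataFrame(rows)
```

Refitting at every marking period, rather than once at the end, is what lets routinely collected whole-population data reveal when, and in which contexts, an intervention's contribution to outcomes changes.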
| Session Title: Experimental and Quasi-experimental Designs in Educational Evaluation |
| Multipaper Session 943 to be held in Ventura on Saturday, Nov 5, 12:35 PM to 2:05 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org |
| Discussant(s): |
| Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org |