Session Title: Research on Evaluation TIG Business Meeting
Business Meeting Session 567 to be held in International Ballroom A on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Research on Evaluation TIG
TIG Leader(s):
Tarek Azzam, University of California, Los Angeles, tazzam@ucla.edu
Christina Christie, Claremont Graduate University, tina.christie@cgu.edu
Session Title: Yes, When Will We Ever Learn? How Evaluators Can Learn Better Ways to Understand Cause and Effect
Expert Lecture Session 569 to be held in International Ballroom C on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Presenter(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Discussant(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Abstract: The substantial international efforts currently underway to improve the quality of evaluations, particularly in international development, have drawn attention to inadequacies in providing credible evidence of impact - most notably in the report "When Will We Ever Learn?". Remarkably, these efforts have focused almost exclusively on the use of randomized controlled trials, with little or no recognition of their limitations or of the development of alternatives better suited to evaluating complex interventions in open implementation environments. This session turns the question of evaluation and learning onto the evaluation community itself and asks why the theory and practice of evaluation have been so slow to learn from current scientific thought and remain largely bogged down in outdated approaches to causal attribution. Advocates of so-called scientific approaches to impact evaluation rely exclusively on the counterfactual argument for causal attribution - developing information about what would have happened in the absence of the intervention. This type of analysis fails to take into account more complex causal relationships - such as where an intervention is necessary but not sufficient (with other contributing factors needed for success), or sufficient but not necessary (with alternative causal paths available), or where the causal relationships are ones of interdependence rather than simple linear causality. This paper compares examples of the logic and methods of causal analysis in traditional 'scientific' evaluation with those that draw on complexity science. It discusses possible reasons for the failure of advocates of 'scientific' evaluation to learn from current scientific thinking, and how this might be done.
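The causal patterns named in this abstract can be made concrete with a toy truth table. The following Python sketch is purely illustrative (the scenario and factor names are hypothetical, not drawn from the presentation); it enumerates outcomes for a necessary-but-not-sufficient cause and a sufficient-but-not-necessary one:

```python
from itertools import product

# Hypothetical illustration of two causal patterns that a simple
# counterfactual comparison can misread.

def necessary_not_sufficient(intervention, other_factor):
    # The outcome requires BOTH the intervention and a contributing factor.
    return intervention and other_factor

def sufficient_not_necessary(intervention, other_factor):
    # Either causal path alone produces the outcome.
    return intervention or other_factor

for intervention, other_factor in product([False, True], repeat=2):
    nns = necessary_not_sufficient(intervention, other_factor)
    snn = sufficient_not_necessary(intervention, other_factor)
    print(f"intervention={intervention!s:5}  other_factor={other_factor!s:5}  "
          f"necessary-but-not-sufficient={nns!s:5}  "
          f"sufficient-but-not-necessary={snn}")
```

In the first pattern, removing the intervention changes the outcome only where the contributing factor is present; in the second, the no-intervention comparison group can still reach the outcome through the alternative path, so a with/without contrast makes a fully sufficient intervention look weak. Neither pattern is well captured by a single average counterfactual difference.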
Session Title: Exploring the Sacrifice Fly Phenomenon in Evaluation Use
Think Tank Session 570 to be held in International Ballroom D on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Evaluation Use TIG
Presenter(s):
Emmalou Norland, Institute for Learning Innovation, norland@ilinet.org
Discussant(s):
Joe Heimlich, The Ohio State University, heimlich.1@osu.edu
Beverly Sheppard, The Institute for Learning Innovation, sheppard@ilinet.org
Julia Washburn, National Park Service, julia@jlwashburn.com
Abstract: The Parks as Resources for Knowledge in Science (PARKS) project evaluation, completed in 2000, is one of the largest and most sophisticated cluster evaluations ever conducted of US National Park Service education programs. A stakeholder approach guided the planning, implementation, and sharing of the findings. Methodologically, it had all the bells and whistles. Results showed impact across programs. So why, several years later, were the evaluation and its results relatively unknown to those who could have benefited the most? Evaluators followed all the 'rules' that should have placed it in the 'home run' category of evaluation use. Instead, until October 2006, it wasn't even sitting on the shelves of likely users of the information. Building on this example and others, participants in the think tank will wrestle with the issues raised by evaluations in which immediate evaluation use seems to be sacrificed for future evaluation influence.
Session Title: Lessons Learned From Evaluation Practice
Multipaper Session 571 to be held in International Ballroom E on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Gary Miron, Western Michigan University, gary.miron@wmich.edu
Session Title: Measuring Effectiveness, Efficiency, and Sustainability in Innovative Health Programs Reaching the Underserved
Multipaper Session 572 to be held in Liberty Ballroom Section A on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Samuel Bickel, United Nations Children's Fund, sbickel@unicef.org
Session Title: Learning (More) About Evaluation: Unfinished Business
Panel Session 573 to be held in Liberty Ballroom Section B on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@uiuc.edu
Abstract: This panel invites the audience to think about two ways in which we learn about and develop a self-understanding of evaluation. One commonly accepted self-understanding of evaluation is that it is a logic, set of methods, procedures, and evaluation models used by an individual evaluator (agent) or team of evaluators (agents) to judge merit and worth, and that evaluations are 'used' for the purposes of 'improvement' or 'betterment' (variously understood). Another way of learning about evaluation is to regard it as a socially constituted discursive practice (or set of practices) and to ask 'what is accomplished in the name of evaluation' and 'how do social practices of evaluation shape other social practices in education, health care, public administration, and social service.' In this panel we draw out differences between these self-understandings of evaluation and point to some consequences for what it means to 'learn about' evaluation.
Session Title: Strategies for Building and Evaluating Organizational Capacity: A Case Study of 30 Children's Residential Homes Utilizing Strategies to Address Childhood Obesity
Panel Session 574 to be held in Mencken Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Toni Freeman, The Duke Endowment, tfreeman@tde.org
Discussant(s):
Toni Freeman, The Duke Endowment, tfreeman@tde.org
Abstract: Structural or environmental interventions demonstrate promise in addressing obesity in young people. However, these interventions, which focus on organizational change rather than individual change, are challenging to design, implement, and evaluate. Effective programs of this type require the development and maintenance of numerous partnerships, including participating organizations, funders, and other key stakeholders. This presentation will describe the approach utilized in the ENRICH (Environmental Interventions in Children's Homes) Duke Endowment Wellness Program for developing and evaluating a structural intervention to promote and support physical activity and healthful nutrition among children and adolescents residing in approximately 30 residential children's homes (RCHs) in North Carolina and South Carolina. The presenters will discuss the successful implementation of the processes described above. Additionally, they will provide copies of the project's conceptual framework, logic model, and comprehensive evaluation plan. Copies of project instruments will also be available for participants to review.
Session Title: The South Central Center for Public Health Preparedness Training Evaluation Process: A Comprehensive Approach to Evaluating the Effectiveness of Emergency Preparedness and Response Training
Demonstration Session 575 to be held in Edgar Allan Poe Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Presenter(s):
Sue Ann Sarpy, Tulane University, ssarpy@tulane.edu
Laurita Santacaterina, Tulane University, lsantaca@tulane.edu
Abstract: The literature has long recognized the need for comprehensive, systematic evaluations of training effectiveness with respect to training-related knowledge, performance, and desired outcomes. To address this need, an evaluation process was established for assessing the effectiveness of training delivered at the South Central Center for Public Health Preparedness (SCCPHP). The SCCPHP is an academic/practice partnership that provides competency-based training via distance delivery methods to prepare the public health workforce to respond to public health threats and emergencies, including biological, chemical, nuclear, and radiological events, terrorism, and mass trauma. This presentation will demonstrate the SCCPHP training evaluation process, including the standardized measures and procedures associated with its use. Applied examples of using the SCCPHP evaluation process to assess the effectiveness of various emergency preparedness and response training initiatives will also be presented.
Session Title: Arkansas Evaluation Center and Empowerment Evaluation: We Invite Your Participation as We Think About How to Build Evaluation Capacity and Facilitate Organizational Learning in Arkansas
Think Tank Session 576 to be held in Carroll Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
David Fetterman, Stanford University, profdavidf@yahoo.com
Discussant(s):
Linda Delaney, University of Arkansas, linda2inspire@earthlink.net
Abstract: A new Arkansas Evaluation Center will be housed at the University of Arkansas at Pine Bluff. The Center emerged from empowerment evaluation training efforts in a tobacco prevention program (funded by the Minority Initiated Sub-Recipient Grant's Office). The aim of the Center is to help others help themselves through evaluation, and it is designed to build local evaluation capacity in the state to improve program development and accountability. The Center will consist of two parts. The first is an academic program, beginning with a certificate program and later offering a master's degree, that will combine face-to-face and distance learning. The second will focus on professional development, including guest speakers, workshops, conferences, and publications. The Center will be grounded in an empowerment evaluation philosophical orientation and guided by pragmatic mixed-methods training. In addition, it will help evaluators learn to use new technological and web-based tools.
Session Title: Using Images as Catalysts for Expression in Evaluation: A Demonstration of Photolanguage
Demonstration Session 577 to be held in Pratt Room, Section A on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Rebecca White, Louisiana State University, bwhite@agctr.lsu.edu
Diane Sasser, Louisiana State University, sdasser@agctr.lsu.edu
Katherine Pace, Louisiana State University, kpace@agcenter.lsu.edu
Emily Braud, Louisiana State University, elejeune@agcenter.lsu.edu
Abstract: Finding ways to encourage expression among evaluation research participants who are young, shy, reticent, or limited in verbal ability can be challenging for evaluators. Often evaluation participants find it difficult to address certain sensitive topics or issues. Photolanguage is a resource that evaluators can use to aid personal expression and small-group interaction. During this demonstration participants will learn to use Photolanguage as a tool to enhance qualitative evaluation activities. Participants will experience an innovative evaluation process that utilizes black-and-white photographic images specifically chosen for their aesthetic qualities, their ability to stimulate emotions, memory, and imagination, and their capacity to stimulate reflection in the viewer.
Session Title: Business and Industry TIG Business Meeting
Business Meeting Session 578 to be held in Pratt Room, Section B on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Business and Industry TIG
TIG Leader(s):
Amy Gullickson, Western Michigan University, amy.m.gullickson@wmich.edu
Sheri Hudachek, Western Michigan University, sherihudachek@yahoo.com
Eric Graig, Usable Knowledge LLC, egraig@usablellc.net
Otto Gustafson, Western Michigan University, ottonuke@yahoo.com
Roundtable: Evaluating Collaboration Between Science, Technology, Engineering and Mathematics Programs in the National Girls Collaborative Project
Roundtable Presentation 579 to be held in Douglas Boardroom on Friday, November 9, 11:15 AM to 12:00 PM
Presenter(s):
Brenda Britsch, Puget Sound Center for Teaching, Learning and Technology, bbritsch@psctlt.org
Karen Peterson, Puget Sound Center for Teaching, Learning and Technology, kpeterson@psctlt.org
Carrie Liston, Puget Sound Center for Teaching, Learning and Technology, cliston@psctlt.org
Vicky Ragan, Puget Sound Center for Teaching, Learning and Technology, vragen@psctlt.org
Abstract: Collaboration and its effects can be difficult to define, observe, and measure. Drawing on an evaluation of the National Girls Collaborative Project, a project structured to bring organizations that serve girls in science, technology, engineering and mathematics (STEM) together to compare needs and resources, share information, and plan strategically, we will discuss the measurable aspects of collaboration and the initial and expected outcomes of encouraging organizations to work together in more complex ways. We will examine a “collaboration rubric”, adapted from the work of Hogue (1993), Borden and Perkins (1988, 1999), and Frey, Lohmeier, Lee, Tollefson & Johanning (2004), developed to capture increasing levels of collaboration between different groups, and discuss preliminary results. The rubric describes five levels of collaboration, based on Hogue's Levels of Community Linkage model: networking, cooperation, coordination, coalition, and collaboration.
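As a rough illustration of how such an ordinal rubric can be operationalized, here is a minimal Python sketch; the one-line level descriptors are paraphrased from Hogue's model, and the median scoring rule is hypothetical rather than the authors' actual instrument:

```python
from enum import IntEnum

class LinkageLevel(IntEnum):
    # Five ordered levels from Hogue's Levels of Community Linkage model;
    # descriptors are paraphrases, not the rubric's wording.
    NETWORKING = 1     # mutual awareness, loose information sharing
    COOPERATION = 2    # providing information to support each other's work
    COORDINATION = 3   # sharing resources around a common task
    COALITION = 4      # shared ideas and resources drawn from a common pool
    COLLABORATION = 5  # one system, consensus-based decision making

def median_level(ratings: list[int]) -> LinkageLevel:
    """Summarize several raters' 1-5 scores for one partnership as the median level."""
    ordered = sorted(ratings)
    return LinkageLevel(ordered[len(ordered) // 2])

print(median_level([3, 4, 4]))  # LinkageLevel.COALITION
```

Treating the levels as an ordered scale is what allows "increasing levels of collaboration" to be compared across partnerships and over time.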
Session Title: Building Evaluation Capacity at the Society for Advancement of Chicanos and Native Americans in Science (SACNAS)
Multipaper Session 580 to be held in Hopkins Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Jack Mills, Independent Consultant, jackmillsphd@aol.com
Abstract: This session will describe the challenges, opportunities, and lessons learned in using evaluation results to evolve the services provided to underrepresented minority (URM) scholars in higher education. The Society for Advancement of Chicanos and Native Americans in Science (SACNAS) is nationally recognized for initiating and executing effective programs that provide URM students and young scientists with the tools they need to advance successfully in the sciences and related technical fields. This session features a dialogue between the immediate past president of this nationally prominent minority-serving organization and its external evaluator. Members of the audience will hear, from the organization's perspective, what steps were taken to prepare for a major evaluation initiative, as well as the challenges, opportunities, and lessons learned from the external evaluator's perspective.
Preparing the Way for Evaluation: The Experience of the Society for Advancement of Chicanos and Native Americans in Science (SACNAS)
Marigold Linton, University of Kansas, mlinton@ku.edu
Jack Mills, Independent Consultant, jackmillsphd@aol.com
Over the past five years, program evaluation research has become a critical strategic initiative for the Society. Moving an organization toward increasingly sophisticated evaluation approaches requires commitment on a number of levels. At the board of directors level, there was a need to establish evaluation as a priority, to allocate to evaluation program resources that might otherwise provide direct services to clients, and to be open to findings that might challenge traditions that had evolved over a number of years. At the program staff level, there was a need to establish a degree of comfort in working with a paid skeptic: someone who would bear good news about the program's successes while pointing out areas in which program operations could be strengthened. We will discuss ways in which evaluation has affected many aspects of program operations, and future evaluation directions as we progress up the organizational learning curve.
A Theory-based Approach to Measuring Minority Career Advancement in the Sciences: A Case Study of the Society for Advancement of Chicanos and Native Americans in Science (SACNAS)
Jack Mills, Independent Consultant, jackmillsphd@aol.com
Marigold Linton, University of Kansas, mlinton@ku.edu
This presentation will describe the development of program evaluation research at SACNAS. We have distilled a model of program theory for the factors we believe are essential in helping URM students navigate scientific and technical careers. With this model in hand, we began to develop multiple methods to measure SACNAS' impact, including a survey of students' developmental assets prior to SACNAS involvement, focus groups, interviews, and participant observation. The Society is evolving a web-based process to track career progress outcomes. The presenters will describe how program evaluation methodology has evolved at SACNAS and the future directions we are taking to strengthen the organization's evaluation practice. The two SACNAS presentations in this session will highlight the dynamic interplay, and the beneficial impact on strategy, that can emerge when the leadership of an organization and an external evaluator develop a strong collaboration.
Session Title: Monitoring and Evaluation (M&E) in Sector-Wide Approaches (SWAps): A New Way of Thinking About Monitoring and Evaluation in the New International Development Framework
Expert Lecture Session 581 to be held in Peale Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Nino Saakashvili, Horizonti Foundation, nino.adm@horizonti.org
Presenter(s):
Ryoh Sasaki, Western Michigan University, ryoh.sasaki@wmich.edu
Abstract: This session discusses lessons learned from M&E activities conducted under Sector-Wide Approaches (SWAps), a new approach in international development. The lessons are drawn from the author's field experience as an M&E specialist in Tanzania's agricultural sector, as well as from an extensive review of study reports on SWAps. Based on these lessons, a new M&E system is proposed, consisting of: (i) prior (but flexible) mutual agreement on goals, indicators, and target values among all relevant stakeholders; (ii) a focus on outputs by local entities and on outcomes by central governments; (iii) a mixed use of review and evaluation; and (iv) a reflection of local values through periodic, systematic needs assessments. Finally, the presenter will compare the proposal with other evaluation practices in the field, such as results-based M&E, evidence-based evaluation, and the DAC evaluation criteria.
Session Title: Real Application of a Policy Advocacy Evaluation Tool
Demonstration Session 582 to be held in Adams Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Rhonda Ortiz, The California Endowment, rortiz@calendow.org
Sue Hoechstetter, Alliance for Justice, sue@afj.org
Traci Endo Inouye, Social Policy Research Associates, traci@spra.com
Catherine Crystal Foster, Blueprint Research & Design Inc, catherine@policyconsulting.org
Justin Louie, Blueprint Research & Design Inc, justin@blueprintrd.com
Abstract: This demonstration will present Alliance for Justice's Advocacy Evaluation Tool. The session will begin with an overview of how the tool was developed, what it is, and how it can be used, followed by two examples of how it is currently being used in the field. The first example, from The California Endowment's Hmong Health Collaborative, will show how the program's evaluators adapted the tool to make it more applicable to this community. The second will highlight the work of other evaluators contracted by The California Endowment to provide technical assistance in using the tool to community-based organizations working on comprehensive health care access in the San Francisco Bay Area.
Roundtable: Challenges Faced by an External Evaluator in Evaluating a Multi-site Program: Lessons Learned
Roundtable Presentation 583 to be held in Jefferson Room on Friday, November 9, 11:15 AM to 12:00 PM
Presenter(s):
Mary Poulin, Justice Research and Statistics Association, mpoulin@jrsa.org
Abstract: This roundtable will explore both the challenges faced and the solutions selected in the ongoing implementation of a federally funded, quasi-experimental evaluation of a juvenile mentoring program for at-risk youths in Utah. Challenges to be discussed include evaluation design planning, program funding, geographical distance, cross-site variation in program implementation, fostering commitment to data collection, and sample size. Particular attention will be paid to issues pertaining to the needs of the evaluation's three primary clients: the agency funding the evaluation, the program being evaluated, and the program participants.
Session Title: Conducting a Process Evaluation of a Prisoner Reentry Initiative
Think Tank Session 584 to be held in Washington Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Crime and Justice TIG
Presenter(s):
Aisha Nyandoro, Michigan State University, smithai1@msu.edu
Discussant(s):
William Davidson, Michigan State University, daviso7@msu.edu
Abstract: Almost every state, as well as the federal system, has some form of reentry initiative designed to facilitate prisoners' transition back to society. While there are certainly many important concerns about whether a reentry initiative "works", a range of equally critical issues must also be addressed. Before any results can be attributed to a particular initiative, the evaluation must determine and document the activities conducted, to establish whether they have been implemented in accord with the program design. The session will divide into work groups, each of which will discuss: Is it possible to conduct a process evaluation for a reentry initiative? If so, what are the steps in conducting this type of evaluation? Why is it important to examine the process of model implementation? Who are the stakeholders that should be involved in this process? What are some of the possibilities and challenges?
Session Title: Implementing Process Evaluation in a Dispersed State Program
Demonstration Session 585 to be held in D'Alesandro Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Richard Bowman, University of Arizona, rbowman@email.arizona.edu
Michele Walsh, University of Arizona, mwalsh@u.arizona.edu
Abstract: Implementing process evaluation measures involves a wide variety of stakeholders and requires strategic compromises. The demonstration will use our experience over the last three years with the Arizona Tobacco Education and Prevention Program to highlight several critical issues - balancing the needs of evaluation and program monitoring, and the needs of central administrators and local service delivery staff - and will outline the steps required to build an organization-wide data system:
- Selling the idea of process evaluation
- Constructing and piloting the instruments
- Implementing the systems
- Using the data
The delivered process evaluation system centers on an "event report" submitted by all service providers via a web-based tool, which makes systematic and continuous program assessment and feedback feasible and effective.
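As a sketch of what such an event report might look like as a data structure, consider the following minimal Python example; the field names and aggregation are hypothetical illustrations, not the Arizona program's actual schema:

```python
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class EventReport:
    # Hypothetical fields for one service-delivery event report.
    provider_id: str
    event_date: date
    activity_type: str   # e.g., "school presentation", "cessation class"
    participants: int

def participants_by_activity(reports: list[EventReport]) -> Counter:
    """Aggregate participant counts by activity type for continuous feedback."""
    totals: Counter = Counter()
    for report in reports:
        totals[report.activity_type] += report.participants
    return totals

reports = [
    EventReport("P-01", date(2007, 3, 2), "school presentation", 40),
    EventReport("P-02", date(2007, 3, 9), "cessation class", 12),
    EventReport("P-01", date(2007, 4, 1), "school presentation", 55),
]
print(participants_by_activity(reports))
# Counter({'school presentation': 95, 'cessation class': 12})
```

The design point is that one standardized record, submitted by every provider, can serve both program monitoring (counts, reach) and evaluation (patterns across sites and over time).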
Session Title: Using Multilevel Discrete-time Survival Models to Predict Whether and When Events Occur
Demonstration Session 586 to be held in Calhoun Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Steven Pierce, Michigan State University, pierces1@msu.edu
Abstract: The session will introduce the audience to discrete-time survival analysis, which builds on logistic regression to predict not only whether an individual will experience an event, but also when the individual experiences the event. The session will then illustrate how to extend the technique to handle the multilevel case, where individuals are drawn from a clustered sample and both individual-level and cluster-level characteristics are used to predict the occurrence of the event. The example data examine whether a door-to-door outreach effort affected whether and when households drawn from 52 different neighborhoods in a small city responded to a survey about neighborhood conditions. The session will cover the data needed for these models, the concept of censoring, exploratory analysis, software tools, testing multilevel survival models, interpreting output, graphical methods for displaying the results, and recommended resources for those who want to learn more about the technique.
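For readers unfamiliar with the technique, the single-level version the session builds on can be sketched in a few lines of Python: each individual is expanded into one row per period at risk, and a logistic regression of the event indicator on period dummies plus covariates estimates the discrete-time hazard. The data below are fabricated for illustration and echo, but are not, the session's survey-response example:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated example: each household's last observed period and whether
# the event (survey response) occurred then; event=0 means censored.
households = pd.DataFrame({
    "hh": range(1, 9),
    "last_period": [1, 2, 3, 3, 2, 1, 3, 2],
    "event": [1, 1, 0, 1, 0, 1, 1, 0],
    "outreach": [1, 0, 1, 0, 1, 0, 1, 0],  # door-to-door contact indicator
})

# Expand to person-period format: one row per household per period at risk.
rows = []
for _, h in households.iterrows():
    for t in range(1, h.last_period + 1):
        rows.append({"hh": h.hh, "period": t, "outreach": h.outreach,
                     "y": int(t == h.last_period and h.event == 1)})
person_period = pd.DataFrame(rows)

# Discrete-time hazard model: logistic regression with one dummy per period.
fit = smf.logit("y ~ C(period) + outreach", data=person_period).fit(disp=False)
print(fit.params)
```

Censored households simply contribute y = 0 rows through their last observed period. The multilevel case the session covers replaces the plain logit with a model allowing cluster-level (here, neighborhood) effects; statsmodels' BinomialBayesMixedGLM is one option for that extension.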
Session Title: Using Technology to Enhance Aboriginal Evaluations
Expert Lecture Session 587 to be held in McKeldon Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Joan LaFrance, Mekinak Consulting, joanlafrance1@msn.com
Presenter(s):
Andrea L K Johnston, Johnston Research Inc, andrea@johnstonresearch.ca
Discussant(s):
Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Abstract: Johnston Research Inc., an Aboriginal-owned and -directed company, has made use of technology in several evaluation projects. This presentation will discuss the relevance of using technology in Aboriginal contexts. Technology assists in honoring the audio and visual communication of Aboriginal people. In particular, we will discuss relevant mediums and approaches: it is not so much the use of technology as the manner in which it is employed. We are concerned with the content, such as adding audio to visual presentations. In addition to looking at actual visual and audio examples used by Johnston Research Inc., we will discuss other questions: Is the technology easy to use? Can it be adapted to suit the needs of other programs? What about funding for high-tech research? How do you support technology in northern communities? We will discuss our recent experience with using technology and models.
Session Title: Programs for Lesbian, Gay, Bisexual, and Transgender Students: Interventions for Diverse Populations
Multipaper Session 588 to be held in Preston Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Sylvia Fisher, United States Department of Health and Human Services, sylvia.fisher@samhsa.hhs.gov
Session Title: Organizational Learning in the Context of Higher Education Institutions
Expert Lecture Session 589 to be held in Schaefer Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Assessment in Higher Education TIG
Chair(s):
Denise Seigart, Mansfield University, dseigart@mansfield.edu
Presenter(s):
Susan Boser, Indiana University of Pennsylvania, sboser@iup.edu
Discussant(s):
William Rickards, Alverno College, william.rickards@alverno.edu
Abstract: Tensions currently exist over the role assessment will play in higher education. Regional accreditation bodies urge the use of assessment for organizational learning at the institutional level. The federal government seeks to require standardized, comparative assessment across institutions, linking funding to results. At stake is who will determine what the curriculum should be, with what standards, measured how, and how findings will be used; this goes to the heart of academic freedom. Yet faculty often resist movement toward assessment at all, despite the potential negative consequences and, curiously, despite the higher education mission of learning and research. This paper will 1) examine the characteristics of the higher education context that resist, and those that enable, using evaluation for learning; 2) explore how our learning about evaluation capacity building and organizational learning might inform the current conflict; and 3) propose how this conflict might advance the theory and practice of organizational learning.
Session Title: Issues in Early Childhood and Preschool Evaluation
Multipaper Session 590 to be held in Fairmont Suite on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Michael P Mueller, The Hospital for Sick Children, michael.mueller@sickkids.ca
Roundtable: Measuring Success in Professional Exchange: International Visitor Leadership Program
Roundtable Presentation 591 to be held in Federal Hill Suite on Friday, November 9, 11:15 AM to 12:00 PM
Presenter(s):
Liudmila Mikhailova, Delphi International of World Learning, liudmila.mikhailova@worldlearning.org
Abstract: This paper discusses monitoring and evaluation techniques designed for the International Visitor Leadership Program (IVLP), a State Department-sponsored program for professional exchange. The paper explores a unique model of effective alliance between the U.S. government, national program agencies, and thousands of volunteers across the country (Centers for International Visitors) that work together to administer the IVLP. Started in 1940 with Inter-American exchange, the IVLP annually brings to the U.S. about 4,500 promising leaders from 185 countries in 50 areas of expertise. The paper analyzes the M&E criteria for measuring IVLP success and discusses its findings. The evaluation design is crafted to measure short- and long-term outcomes in light of the State Department's program objectives. Program success is measured at four major levels: satisfaction with the program; acquisition of new subject-related knowledge; changed attitudes and increased civic responsibility; and organizational change. A multi-attribute evaluation model for measuring success in international exchange will be presented and discussed.
Session Title: Peer Review and Learning: New Uses
Multipaper Session 592 to be held in Royale Board Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
David Roessner, SRI International, david.roessner@sri.com
Session Title: Challenges Associated With the Implementation and Use of a Statewide Substance Abuse and Mental Health Outcome and Program Performance System
Panel Session 593 to be held in Royale Conference Foyer on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Robert Hubbard, National Development and Research Institutes Inc, hubbard@ndri-nc.org
Abstract: In North Carolina, outcomes for consumers with diagnoses of mental illness and/or substance abuse are monitored through the North Carolina Treatment Outcomes and Program Performance System (NC-TOPPS). NC-TOPPS started in 1997 as a paper-based instrument to collect data on individuals receiving specific substance abuse services. In 2005, it was expanded into a web-based system that is now used to collect information on the life outcomes of all individuals (ages 6+) receiving publicly funded mental health and substance abuse services. In the last two years, over 1,000 providers have participated in this system, providing a pool of data that can be used to inform both research and the continuous improvement of the public service system. This panel presents some of the challenges associated with the implementation and use of this statewide system, highlighting the tensions between providers, policymakers, and researchers at the local, regional, and state levels.
Session Title: Building a World-wide Context for Evaluation: A Discussion With the American Evaluation Association's International Committee
Think Tank Session 594 to be held in Hanover Suite B on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Donna Podems, Macro International Inc, donna@otherwise.co.za
Discussant(s):
Ross Conner, University of California, Irvine, rfconner@uci.edu
Alexey Kuzmin, Process Consulting Company, alexey@processconsulting.ru
Thomas E Grayson, University of Illinois at Urbana-Champaign, tgrayson@uiuc.edu
Gail Barrington, Barrington Research Group Inc, gbarrington@barringtonresearchgrp.com
Abstract: The AEA values a multicultural, global, and international understanding of evaluation practices and has a commitment to understand and build awareness of the worldwide context for evaluation. During this session, AEA's International Committee will facilitate an open discussion to gather a broad range of insights regarding how AEA should consider operationalizing its mission statement with regard to AEA's international role. Specifically, how should AEA, if at all, support and develop relationships and collaborations with evaluators around the globe to gain a better understanding of international evaluation issues?
Session Title: Beyond the Report: Using Evaluations to Create a College-going Culture
Panel Session 595 to be held in Baltimore Theater on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Presidential Strand and the College Access Programs TIG
Chair(s):
Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
Abstract: Evaluations are resource intensive. Ideally, the program and organization are the beneficiaries of this intense resource commitment, through meaningful dialogue between evaluators and project staff and effective feedback. More often, however, evaluations are directed upstream to policy-makers and not necessarily toward the individuals directly involved in the day-to-day activities of the project. Two Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP) projects have designed their evaluations to serve two roles: capturing the impact of project activities and promoting organizational learning. One project uses a logic model to create a common understanding of the theoretical grounding of project activities. The other uses a longitudinal case study as a means of reflecting organizational growth and development back to the instructional leadership teams of participating schools. This panel will present details of the design and implementation of these two approaches, which use evaluations to create a college-going culture.
Session Title: Improving Payment Accuracy in the Child Care Program: Error Rate Measurement in the Child Care and Development Fund (CCDF)
Demonstration Session 596 to be held in International Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Carol Pearson, Walter R McDonald & Associates Inc, cpearson@wrma.com
Harry Day, Walter R McDonald & Associates Inc, hday@wrma.com
Abstract: This demonstration will review a methodology for measuring improper payments in the Child Care and Development Fund (CCDF). The CCDF is a $5 billion block grant that allows States maximum flexibility to set critical policies such as eligibility criteria. The US Department of Health and Human Services (USDHHS) Child Care Bureau (CCB) is following the rulemaking process to impose error rate measurement and reporting requirements on all States receiving CCDF funds. Based on a pilot study implemented in nine States, the presentation will review the development, implementation, and utility of the following key components of the error rate methodology:
- Sampling procedures;
- Computation of five error rate measures;
- Data collection instruments;
- The record review process;
- Computation of a national estimate of the annual amount of improper payments in the CCDF; and
- A potential evaluation methodology for estimating cost savings through program intervention.
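As an illustration of the arithmetic behind a dollar-weighted error rate, consider this hypothetical Python sketch; the case figures and the simple ratio-and-projection estimator are invented for exposition, not the pilot study's actual computations:

```python
# Each sampled case: (amount paid, amount that should have been paid).
# All figures are hypothetical.
sampled_cases = [
    (500.0, 500.0),
    (750.0, 600.0),  # overpayment of 150
    (420.0, 420.0),
    (300.0, 380.0),  # underpayment of 80
]

total_paid = sum(paid for paid, correct in sampled_cases)
improper_dollars = sum(abs(paid - correct) for paid, correct in sampled_cases)
error_rate = improper_dollars / total_paid

# Project the sample error rate onto total program payments.
annual_program_payments = 5_000_000_000  # rough CCDF scale from the abstract
national_estimate = error_rate * annual_program_payments

print(f"Sample dollar error rate: {error_rate:.1%}")
print(f"Projected annual improper payments: ${national_estimate:,.0f}")
```

A real design would weight cases by their sampling probabilities and attach a confidence interval to the projection; the sketch only shows why record review must capture both the amount paid and the amount that should have been paid.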
Session Title: Quality Indicators in Health Care: From Training to Accreditation
Multipaper Session 597 to be held in Chesapeake Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Molly Engle, Oregon State University, molly.engle@oregonstate.edu
Session Title: Learning From Leaders: Evaluating Popular Culture Artifacts as a Development Tool
Panel Session 598 to be held in Versailles Room on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Jamie Callahan, Texas A&M University, jcallahan@tamu.edu
Discussant(s):
Kelly Hannum, Center for Creative Leadership, hannumk@leaders.ccl.org
Abstract: This panel will engage participants in a series of storytelling experiences that emphasize the evaluation of leadership in characters and in oneself, in pursuit of developing personal leadership. The panel begins with an exploration of the link between leadership, learning, and evaluation. We then share a series of leadership stories drawn from a leadership development program that uses popular culture artifacts such as film, television, fiction, and non-fiction as learning vehicles. The discussant will integrate these presentations by demonstrating the role of evaluation in learning to lead and in developing others to lead. We conclude the panel by inviting audience members to share their self-evaluations of leadership.