| Session Title: Promoting and Assessing Individual and Organizational Knowledge Building |
| Skill-Building Workshop 803 to be held in International Ballroom A on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Presidential Strand |
| Presenter(s): |
| Lyn Shulha, Queen's University, shulhal@educ.queensu.ca |
| Glenda Eoyang, Human Systems Dynamics Institute, geoyang@hsdinstitute.org |
| Abstract: Process use in evaluation continues to focus on having individuals learn about their programs, evaluative inquiry, and each other (Preskill, Zuckerman, & Matthews, 2003). This raises questions about how individuals learn, how we know they've learned, and how this learning contributes directly to knowledge building within organizations (Coghlan, Preskill, & Tzavaras Catsambas, 2003; Cousins & Shulha, 2006; Eoyang, 2006). This session begins by visiting the relationship of evaluation to newer conceptions of organizations (Eoyang, 2001); learning (Fostaty Young & Wilson, 2000); and knowledge building (Shulha & Shulha, 2006), and how these conceptions might alter the role of the evaluator. Following this introduction, participants will work together using a common case study to explore the utility of the ideas/connections/extensions (ICE) taxonomy for assessing depth of individual learning, and the containers/differences/exchanges (CDE) framework for the analysis of organizational learning. Closure activities will give participants an opportunity to focus on when and how these tools might complement their own evaluator toolkit. |
| Session Title: Evaluation in Education: Promises, Challenges, Booby Traps and Some Empirical Data |
| Panel Session 804 to be held in International Ballroom B on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Katherine McKnight, Pearson Achievement Solutions, kathy.mcknight@pearsonachievement.com |
| Abstract: NCLB legislation, with its emphasis on accountability and evidence-based programs and practices, implies a central role of evaluation in education program and policy decision-making. Therefore, the onus is on evaluators to design and conduct evaluations that produce usable information capable of serving as the basis for effective education decision-making. To produce usable information, research and evaluation must be relevant to the needs of the decision-makers. The focus of this panel is to describe the kind of information needed for a usable knowledge base to guide education decision-making and to suggest guidelines for evaluators in the design and conduct of program evaluations in the field of education. For education to advance as a field grounded in science, evaluators must continually assess gaps in the knowledge base, searching for and testing general principles upon which that knowledge base can expand and effectively inform program development and policy-making. |
| |||
| |||
| |||
| |||
|
| Session Title: International Efforts to Strengthen Evaluation as a Profession and Build Evaluation Capacity |
| Panel Session 805 to be held in International Ballroom C on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Arnold Love, Independent Consultant, ajlove1@attglobal.net |
| Discussant(s): |
| Arnold Love, Independent Consultant, ajlove1@attglobal.net |
| Abstract: Evaluation practice is spreading rapidly in many parts of the world, and along with it comes an increasing need for professional evaluation expertise. However, professional evaluation expertise and know-how cannot be created overnight. Building evaluation capacity and developing evaluation as a field of professional practice is a major challenge everywhere, but especially for developing countries. This panel draws on recent international experiences in the United Nations family of agencies, Japan, and Latin America/the Caribbean to address critical questions regarding the professionalization of evaluation: Who should be responsible for increasing evaluation capacity and promoting professionalization? Should governments and international bodies simply focus on creating the demand for evaluation or directly influence the development of the supply of evaluation expertise? What roles should professional evaluation associations and networks play? What are cost-effective and rapid ways to build high-quality evaluation expertise? |
| |||
| |||
|
| Session Title: Revisiting the Logic Modeling Process: Emerging Benefits, Challenges and the Role of E-Technology |
| Panel Session 806 to be held in International Ballroom D on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the AEA Conference Committee |
| Chair(s): |
| Ralph Renger, University of Arizona, renger@u.arizona.edu |
| Abstract: The three-step ATM process is one of many approaches to logic modeling. We have employed it extensively and with great success in a number of content areas. Through our work, incidental benefits and new uses of the process have become apparent. In addition, we have encountered situational challenges and have made modifications to the process to meet the needs of various stakeholders. In some situations we have employed e-technologies to overcome challenges and to enhance the utility of the process. This session will begin with a review of the ATM logic modeling process and a discussion of benefits that have emerged. Following that, the challenges encountered in the process and proposed solutions will be considered. The session will conclude with a discussion of the role of e-technologies in facilitating the process. |
| |||
| |||
|
| Session Title: Learning How to Start and Succeed as an Independent Evaluation Consultant |
| Panel Session 807 to be held in International Ballroom E on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Independent Consulting TIG |
| Chair(s): |
| Jennifer Williams, J E Williams and Associates LLC, jew722@zoomtown.com |
| Discussant(s): |
| Michael Hendricks, Independent Consultant, mikehendri@aol.com |
| Abstract: Veteran Independent Consultants will share their professional insights on starting and maintaining an Independent Evaluation Consulting business. Panelists will describe ways of building and maintaining client relationships, and share their expertise related to initial business set-up and lessons they have learned. Discussions will include the pros and cons of having an independent consulting business, the various types of business structures, methods of contracting and fee setting, as well as the personal decisions involved in having your own business. They will also examine some consequences of conducting evaluation as independent consultants in diverse settings. The session will include ample time for audience members to pose specific questions to the panelists. |
| |||
| |||
| |||
| |||
|
| Session Title: Examining the Form and Function of Evaluation in Philanthropy |
| Panel Session 808 to be held in Liberty Ballroom Section A on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Chair(s): |
| Pennie G Foster-Fishman, Michigan State University, fosterfi@msu.edu |
| Abstract: The form and function of evaluation in philanthropy is a topic that has received considerable attention in recent years. Debated issues concern: 1) whether evaluation should play an accountability and/or a learning function; 2) how internal evaluation units should be structured; 3) how to effectively integrate evaluation and a learning orientation within foundations; and 4) who should be the targeted audiences of evaluation information. This panel will explore these issues and others, highlighting how foundations can create the internal systems needed to allow evaluation to flourish. Three panelists, representing three major foundations, will highlight their own experiences with evaluation within their organizations. |
| |||||
| |||||
| |||||
|
| Session Title: Money Talks: Including Costs in Your Evaluation |
| Panel Session 809 to be held in Liberty Ballroom Section B on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG and the Costs, Effectiveness, Benefits, and Economics TIG |
| Chair(s): |
| Patricia Herman, University of Arizona, pherman@email.arizona.edu |
| Discussant(s): |
| Brian Yates, American University, brian.yates@mac.com |
| Abstract: This panel presents an overview of three types of cost-based evaluation techniques that can be added to existing evaluation plans to increase the usefulness of results. The three methods are basic cost-effectiveness analysis, Monte Carlo simulation, and threshold analysis. Each is applied, as an illustration, to a different component of a tobacco control program. This panel will attempt to illustrate that cost-based evaluation is possible across a number of types of programs where these techniques might not typically be considered. None of the programs evaluated had a previous cost evaluation, and the methods here were conducted, together with an experienced practitioner, by program evaluators with little direct experience with cost-based techniques. All results are preliminary, and each panelist will discuss what they learned from adding a cost analysis to their evaluations. The panel will end with a discussion intended to explore how these approaches generalize to other types of programs. |
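To make the three techniques named in this abstract concrete, here is a minimal Python sketch of a Monte Carlo cost-effectiveness comparison ending in a simple threshold analysis. It is not drawn from the panelists' work; the cost and effect distributions, sample size, and willingness-to-pay values are invented assumptions standing in for real program data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # number of Monte Carlo draws

# Invented assumptions: per-participant costs (lognormal) and effects,
# e.g. quit rates (beta), for the program vs. a comparison condition.
cost_prog = rng.lognormal(mean=np.log(400), sigma=0.25, size=n)
cost_comp = rng.lognormal(mean=np.log(250), sigma=0.25, size=n)
eff_prog = rng.beta(30, 170, size=n)   # roughly a 15% quit rate
eff_comp = rng.beta(18, 182, size=n)   # roughly a 9% quit rate

d_cost = cost_prog - cost_comp  # incremental cost per participant
d_eff = eff_prog - eff_comp     # incremental effect (extra quits per participant)

# Basic cost-effectiveness analysis: the incremental cost-effectiveness
# ratio (ICER), computed from mean increments for numerical stability.
icer = d_cost.mean() / d_eff.mean()
print(f"ICER: ${icer:,.0f} per additional quit")

# Threshold analysis: probability the program is cost-effective at a
# given willingness-to-pay (WTP) per additional quit.
for wtp in (1_000, 2_500, 5_000):
    p = np.mean(d_cost < wtp * d_eff)
    print(f"P(cost-effective at ${wtp:,}/quit) = {p:.2f}")
```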
| ||||
| ||||
| ||||
|
| Session Title: Systems Methodologies for Evaluation |
| Multipaper Session 810 to be held in Mencken Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Systems in Evaluation TIG |
| Chair(s): |
| Bob Williams, Independent Consultant, bobwill@actrix.co.nz |
| Session Title: North Carolina Cooperative Extension's Program Development Institute: A Multi-faceted, Multi-level, Multi-disciplinary Training Approach |
| Demonstration Session 811 to be held in Edgar Allan Poe Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Extension Education Evaluation TIG |
| Presenter(s): |
| Lisa Guion, North Carolina State University, lisa_guion@ncsu.edu |
| Abstract: This demonstration session will provide participants with a training outline for each day of the intensive North Carolina Cooperative Extension Program Development Institute. The development, organization and structuring of the institute will be discussed. A list of teaching tools that were used each day of the institute will also be shared. Finally, the use of some of the tools found to be most effective will be demonstrated. Participants will have the opportunity to ask the presenter logistical questions that could aid them in implementing a similar training in their state. Extension Evaluators and Specialists charged with providing training on program development will find this session to be informative. |
| Session Title: Empowerment Evaluation Communities of Learners: From Rural Spain to the Arkansas Delta |
| Multipaper Session 812 to be held in Carroll Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Chair(s): |
| David Fetterman, Stanford University, profdavidf@yahoo.com |
| Discussant(s): |
| Stewart I Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu |
| Abstract: Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination. It employs both qualitative and quantitative methodologies. It also knows no national boundaries. It is being applied in countries ranging from Brazil to Japan, as well as in Mexico, the United Kingdom, Finland, New Zealand, Spain, and the United States. These panel members highlight how empowerment evaluation is being used in rural Spain and the Arkansas Delta. In both cases, they depend on communities of learners to facilitate the process. The third member of the panel highlights a web-based tool to support empowerment evaluation that crosses all geographic boundaries. |
| Learning From Empowerment Evaluation in Rural Spain: Implications for the European Union |
| Jose Maria Diaz Puente, Polytechnic University, Madrid, jmdiazpuente@gmail.com |
| At the present time, thousands of evaluation studies are carried out each year in the European Union to analyze the efficacy of European policies and seek the best way to improve the programs being implemented. Many of these studies are related to programs applied in the rural areas that occupy up to 80% of the territory of the EU and include many of its most disadvantaged regions. The results of the application of empowerment evaluation in the rural areas of Spain show that this evaluation approach is an appropriate way to foster learning in the rural context. The learning experience concerned capacity building among stakeholders and the evaluation team, the evaluator's role and advocacy, the impact of the empowerment evaluation approach, and its potential limitations, difficulties, and applicability to rural development in the EU. |
| Empowerment Evaluation: Transforming Data Into Dollars and the Politics of Community Support in Arkansas Tobacco Prevention Projects |
| Linda Delaney, Fetterman and Assoc, linda2inspire@earthlink.net |
| David Fetterman, Stanford University, profdavidf@yahoo.com |
| Empowerment evaluation is being used to facilitate tobacco prevention work in the State of Arkansas. The University of Arkansas's Department of Education is guiding this effort under the Minority Initiated Sub-Recipient Grants Office. Teams of community agencies are working together with individual evaluators throughout the state to collect tobacco prevention data and turn it into meaningful results in their communities. They are also using the data collectively to demonstrate how a collective can be effective. The grantees and evaluators are collecting data about the number of people who quit smoking and translating that into dollars saved in terms of excess medical expenses. This has caught the attention of the Black Caucus and the legislature. Lessons learned about transforming data and the politics of community support are shared. |
| Empowerment Evaluation and the Web: Interactive Getting to Outcomes (iGTO) |
| Abraham Wandersman, University of South Carolina, wandersman@sc.edu |
| iGTO, or Interactive Getting to Outcomes, is an Internet-based approach to Getting to Outcomes. It is a capacity-building system, funded by NIAAA, that is designed to help practitioners reach results using science and best practices. Getting to Outcomes (GTO) is a ten-step approach to results-based accountability. The ten steps are as follows: Needs/Resources, Goals, Best Practices, Fit, Capacity, Planning, Implementation, Outcomes, CQI, and Sustainability. iGTO plays the role of quality improvement/quality assurance in a system that has tools, training, technical assistance, and quality improvement/quality assurance. With iGTO, organizations use empowerment evaluation approaches to assess process and outcomes and promote continuous quality improvement. Wandersman et al. highlight the use of iGTO in two large state grants to demonstrate the utility of this new tool. |
| Session Title: The Follow-up Monitoring and Outcome Survey for National Research and Development Projects in New Energy and Industrial Technology Development Organization (NEDO) |
| Multipaper Session 813 to be held in Pratt Room, Section A on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Research, Technology, and Development Evaluation TIG |
| Chair(s): |
| Takahisa Yano, New Energy and Industrial Technology Development Organization, yanotkh@nedo.go.jp |
| Abstract: NEDO is Japan's largest public R&D management organization, promoting various areas of technology. It is very important for funding agencies such as NEDO to monitor the post-project activities of project participants toward the practical application of R&D achievements, to assess the impact of national R&D projects, and to review previous post-project evaluations in light of post-project activities in order to provide feedback that improves R&D management. In this session, from these points of view, the relationship between ex-post evaluation scores and post-project activities will be discussed. In order to identify the management factors important to the success or failure of post-project activities, a unique procedure we call the 'Follow-up Chart' will also be discussed. Finally, in order to understand outcomes derived from national R&D, a case study using various indicators will be discussed. |
| Study of the Correlation Between Ex-post Evaluation and Follow-up Monitoring of National Research and Development (Part I) |
| Hiroyuki Usuda, New Energy and Industrial Technology Development Organization, usudahry@nedo.go.jp |
| Momoko Okada, New Energy and Industrial Technology Development Organization, okadammk@nedo.go.jp |
| NEDO has conducted intermediate evaluations and ex-post evaluations since FY2001. In addition to ex-ante (pre-project) evaluations of new R&D projects, NEDO also started follow-up monitoring and evaluations of completed projects in FY2004. In the intermediate and ex-post evaluation work, projects are assessed and evaluated in the following four categories: 'Purpose and strategy,' 'Project management,' 'R&D achievements,' and 'Prospect for practical applications and other impacts.' On the other hand, NEDO has tracked participating organizations whose post-project activities have reached practical application. In this study, we try to verify the validity of ex-post evaluations by comparing their results with those of follow-up monitoring. |
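As a small illustration of the kind of comparison this study describes, the sketch below correlates ex-post evaluation scores with follow-up monitoring outcomes. All numbers are invented, and the ordinal coding of post-project activity is an assumption made for illustration, not NEDO's actual scheme.

```python
import numpy as np
from scipy import stats

# Invented example data: ex-post evaluation scores for completed
# projects, and follow-up monitoring outcomes coded ordinally
# (0 = discontinued, 1 = ongoing activity, 2 = practical application).
expost_scores = np.array([7.5, 4.0, 8.2, 6.1, 3.3, 9.0, 5.5, 6.8])
followup_stage = np.array([2, 0, 2, 1, 0, 2, 1, 1])

# Spearman rank correlation suits the ordinal follow-up outcome: a high
# positive rho would support the validity of the ex-post evaluations.
rho, p_value = stats.spearmanr(expost_scores, followup_stage)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```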
| Study for the Important Management Factors Based on Follow-up Monitoring Data (Part II) |
| Setsuko Wakabayashi, New Energy and Industrial Technology Development Organization, wakabayashistk@nedo.go.jp |
| Tsutomu Kitagawa, New Energy and Industrial Technology Development Organization, kitagawattm@nedo.go.jp |
| Takahisa Yano, New Energy and Industrial Technology Development Organization, yanotkh@nedo.go.jp |
| Kazuaki Komoto, New Energy and Industrial Technology Development Organization, kohmotokza@nedo.go.jp |
| NEDO has used Follow-up Monitoring to track the various post-project activities of organizations that participated in past NEDO projects since FY2004. NEDO has conducted questionnaire surveys and interviews with companies whose post-project activities have reached the commercialization stage or been discontinued. NEDO has also tried to identify important management factors by using a Follow-up Chart in order to improve its project management. In this study, several important management points that NEDO obtained from the FY2005 and FY2006 results will be discussed. |
| Approach for the Understanding of Outcomes Derived from National Research and Development of Energy Conservation Project (Part III) |
| Kazuaki Komoto, New Energy and Industrial Technology Development Organization, kohmotokza@nedo.go.jp |
| Tsutomu Kitagawa, New Energy and Industrial Technology Development Organization, kitagawattm@nedo.go.jp |
| Takahisa Yano, New Energy and Industrial Technology Development Organization, yanotkh@nedo.go.jp |
| Setsuko Wakabayashi, New Energy and Industrial Technology Development Organization, wakabayashistk@nedo.go.jp |
| In this study, in order to develop a method for understanding outcomes, the energy conservation project conducted by NEDO to develop advanced industrial furnaces was adopted, because NEDO has contributed to the development of technology in this field. Five indicators, such as the number of burners, market size, and the effect on CO2 reduction, were used to show the outcomes, and the data obtained using these indicators are presented in this study. |
| Session Title: Ethics in Evaluation: At the Crossroads of Principle to Practice |
| Skill-Building Workshop 814 to be held in Pratt Room, Section B on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Teaching of Evaluation TIG |
| Chair(s): |
| Linda Schrader, Florida State University, lschrade@mailer.fsu.edu |
| Presenter(s): |
| Michael Morris, University of New Haven, mmorris@newhaven.edu |
| Abstract: Evaluators strive to uphold ethical principles in their practice of evaluation while working in challenging and diverse contexts. The AEA Guiding Principles for Evaluators delineate professional skills and behaviors that embody a set of values and ethics for effective evaluation practice. How does an evaluator infuse these ethical guidelines into practice? This skill-building workshop will present an overview of the Guiding Principles and examine how these ethical principles can be employed to prevent, and respond effectively to, ethical dilemmas encountered as an evaluation unfolds. Ethical issues regarding conflicts around stakeholders' priorities, differing cultural values, and varied expectations for the evaluation will be discussed. Participants will have opportunities to apply these concepts to a case study and explore strategies for addressing ethical challenges encountered in evaluation planning and implementation. It is expected that participants will acquire reflective insights and knowledge about the application of ethical principles to enhance their practice. |
| Roundtable Rotation I: Teen Interactive Theater Education: Evaluation of a Youth Development Approach to the Reduction of Risk Behaviors |
| Roundtable Presentation 815 to be held in Douglas Boardroom on Saturday, November 10, 1:50 PM to 3:20 PM |
| Presenter(s): |
| Ruth Carter, University of Arizona, rcarter@cals.arizona.edu |
| Daniel McDonald, University of Arizona, mcdonald@cals.arizona.edu |
| Abstract: TITE is an innovative youth development program that engages young people through the use of experiential activities on pertinent topics in today's society and employs a cross-age teaching strategy. The evaluation of the program adds to the knowledge base on the effectiveness of youth development approaches, particularly among underrepresented populations (63% of respondents identify themselves as Hispanic and 21% as Native American). This roundtable discussion will show how evaluation results have been used to inform the development and implementation of the TITE curriculum. Issues relating to the evaluation will be discussed, including obtaining human subjects approval and working with alternative high schools and youth detention centers. |
| Roundtable Rotation II: What Exactly are Life Skills Anyway? |
| Roundtable Presentation 815 to be held in Douglas Boardroom on Saturday, November 10, 1:50 PM to 3:20 PM |
| Presenter(s): |
| Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu |
| Daniel Perkins, Pennsylvania State University, dfp102@psu.edu |
| Abstract: This roundtable discusses key issues in defining, measuring, and evaluating life skills in youth. Presenters illustrate these issues with a critique of two models and two measures of life skills. Discussion focuses on issues relevant to theory and practice: how much of a life skill such as goal setting, leadership, or communication is learned (and how much is inborn)? Who is the best judge of life skills? What methods work best for documenting growth of life skills? (How) should measurement methods be integrated with educational methods? When and how often should life skills be evaluated? (How) do individual differences affect mastery of life skills? |
| Session Title: Learning From the Evaluation of Voluntary Environmental Partnership Programs |
| Multipaper Session 816 to be held in Hopkins Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Environmental Program Evaluation TIG |
| Chair(s): |
| Katherine Dawes, United States Environmental Protection Agency, dawes.katherine@epa.gov |
| Abstract: Many of today's environmental challenges cannot be addressed by regulation alone. They require a broader mix of solutions - regulatory programs, information, education, technical assistance, grants, and voluntary partnership programs. Partnership programs have been the subject of many evaluations and reviews. EPA Partnership Programs play an important role in improving air quality, energy efficiency, and reducing solid waste. They enable flexible, collaborative, market-driven solutions that can deliver measurable environmental results. EPA began using Partnership Programs in the early 1990s as a unique, non-regulatory approach to environmental management. Recently, EPA Partnership Programs have received increasing scrutiny from internal and external audiences who question whether these programs help the Agency achieve its environmental goals. This multi-paper session will discuss efforts underway to: 1) coordinate measurement and evaluation efforts of these programs; 2) discuss lessons learned from evaluating two Partnership Programs; and 3) begin a dialogue about evaluating the next generation of programs. |
| The Lay of the Land: "Voluntary" Partnership Programs at the United States Environmental Protection Agency |
| Laura Pyzik, United States Environmental Protection Agency, pyzik.laura@epa.gov |
| To help coordinate Partnership Program efforts across the Agency, NCEI has developed a Partnership Programs Coordination (PPC) Team. This Team assures that 'Voluntary' Partnership Programs are well designed, measured, branded and managed, and present a coherent image to external partners. In recent years, EPA Partnership Programs have been the subject of a number of internal and external evaluations and reviews. Consequently, the PPC Team has taken the lead for coordinating efforts to equip Partnership Programs with the necessary measurement and evaluation tools and trainings. The PPC Team will discuss: 1) what was learned from past evaluative inquiries; 2) how program evaluations have helped or challenged their coordination efforts; and 3) ongoing efforts to measure and evaluate EPA Partnership Programs including the development of measurement and evaluation guidelines, training, and an Information Collection Request to allow Partnership Programs to collect information on outcomes. |
| Measuring the Effectiveness of Environmental Protection Agency's Indoor Air Quality Tools for Schools Program |
| Dena Moglia, United States Environmental Protection Agency, moglia.dena@epa.gov |
| IAQ TfS is a voluntary, flexible, multi-media program that stresses teamwork and collaboration to help schools/school districts identify, correct and prevent indoor air pollution and other environmental problems so they can provide safe, healthy learning environments for children. The IAQ TfS Kit, a central part of the Program, helps schools develop an IAQ management plan and shows them how to carry out practical action to improve IAQ at little or no cost using in-house staff to conduct straightforward activities. Presenters will discuss lessons learned regarding: (1) the impact of a comprehensive IAQ management plan on a school's indoor environment; (2) the resources associated with implementing IAQ management plans; and (3) how well an IAQ management plan can reduce environmental asthma triggers. The results will shed light on program outcomes, the impacts of the IAQ TfS program, and the effectiveness of the approach to meeting EPA's Clean Air goals. |
| Evaluating the Hospitals for a Healthy Environment (H2E) Program's Partner Hospitals' Environmental Improvements |
| Chen Wen, United States Environmental Protection Agency, wen.chen@epa.gov |
| The H2E Program, a voluntary collaboration among the EPA, American Hospital Association, American Nurses Association, and Health Care Without Harm, has operated since 1998. The H2E program provides a variety of technical assistance tools to help Partner facilities reduce their environmental impact, including fact sheets, a website, monthly teleconference training calls, and peer-to-peer listservs. Among the program goals, H2E seeks the virtual elimination of mercury-containing waste from the healthcare sector by FY2005. An evaluation was conducted to determine how successful the H2E program has been in achieving the aforementioned goal, as well as a 33% reduction of healthcare waste by FY2005 and a 50% reduction of healthcare waste by FY2010. This paper discusses lessons learned regarding: 1) how H2E can best help Partner hospitals collect environmental information that will help both the hospitals and EPA; and 2) which H2E activities are most effective in encouraging hospitals to make environmental improvements. |
| Evaluating the Next Generation of Environmental Protection Agency (EPA) Partnership Programs: Where Do We Go From Here? |
| Laura Pyzik, United States Environmental Protection Agency, pyzik.laura@epa.gov |
| In the spirit of learning and information exchange, the last session of this panel involves a dialogue between panelists and conference participants to respond to three key questions regarding existing and future evaluations of environmental Partnership Programs. 1) What have existing evaluations of EPA Partnership Programs taught us about the design, and effectiveness of Partnership Programs? 2) How do we use what was learned from past evaluations to improve the ability of Partnership Programs to achieve environmental results? 3) What questions still need to be answered regarding evaluation designs, and data collection methods that are the most appropriate for evaluating environmental Partnership Programs? |
| Session Title: Does Aid Evaluation Work?: Reducing World Poverty by Improving Learning, Accountability and Harmonization in Aid Evaluation |
| Multipaper Session 817 to be held in Peale Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Michael Scriven, Western Michigan University, scriven@aol.com |
| Abstract: This session will discuss some fundamental and deeply embedded issues in aid evaluation, including relatively low achievement of learning, limitations in accountability, and lack of harmonization among donors and between government and donors. Clearly these issues are inter-related, and we see them as not merely technical but deeply structural. The first presentation will examine these issues through a comparative study of nine development projects in Africa and Asia, and it will propose a newly refined framework of cost-effectiveness analysis for organizational learning and accountability. The second presentation will focus on the structural factors and arrangements that have led to serious positive bias and disinterest among stakeholders, through a systematic review of 31 evaluation manuals and their application. The third presentation will focus on the harmonization of current aid practices by reviewing the current tools and their actual uses, and it will identify challenges and opportunities and propose some possible solutions. |
| Reducing World Poverty by Improving Evaluation of Development Aid |
| Paul Clements, Western Michigan University, paul.clements@wmich.edu |
| Although international development aid bears a heavy burden of learning and accountability, the way evaluation is organized in this field leads to inconsistency and positive bias. This paper first discusses structural weaknesses in aid evaluation. Next it presents evidence of inconsistency and bias from evaluations of several development projects in Africa and India. While this is a limited selection of projects, the form of the inconsistency and bias indicates that the problems are widespread. Third, the paper shows how the independence and consistency of evaluations could be enhanced by professionalizing the evaluation function. Members of an appropriately structured association would have both the capacity and the power to provide more helpful evaluations. To better support learning and accountability, these independent and consistent evaluations should be carried out using a cost-benefit framework. |
| Lessons Learned from the Embedded Institutional Arrangement in Aid Evaluation |
| Ryoh Sasaki, Western Michigan University, ryoh.sasaki@wmich.edu |
| In the past, several meta-evaluations have been conducted to answer a long-held question: Does aid work? However, the general public still questions aid's effectiveness today. One reason people still face this question is that there are serious flaws in current aid evaluation practice. In this paper, I present the issues identified through a systematic review of 31 evaluation guidelines developed by aid agencies (multilateral and bilateral) and of related review reports. I conclude that the identified issues are 'deeply embedded in institutional arrangements' rather than technical. They are: (i) dominance of the agency's own value criteria under the name of a 'mix of all stakeholders' values'; (ii) dependency of evaluators under the title of external consultant; (iii) the modifiability of evaluation reports; and (iv) logical flaws in aid evaluation. Finally, some fundamental suggestions are made. |
| Hope for High Impact Aid: Real Challenges, Real Opportunities and Real Solutions |
| Ronald Scott Visscher, Western Michigan University, ronald.s.visscher@wmich.edu |
| The Paris Declaration demands mutual accountability and harmonization between all parties involved in international aid. The extreme challenge in Afghanistan is one of many examples of why the fate of freedom and democracy now depends on this. Yet the secret is out. Succeeding in international development is tough. Everyone now knows failure is the norm. Evaluators must recognize this situation as a historic opportunity to assume independence, "speak truth to power" and demand support for high quality evaluation. By taking on this stronger role, monitoring and evaluation (M&E) will have the opportunity to meet its promise of inspiring real progress. But delivering mutual accountability, learning, and coordination will still be required. How will M&E deliver on these heightened demands? This presentation will help evaluators learn how new and improved M&E tools designed to meet these complex demands can be integrated into real practical solutions for each unique context. |
| Session Title: Advocacy, Community Mobilization and Systems Change: Assessing Unique Strategies to Impact Community Health |
| Multipaper Session 818 to be held in Adams Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Advocacy and Policy Change TIG |
| Chair(s): |
| Zoe Clayson, Abundantia Consulting, zoeclay@abundantia.net |
| Roundtable: Challenges of Evaluating a Multi-disciplinary, Multi-agency, School Based, Safe Schools/Healthy Students Project |
| Roundtable Presentation 819 to be held in Jefferson Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Presenter(s): |
| Carl Brun, Wright State University, carl.brun@wright.edu |
| Betty Yung, Wright State University, betty.yung@wright.edu |
| Cheryl Meyer, Wright State University, cheryl.meyer@wright.edu |
| Carla Clasen, Wright State University, carla.clasen@wright.edu |
| Katherine Cauley, Wright State University, katherine.cauley@wright.edu |
| Kay Parent, Wright State University, kay.parent@wright.edu |
| Abstract: The roundtable presenters compose the team responsible for evaluating a three-year, multi-agency Safe Schools/Healthy Students grant implemented district-wide in a K-12 school district. The evaluators represent health administration, nursing, psychology, and social work. The presenters will discuss the coordination of implementing over 25 instruments to measure more than 17 long-term outcomes, including the required GPRA outcomes for the federally funded project. The evaluators also assisted staff from 15 programs to measure short-term outcomes. The presenters will discuss several challenges to the evaluation including "program creep", changes in program implementation from the original grant, and an aversion to measuring outcomes. The staff implementing the grant-funded interventions constantly sought advice from the evaluators on how to implement the programs. As evaluators, we constantly needed to clarify our role while also providing assistance in utilizing the evaluation data. The presenters will also discuss a tool used to monitor the complex evaluation plan. |
| Session Title: Capacity Factors in Prevention and New Tobacco Control Strategies and Evaluations |
| Multipaper Session 820 to be held in Washington Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Health Evaluation TIG |
| Chair(s): |
| Robert LaChausse, California State University, San Bernardino, rlachaus@csusb.edu |
| Session Title: Put That in Writing: Communicating Evaluation Results in a Way That Promotes Learning and Use |
| Panel Session 821 to be held in D'Alesandro Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Chair(s): |
| Toni Freeman, The Duke Endowment, tfreeman@tde.org |
| Abstract: This session will address the importance of evaluators and foundations working together to effectively present and report information. Knowledge management is playing a more significant role for foundations and nonprofits, as is the presentation and reporting of useful evaluation products. This session will address three emerging issues regarding evaluation reporting: communicating evaluation results with the audience in mind, developing quality evaluation products in a timely way to maximize their usefulness, and clarifying the ownership of evaluation data and reports. The panelists will also discuss how evaluators and foundation staff can work together to produce products that meet their respective standards and create learning communities of practice. |
| |||
| |||
|
| Session Title: Of Mice and Men: How to Conduct a Random Assignment Study |
| Panel Session 822 to be held in Calhoun Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Carrie Markovitz, Abt Associates Inc, carrie_markovitz@abtassoc.com |
| Abstract: Random assignment, known as the gold standard in research, is increasingly being implemented in evaluations of social programs and initiatives. However, these types of studies present unique challenges in study design, implementation, and the recruitment of subjects. In this session we will review some of the topics around designing and implementing a successful random assignment study. We will present examples of current random assignment studies, discuss the unique challenges involved in each type of evaluation, and offer best practices and recommendations for conducting random assignment studies in different settings. |
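As a minimal sketch of one common way to implement the assignment step in such a study, the Python below performs blocked 1:1 randomization, which keeps treatment and control groups balanced as subjects are recruited. The block size, group labels, and seed are illustrative assumptions, not recommendations from the panel.

```python
import random

def blocked_random_assignment(subject_ids, block_size=4, seed=2007):
    """Assign subjects to 'treatment'/'control' in balanced blocks.

    Blocked randomization keeps group sizes nearly equal even if
    enrollment stops early, a practical concern in field studies.
    """
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)  # fixed seed makes the assignment auditable
    assignments = {}
    for start in range(0, len(subject_ids), block_size):
        block = subject_ids[start:start + block_size]
        labels = (["treatment", "control"] * (block_size // 2))[:len(block)]
        rng.shuffle(labels)  # random order within each block
        assignments.update(zip(block, labels))
    return assignments

# Example: assign 12 hypothetical subject IDs.
print(blocked_random_assignment([f"S{i:03d}" for i in range(1, 13)]))
```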
| |||
| |||
|
| Session Title: Cultural Issues in Multiethnic Evaluation |
| Multipaper Session 823 to be held in McKeldon Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Multiethnic Issues in Evaluation TIG |
| Chair(s): |
| Tamara Bertrand, Florida State University, tbertrand@admin.fsu.edu |
| Discussant(s): |
| Emiel Owens, Texas Southern University, owensew@tsu.edu |
| Session Title: Building Capacity for Cross-cultural Leadership Development Evaluation |
| Think Tank Session 824 to be held in Preston Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Multiethnic Issues in Evaluation TIG |
| Presenter(s): |
| Kelly Hannum, Center for Creative Leadership, hannumk@leaders.ccl.org |
| Discussant(s): |
| Claire Reinelt, Leadership Learning Community, claire@leadershiplearning.org |
| Emily Hoole, Center for Creative Leadership, hoolee@leaders.ccl.org |
| Kelly Hannum, Center for Creative Leadership, hannumk@leaders.ccl.org |
| Abstract: We will focus on three areas important to building evaluation capacity with regard to cross-cultural evaluations of leadership development: 1) How is the role of an evaluator similar or different when working across cultures? What capacities do evaluators need to work effectively across different cultures? How can evaluators build their capacity and/or compensate for not having certain knowledge or skills? 2) What are key issues related to data collection? What are different expectations about stakeholder involvement? How can evaluators better understand possible risks associated with stakeholder involvement? What forms of data collection should be used? How can evaluators manage the logistics of language and distance? 3) What are key issues related to data analysis and interpretation? How can one detect measurement invariance with small samples? How can evaluators be sensitive to differences of meaning with regard to concepts of leadership? What are examples of process used to include stakeholders in the interpretation of data? |
| Session Title: Building Organizational Capacity for Self-evaluation |
| Demonstration Session 825 to be held in Schaefer Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG |
| Presenter(s): |
| Trilby Smith, Metis Associates, tsmith@metisassoc.com |
| Kathleen Agaton, Metis Associates, kagaton@metisassoc.com |
| Abstract: This session will demonstrate how self-evaluation is an effective and empowering method of learning for organizations. It will show how organizations can use self-evaluation as a tool to facilitate ongoing learning, guide decision-making, and measure progress towards their goals. The session will also demonstrate how self-evaluation complements and supports an external evaluation. Presenters will address the principles of self-evaluation, and steps organizations can take to become self-evaluating. They will also discuss the training and support needed to build organizational capacity for effective self-evaluation. To illustrate how community organizations are learning through self-evaluation, the presenters will discuss lessons learned from the Jim Casey Youth Opportunities Initiative, a national foundation whose mission is to help youth in foster care make successful transitions to adulthood. With a number of demonstration sites, the work of the Initiative highlights how different organizational and community contexts influence the self-evaluation process. |
| Session Title: Comparing Apples to Apples: Applying the Rasch Measurement Framework to a Statewide Parent Survey |
| Demonstration Session 826 to be held in Calvert Ballroom Salon B on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Special Needs Populations TIG |
| Presenter(s): |
| Kathleen Lynch, Virginia Commonwealth University, kblynch@vcu.edu |
| William Fisher, Avatar International Inc, wfisher@avatar-intl.com |
| Abstract: This demonstration session will introduce evaluators to Rasch measurement concepts and methods, illustrating their application to a statewide survey of parents' perceptions of schools' efforts to foster parent involvement. The USDOE Office of Special Education Programs requires each state to develop and submit an Annual Progress Report on their State Performance Plan for special education. To assist states, the National Center for Special Education Accountability Monitoring (NCSEAM) has developed and made available a set of surveys that were constructed within the Rasch measurement framework. Presenters will cover the basics of Rasch methodology; its usefulness for survey development, data analysis, and standard setting; and how to interpret results to inform program improvement. Using both lecture and interactive formats, presenters will engage the audience in thinking through ways to address the challenges inherent in trying to communicate to a broad audience a radically different way of thinking about measurement. |
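For readers new to the framework, the dichotomous Rasch model at its core can be written as follows; survey applications such as this one typically use a rating-scale extension of this basic form, so treat it as background rather than the exact model behind the NCSEAM surveys.

```latex
% Dichotomous Rasch model: the probability that person p endorses item i
% depends only on the difference between the person measure \theta_p and
% the item difficulty (endorsability) b_i, placing both on one scale.
P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{e^{\theta_p - b_i}}{1 + e^{\theta_p - b_i}}
```

Placing persons and items on the same scale is what permits the direct, "apples to apples" comparisons the session title alludes to.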
| Session Title: Case Studies of Evaluation Use |
| Multipaper Session 827 to be held in Calvert Ballroom Salon C on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Evaluation Use TIG |
| Chair(s): |
| Emmalou Norland, Institute for Learning Innovation, norland@ilinet.org |
| Session Title: Rating Tools, Causation, and Performance Measurement |
| Multipaper Session 828 to be held in Calvert Ballroom Salon E on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Government Evaluation TIG |
| Chair(s): |
| David J Bernstein, Westat, davidbernstein@westat.com |
| Session Title: Articulating Authentic and Rigorous Science Education Evaluation Through the Inquiry Science Instruction Observation Protocol (ISIOP) |
| Think Tank Session 829 to be held in Fairmont Suite on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Presenter(s): |
| Daphne Minner, Education Development Center Inc, dminner@edc.org |
| Discussant(s): |
| Daphne Minner, Education Development Center Inc, dminner@edc.org |
| Neil Schiavo, Education Development Center Inc, nschiavo@edc.org |
| Abstract: A significant challenge in K-12 evaluation is the limited availability of valid and reliable instruments targeting science teaching. In many instances, evaluators forgo existing instruments and develop their own at great costs of time, money, and, sometimes, scientific rigor. The development of the Inquiry Science Instruction Observation Protocol (ISIOP) has been launched in response to these demands. ISIOP supports evaluators in determining the extent of scientific inquiry-supporting instructional practices present in middle grade science classrooms. This Think Tank session addresses questions of how evaluators select and use an observation protocol, like ISIOP. Small group discussion will center on questions of: 1. What factors do and should evaluators consider when selecting an observation protocol? 2. What guidance do evaluators need to use an observation protocol? |
| Session Title: Summer School Ain't So Bad, But Evaluating It Can Be: Lessons Learned From Outcome Evaluations of Summer Programs |
| Panel Session 830 to be held in Federal Hill Suite on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Elizabeth Cooper-Martin, Montgomery County Public Schools, elizabeth_cooper-martin@mcpsmd.org |
| Discussant(s): |
| Cindy Tananis, University of Pittsburgh, tananis@pitt.edu |
| Abstract: School districts commit significant effort and resources to summer programs. In the following panel, the presenters will share their experiences in evaluating a variety of such programs, including academic and arts programs for elementary students and remedial and enrichment courses for middle school students. Specifically, each panelist will reflect on a particular type of outcome that is useful for evaluating a summer program and present its advantages and challenges, plus lessons learned, based on using that outcome in their evaluations. As available, panelists will present evaluation design, data collection instruments, analytical methods, and results. Members will discuss the potential and limitations of the following approaches: course data and standardized test scores from the following academic year, stakeholder survey data, cumulative effects, and scores from pre-session and post-session tests. The panel's goal is to share lessons learned in the field as an invitation to discussion about outcome evaluations of summer programs. |
| ||||
| ||||
| ||||
|
| Session Title: Forging a Strong Link Between Research and Science Policy for Air Quality Decisions |
| Panel Session 831 to be held in Royale Board Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Research, Technology, and Development Evaluation TIG and the Environmental Program Evaluation TIG |
| Chair(s): |
| Dale Pahl, United States Environmental Protection Agency, pahl.dale@epa.gov |
| Abstract: This panel discusses (1) national ambient air quality standards that protect public health and the environment under the Clean Air Act and (2) the roles of research, synthesis, and evaluation in helping inform decisions about these standards. Presentations describe: (a) an overview of ambient air quality standards and the use of research and science to inform decision-making about these standards; (b) a paradigm for federal particulate matter research and its use to plan and coordinate research across federal agencies; (c) the value of this paradigm to improve understanding of relationships between sources of atmospheric contaminants, air quality, human exposure to air pollution, human health, and risk assessment; and (d) synthesis and evaluation of new scientific knowledge relevant to decision-making about ambient air quality standards. The presentations illustrate the value of the paradigm for federal particulate matter research in forging a strong link between research and science policy on air quality issues, including the knowledge base for air quality standards, compliance, and the public health impacts of air quality decisions. |
| |||
| |||
| |||
|
| Session Title: Putting it All Together: Integrating Evaluation Components to Create a Comprehensive Statewide Evaluation |
| Panel Session 832 to be held in Royale Conference Foyer on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG |
| Chair(s): |
| Tiffany Comer Cook, University of Wyoming, tcomer@uwyo.edu |
| Discussant(s): |
| Laura Feldman, University of Wyoming, lfeldman@uwyo.edu |
| Abstract: The University of Wyoming's Survey & Analysis Center (WYSAC) integrated a variety of assessments to evaluate Wyoming's Tobacco Prevention and Control Program and its components. This panel will focus on how WYSAC combined the assessments to create a comprehensive statewide evaluation. Specifically, the panel will discuss the following topics: the collection, analysis, and reporting of data related to the establishment of smoke-free environments in Wyoming; the administration of surveys to measure attitudes concerning tobacco-related policies; and the surveillance of Wyoming's tobacco consumption and prevalence. Ultimately, the panel will elaborate on how WYSAC incorporated these various evaluation components to create a comprehensive statewide evaluation that provides useful information for individual communities and state government. |
| |||
| |||
|
| Session Title: Engaging Participants in the Evaluation Process: A Participatory Approach |
| Multipaper Session 833 to be held in Hanover Suite B on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Chair(s): |
| Arlene Hopkins, Los Angeles Unified School District, arlene.hopkins@gmail.com |
| Session Title: Advances and Applications in Using Propensity Scores to Reduce Selection Bias in Quasi-Experiments |
| Panel Session 834 to be held in Baltimore Theater on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| M H Clark, Southern Illinois University, Carbondale, mhclark@siu.edu |
| Abstract: Quasi-experiments are useful for studies that need to be conducted in applied settings where random assignment to treatment groups is not practical. However, a major disadvantage in using these designs is that the treatment effects may not yield unbiased estimates. Propensity scores, the predicted probability that cases will be in a particular treatment group, are often used to help model and correct for this selection bias. The studies included as part of this panel present recent findings in propensity score research. This panel will present (a) a comparison of various methods for computing, using, and interpreting propensity scores; and (b) how propensity scores can be applied to quasi-experiments in which selection into treatment conditions is potentially biased. |
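As a hedged sketch of the basic workflow behind these methods, the Python below estimates propensity scores with logistic regression and applies inverse-probability weighting, one of several ways (alongside matching and stratification) to use the scores. The simulated data, coefficients, and the scikit-learn model choice are illustrative assumptions, not the panelists' procedures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000

# Simulated quasi-experiment: covariates drive both selection into
# treatment and the outcome, creating selection bias by construction.
X = rng.normal(size=(n, 3))
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
treated = rng.binomial(1, p_true)
outcome = 2.0 * treated + X @ np.array([1.0, -0.7, 0.3]) + rng.normal(size=n)

# Step 1: estimate propensity scores, the predicted probability of
# being in the treatment group given observed covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: inverse-probability-of-treatment weighting to balance groups.
w = treated / ps + (1 - treated) / (1 - ps)
ate = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))
print(f"weighted effect estimate: {ate:.2f} (true simulated effect: 2.0)")
```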
| ||||||
| ||||||
|
| Session Title: Unintended Interventions |
| Panel Session 835 to be held in International Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Melinda Davis, University of Arizona, mfd@u.arizona.edu |
| Abstract: Evaluators measure the strengths and limitations of programs and policies using a broad array of methods. However, even the best designed investigation can go awry, and study protocols can result in surprising and unintended effects. It is these unintended effects that can inform future research. Non-specific effects of treatment are usually treated as nuisance variables, to be eliminated or at least controlled. However, they can be a rich source of new interventions. A 'failed' study may not be a failure at all, if it identifies a new approach for a difficult problem. Vignettes will be presented from a variety of studies: the consent form as a potent treatment, useful mistakes in randomization, assessment as intervention, and the unexpected effect of a seemingly minor part of the study protocol. Each demonstrates a novel way to learn from evaluation results; effective interventions may be hidden in the non-specific effects of treatment. |
| |||||
| |||||
|
| Session Title: Evaluating College Access Programs: Evaluation Models and Methods for Different Interventions: Middle School Programs, High School Programs, Summer Bridge Programs, and College Scholarships |
| Multipaper Session 836 to be held in Chesapeake Room on Saturday, November 10, 1:50 PM to 3:20 PM |
| Sponsored by the College Access Programs TIG |
| Chair(s): |
| Kurt Burkum, National Council for Community and Education Partnerships, kurt_burkum@edpartnerships.org |