
Session Title: Promoting and Assessing Individual and Organizational Knowledge Building
Skill-Building Workshop 803 to be held in International Ballroom A on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Presidential Strand
Presenter(s):
Lyn Shulha,  Queen's University,  shulhal@educ.queensu.ca
Glenda Eoyang,  Human Systems Dynamics Institute,  geoyang@hsdinstitute.org
Abstract: Process use in evaluation continues to focus on having individuals learn about their programs, evaluative inquiry, and each other (Preskill, Zuckerman, & Matthews, 2003). This raises questions about how individuals learn, how we know they've learned, and how this learning contributes directly to knowledge building within organizations (Coghlan, Preskill, & Tzavaras Catsambas, 2003; Cousins & Shulha, 2006; Eoyang, 2006). This session begins by visiting the relationship of evaluation to newer conceptions of organizations (Eoyang, 2001); learning (Fostaty Young & Wilson, 2000); and knowledge building (Shulha & Shulha, 2006), and how these conceptions might alter the role of the evaluator. Following this introduction, participants will work together using a common case study to explore the utility of the ideas/connections/extensions (ICE) taxonomy for assessing depth of individual learning, and the containers/differences/exchanges (CDE) framework for the analysis of organizational learning. Closure activities will give participants an opportunity to focus on when and how these tools might complement their own evaluator toolkit.

Session Title: Evaluation in Education: Promises, Challenges, Booby Traps and Some Empirical Data
Panel Session 804 to be held in International Ballroom B on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Katherine McKnight,  Pearson Achievement Solutions,  kathy.mcknight@pearsonachievement.com
Abstract: NCLB legislation, with its emphasis on accountability and evidence-based programs and practices, implies a central role for evaluation in education program and policy decision-making. Therefore, the onus is on evaluators to design and conduct evaluations that produce usable information capable of serving as the basis for effective education decision-making. To produce usable information, research and evaluation must be relevant to the needs of the decision-makers. The focus of this panel is to describe the kind of information needed for a usable knowledge base to guide education decision-making and to suggest guidelines for evaluators in the design and conduct of program evaluations in the field of education. For education to advance as a field grounded in science, evaluators must continually assess gaps in the knowledge base, searching for and testing general principles upon which that knowledge base can expand and effectively inform program development and policy-making.
Evaluations as Tests of Theory
Lee Sechrest,  University of Arizona,  sechrest@u.arizona.edu
References to 'evidence-based' policy and practice in education (and other fields) are so frequent and casual as to suggest that the problems of identifying, synthesizing, and interpreting research pose no real difficulties. Yet deciding what should count as evidence is not straightforward, and how to synthesize conceptually, methodologically, and empirically diverse findings across samples that are frequently ill-defined and unrepresentative of any population of interest is extraordinarily difficult, if even possible. Moreover, societal, economic, and cultural conditions under which recommendations must be applied are markedly variable and change in unpredictable ways over time. It is pointless to fall back on the virtually uniform conclusion that 'more research is needed'. What is needed is better theory about educational variables and effects at all levels, and interpretation of evidence in relation to that theory. Evidence can further the development of theory that can provide the foundation for the knowledge base applied to policy and practice.
What Do We Mean by "What Works"?
Katherine McKnight,  Pearson Achievement Solutions,  kathy.mcknight@gmail.com
A common approach to evaluation is the proverbial 'horse race' study, in which interventions are pitted against each other. Too often these studies fail to elucidate why one intervention would be better than the other, and for whom. Problems arise from a lack of program definition (the black box problem) and rationale (why it ought to work). We are left with differences in outcomes without an understanding of how they were produced. Without that understanding, Lipsey (1990) argues, an intervention "can only be reproduced as ritual in the hope it will have the expected effects." Small theories are necessary for building the kind of knowledge needed to understand study outcomes and reproduce them elsewhere; they give meaning and explanation to events and support new insights and problem-solving efforts (Lipsey, 1990). In this paper, we focus on how education research would benefit from the small theory approach.
Education and Instructional Materials Development: Towards Evidence-Based Practice
Christopher Brown,  Pearson School Companies,  christopher.brown@phschool.com
There appears to be a small but growing realization that education has much to learn from other industries, especially medicine and agriculture, about becoming an evidence-based practice. Progress seems very slow, and some mechanisms, such as NCLB and state regulations, have had unintended consequences that may be hampering the effort. This paper will discuss the state of evidence-based practice in K-12 schools as well as the R&D conducted for instructional materials. It will suggest the need to examine and strengthen the entire evidence-based value chain, including the roles, capabilities, and expectations of researchers and evaluators, developers, teachers, students, parents, schools of education, states, districts, and the federal government. It will discuss the considerable friction between an evidence-based perspective and the regulatory/compliance-based system of US K-12 education. Ideas for strengthening the value chain and truly engaging in evidence-based practice will be presented.
What is Taught and What is Tested? Evidence From the Programme for International Student Assessment
Werner Wittmann,  University of Mannheim,  wittmann@tnt.psychologie.uni-mannheim.de
There is much debate about the problems of teaching to the test. Grades should best mirror what has been taught, and individual differences in grades should reflect different amounts of learning related to the content of instruction. How are PISA test scores in reading, math, and science related to the respective grades? PISA data from the USA and selected countries are reported, demonstrating large differences in the predictability of grades from cognitive and non-cognitive variables. The implications of these results for evidence-based education are discussed.
A Research and Development (R&D) Approach to Education Interventions
Ronald Gallimore,  Pearson Achievement Solutions,  ronaldg@ucla.edu
NCLB, with its emphasis on accountability and evidence-based practice, pressures education decision-makers and researchers to demand and provide immediate evidence for a given intervention if it is to be adopted. The need for accountability and scientific evidence in education is not at issue; however, the process by which evidence is accumulated in this type of pressure-driven system is not optimal for developing a useful knowledge base by which to develop programs and determine policy. In this paper, we focus on a systematic, multi-faceted, and iterative approach to accumulating evidence for an intervention designed to improve the learning and achievement of Native Hawaiian students. This example reflects an R&D approach to developing, testing, and refining education interventions consistent with Lipsey's (1990) notion of building small theories and accumulating a useful knowledge base upon which to develop effective interventions.

Session Title: International Efforts to Strengthen Evaluation as a Profession and Build Evaluation Capacity
Panel Session 805 to be held in International Ballroom C on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Arnold Love,  Independent Consultant,  ajlove1@attglobal.net
Discussant(s):
Arnold Love,  Independent Consultant,  ajlove1@attglobal.net
Abstract: Evaluation practice is spreading rapidly in many parts of the world, and along with it comes an increasing need for professional evaluation expertise. However, professional evaluation expertise and know-how are not something that can be created overnight. Building evaluation capacity and developing evaluation as a field of professional practice is a major challenge everywhere, but especially for developing countries. This panel draws on recent international experiences in the United Nations family of agencies, Japan, and Latin America/Caribbean to address critical questions regarding the professionalization of evaluation: Who should be responsible for increasing evaluation capacity and promoting professionalization? Should governments and international bodies simply focus on creating the demand for evaluation, or directly influence the development of the supply of evaluation expertise? What roles should professional evaluation associations and networks play? What are cost-effective and rapid ways to build high-quality evaluation expertise?
Strengthening Evaluation as a Profession in the United Nations System and Throughout Latin America and the Caribbean
Ada Ocampo,  United Nations Children's Fund,  aocampo@unicef.org
Ada Ocampo will discuss the strengthening of the evaluation function in the United Nations system, describing why there is a need for professional evaluators, the competencies profile for United Nations evaluators, the training strategy for building a professional cadre of evaluators, and its expected outcomes. She will also describe the latest developments in the professionalization of evaluation in Latin America, including the creation of a new virtual Master's degree. Ocampo is currently a Programme Officer with the Evaluation Office of the United Nations Children's Fund (UNICEF) and has over 20 years of evaluation experience. She has served with the United Nations Development Programme and as coordinator for Monitoring and Evaluation of the International Fund for Agricultural Development. She created the first Latin American network on evaluation, organized several regional e-conferences on evaluation, and published several books and multimedia CDs on evaluation.
Building the Evaluation Profession in Japan: Experience of the Japan Evaluation Society (JES) Accreditation Program
Masafumi Nagao,  Hiroshima University,  nagaom@hiroshima-u.ac.jp
Masafumi Nagao will describe the Accreditation Program developed by the Japan Evaluation Society to build evaluation as a profession and increase evaluation capacity in Japan, including an update on the pilot test of the program in the area of school evaluation and plans for creating evaluation training programs in development assistance and governmental administration. Nagao is a research professor at the Center for the Study of International Cooperation in Education at Hiroshima University, where he conducts research relating to the evaluation of international aid programs and projects in the field of education. He is a member of the Board of the Japan Evaluation Society and has served with the Sasakawa Peace Foundation and with the Technology Division of the United Nations Conference on Trade and Development. He also leads an international network of foundations and organizations in 11 Asian countries and territories for the promotion of transnational civil society.
Innovative Approaches for Increasing the Evaluation Capacity of International Educators and School Evaluators
Keiko Kuji-Shikatani,  Independent Consultant,  kujikeiko@aol.com
Keiko Kuji-Shikatani will describe the development of a new interactive online evaluation course for international education practitioners, designed to help them build their competency in conducting, commissioning, or using evaluations in keeping with their program and funding requirements. Kuji-Shikatani is a program evaluation and learning specialist, providing evaluation, research, and capacity development services to organizations with training, skills development, and behavioral and attitudinal change programs in Canada, Japan, and internationally. She has worked for over 20 years with various non-profit organizations to improve their programs serving children, youth, women, families, immigrants, and workers in challenging situations. She has served as chair of the Canadian Evaluation Society's Ontario Chapter, is past chair of the CES Ontario Professional Development Committee, and represents the CES as a Quality Assurance Advisor for the Japan Evaluation Society, sharing Canadian expertise in training professionals in program evaluation and research methodologies.

Session Title: Revisiting the Logic Modeling Process: Emerging Benefits, Challenges and the Role of E-Technology
Panel Session 806 to be held in International Ballroom D on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the AEA Conference Committee
Chair(s):
Ralph Renger,  University of Arizona,  renger@u.arizona.edu
Abstract: The three-step ATM process is one of many approaches to logic modeling. We have employed it extensively and with great success in a number of content areas. Through our work, incidental benefits and new uses of the process have become apparent. In addition, we have encountered situational challenges and have made modifications to the process to meet the needs of various stakeholders. In some situations we have employed e-technologies to overcome challenges and to enhance the utility of the process. This session will begin with a review of the ATM logic modeling process and a discussion of benefits that have emerged. Following that, the challenges encountered in the process and proposed solutions will be considered. The session will conclude with a discussion of the role of e-technologies in facilitating the process.
Emerging Benefits of the Logic Modeling Process
Jessica Surdam,  University of Arizona,  jsurdam@u.arizona.edu
This session will review the ATM logic modeling process, highlight its emerging benefits, and outline recent innovations in the process and its usage. Particular emphasis will be placed on our experiences using the process (1) in cross-cultural contexts and (2) in organizational settings. In particular, we have found the process to translate well across cultures, even when it is used between a cultural outsider and a cultural insider. In addition, we have identified substantial utility in using the process to help organizations 'organize' themselves.
Challenges Encountered in the Logic Modeling Process
Erin Peacock,  University of Arizona,  epeacock@email.arizona.edu
Though there are numerous and emerging benefits of the ATM approach to logic modeling, it is not without challenges. In particular, we have encountered situations in which the process, in its unmodified form, does not meet the needs of the stakeholders. This session will focus on the challenges we have encountered using the process and the subsequent situational modifications we have made to it. We will then discuss the pros and cons of each modification, highlighting its outcomes in our experience.
Understanding the Role of E-Technology in the Logic Modeling Process
Kim Fielding,  University of Arizona,  kjf@u.arizona.edu
This session highlights the different e-learning technologies that can facilitate the logic modeling process. Specifically, we will draw attention to technologies that can be used to: 1) identify root causes of the problem, 2) facilitate decision making concerning the root causes, and 3) identify measurement strategies to assess the root causes. The presentation will focus on the functionality of the technology needed for each phase of the logic modeling process, potential criteria to evaluate software packages, and an overview of software packages that could be applied to each phase. The review of e-learning technologies is intended to enable more efficient and cost-effective forms of communication among long-distance collaborators and stakeholders and to allow technical assistance and feedback to be exchanged easily throughout the planning, development, and evaluation of programs.

Session Title: Learning How to Start and Succeed as an Independent Evaluation Consultant
Panel Session 807 to be held in International Ballroom E on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Jennifer Williams,  J E Williams and Associates LLC,  jew722@zoomtown.com
Discussant(s):
Michael Hendricks,  Independent Consultant,  mikehendri@aol.com
Abstract: Veteran Independent Consultants will share their professional insights on starting and maintaining an Independent Evaluation Consulting business. Panelists will describe ways of building and maintaining client relationships, and share their expertise related to initial business set-up and lessons they have learned. Discussions will include the pros and cons of having an independent consulting business, the various types of business structures, methods of contracting and fee setting, as well as the personal decisions that come with having your own business. Panelists will also examine some consequences of conducting evaluations as independent consultants in diverse settings. The session will include ample time for audience members to pose specific questions to the panelists.
Evaluation of National Evaluation Programs: A Partnership Perspective
Amy Germuth,  Compass Consulting Group,  agermuth@mindspring.com
Dr. Germuth is an experienced teacher and school administrator with expertise in evaluating PreK-12 and reform initiatives as well as business/industry collaboratives.
Learning to Grow and Direct a Small Business in the Field of Educational Evaluation
Kathleen Haynie,  Kathleen Haynie Consulting,  kchaynie@stanfordalumni.org
Dr. Haynie, Director of Kathleen Haynie Consulting, has been an evaluation consultant since 2002. Her current projects span the field of science education: early childhood, K-12, learning, teaching, and assessment. She will discuss the "growing pains" of a developing business: bringing in projects; balancing workloads and priorities; hiring staff; budgeting; communicating with universities, school districts, and corporations; and developing new business under time constraints.
Learning From Reflections of 30 Years of Evaluation Experience
Mary Ann Scheirer,  Scheirer Consulting,  maryann@scheirerconsulting.com
Dr. Scheirer has been an evaluator for 3 decades, working in a variety of settings including higher education, government agencies, large consulting firms, and now, Independent Consulting. Her presentation will focus on her current evaluation of several projects funded by the same local foundation.
International Evaluation Consulting: Learning From One Woman's Perspective
Tristi Nichols,  Manitou Inc,  tnichols@manitouinc.com
Dr. Nichols is a program evaluator with a sole proprietorship consulting business, and concentrates primarily on international issues. Her reflections about consulting, international travel and the types of decisions she makes (which take into account her family) will be of interest to novice, veteran, or aspiring independent consultants.
Jennifer Williams,  J E Williams and Associates LLC,  jew722@zoomtown.com
Jennifer E. Williams, Ed.D., is President and Lead Consultant of J. E. Williams and Associates, an adjunct professor, licensed counselor, and Independent Consultant. She has extensive experience conducting education, social, and market research and program evaluation. Her research agenda includes cultural sensitivity in all professional practice, business/economic inclusion, and abuse education, prevention, and intervention (substance abuse, sex abuse, child abuse, domestic violence, school/workplace violence, etc.). She will share lessons learned when she transitioned from employee to independent consultant and gained a contract from her former employer.

Session Title: Examining the Form and Function of Evaluation in Philanthropy
Panel Session 808 to be held in Liberty Ballroom Section A on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Pennie G Foster-Fishman,  Michigan State University,  fosterfi@msu.edu
Abstract: The form and function of evaluation in philanthropy is a topic that has received considerable attention in recent years. Debated issues concern: 1) whether evaluation should play an accountability and/or a learning function; 2) how internal evaluation units should be structured; 3) how to effectively integrate evaluation and a learning orientation within foundations; and 4) who should be the targeted audiences of evaluation information. This panel will explore these issues and others, highlighting how foundations can create the internal systems needed to allow evaluation to flourish. Three panelists, representing three major foundations, will highlight their own experiences with evaluation within their organizations.
Evaluation and Learning in the World of Philanthropy
Pennie G Foster-Fishman,  Michigan State University,  fosterfi@msu.edu
Branda Nowell,  North Carolina State University,  blnowell@chass.ncsu.edu
Kevin Ford,  Michigan State University,  fordjk@msu.edu
This presentation discusses findings from our exemplar case study of approximately 30 US foundations and the form and function of evaluation within these organizations. We will examine what makes evaluation work within these organizations and the purpose, structure, and context surrounding evaluation operations. Lessons learned about evaluation and learning within the field of philanthropy will be shared.
Evaluation at the W.K. Kellogg Foundation
Teresa Behrens,  W K Kellogg Foundation,  tbehrens@wkkf.org
The evolution of the form and function of evaluation at the Kellogg Foundation will be discussed. Dr. Behrens will also react to the evaluation findings presented and discuss their implications for evaluation at the Kellogg Foundation.
Evaluation at the Packard Foundation
Gail Berkowitz,  Packard Foundation,  gberkowitz@packard.org
Dr. Berkowitz will react to the evaluation findings presented and discuss their implications for evaluation at the Packard Foundation.
Evaluation at the Barr Foundation
Roberto Cremonini,  Barr Foundation,  rcremonini@pilothouse.com
Dr. Cremonini will react to the evaluation findings presented and discuss their implications for evaluation at the Barr Foundation.

Session Title: Money Talks: Including Costs in Your Evaluation
Panel Session 809 to be held in Liberty Ballroom Section B on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG and the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Patricia Herman,  University of Arizona,  pherman@email.arizona.edu
Discussant(s):
Brian Yates,  American University,  brian.yates@mac.com
Abstract: This panel presents an overview of three types of cost-based evaluation techniques that can be added to existing evaluation plans to increase the usefulness of results. The three methods are basic cost-effectiveness analysis, Monte Carlo simulation, and threshold analysis. Each is applied, as an illustration, to a different component of a tobacco control program. This panel will attempt to illustrate that cost-based evaluation is possible across a number of types of programs where these techniques might not typically be considered. None of the programs evaluated had a previous cost evaluation, and the methods here were conducted, together with an experienced practitioner, by program evaluators with little direct experience with cost-based techniques. All results are preliminary, and each panelist will discuss what they learned from adding a cost analysis to their evaluations. The panel will end with a discussion intended to ensure generalization of the approaches to all types of programs.
Overview of Cost-based Evaluation
Patricia Herman,  University of Arizona,  pherman@email.arizona.edu
This short presentation will provide a quick overview of cost-based evaluation and its benefits, and allow for an introduction to the other speakers, their program components, and the cost-based evaluation technique each will demonstrate.
Cost-effectiveness of Smoking Cessation Programs: A Preliminary Analysis
Dee Dee Avery,  University of Arizona,  davery@email.arizona.edu
Patricia Herman,  University of Arizona,  pherman@email.arizona.edu
Arizona, like many states, has in place a series of programs targeting smoking cessation. An estimate of the quit rate is available for each as a result of regular program evaluation activities. In this case, all that is required is to assemble the resources used by each program from program documentation and to compare these costs to the effects seen. This paper presents an example of a straightforward cost-effectiveness analysis. Challenges involved in gathering cost data, in getting program manager buy-in for the analysis, and in performing a sensitivity analysis will also be discussed.
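As a rough illustration of the calculation described above, the short Python sketch below computes a simple cost-per-quit ratio for several programs; the program names, costs, and quit counts are hypothetical placeholders, not the Arizona program data.

# Minimal cost-effectiveness sketch: cost per quit across several programs.
# All figures are hypothetical placeholders, not the Arizona program data.

programs = {
    # name: (total annual cost in dollars, number of successful quits)
    "quitline": (450_000, 1_200),
    "group_counseling": (180_000, 350),
    "web_based": (90_000, 200),
}

for name, (cost, quits) in programs.items():
    print(f"{name}: ${cost / quits:,.0f} per quit")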
Costs and Effects From Several Sources? Putting it All Together With a Monte Carlo Simulation
Michele Walsh,  University of Arizona,  mwalsh@u.arizona.edu
Patricia Herman,  University of Arizona,  pherman@email.arizona.edu
Put broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in their inputs. For campaigns offering free nicotine replacement therapy (NRT), there is often an increase in cessation efforts by smokers; a proportion of that increase (and of those who would have tried to quit anyway) that obtains the free NRT; an increase in the proportion of those efforts that are successful at the end of the intervention; and an increase in the proportion of those who remain successful at longer-term follow-up (the measured effect of the program). In addition to overall administrative costs, the costs of these programs depend on the number of smokers who must be screened for eligibility, the number receiving free NRT vouchers, and the number who actually redeem those vouchers. Each of these inputs is uncertain. Monte Carlo simulation allows an estimate of program cost-effectiveness that takes these uncertainties into account.
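To make the mechanics concrete, here is a minimal Monte Carlo sketch in Python; the distributions, parameter values, and cost figures are illustrative assumptions, not the campaign data analyzed in the paper.

# Illustrative Monte Carlo sketch: propagate uncertainty in program inputs
# to a distribution of cost per additional long-term quit. All distributions
# and parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000

smokers_screened = rng.normal(20_000, 2_000, n_sims)   # uncertain program reach
p_redeem_voucher = rng.beta(40, 60, n_sims)            # share redeeming free NRT
p_quit_end = rng.beta(25, 75, n_sims)                  # quit rate at end of intervention
p_still_quit = rng.beta(50, 50, n_sims)                # share still quit at follow-up

admin_cost = 250_000                                    # fixed administrative cost
cost_per_screen = 5.0
cost_per_nrt_course = 120.0

nrt_users = smokers_screened * p_redeem_voucher
long_term_quits = nrt_users * p_quit_end * p_still_quit
total_cost = admin_cost + smokers_screened * cost_per_screen + nrt_users * cost_per_nrt_course
cost_per_quit = total_cost / long_term_quits

print(f"median cost per long-term quit: ${np.median(cost_per_quit):,.0f}")
print(f"90% interval: ${np.percentile(cost_per_quit, 5):,.0f} to ${np.percentile(cost_per_quit, 95):,.0f}")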
Threshold Analysis: How Much Reduced Exposure Does it Take to Make a Secondhand Smoke Media Campaign Worthwhile?
Crystal Schemp,  University of Arizona,  csg@email.arizona.edu
Patricia Herman,  University of Arizona,  pherman@email.arizona.edu
Every year, millions are spent by tobacco control programs on media campaigns to prevent smoking initiation, to promote cessation, and to reduce exposure to secondhand smoke. Because of their wide exposure and the number of concurrent program components, the effect of these campaigns is difficult to determine. Although other cost-based evaluations require that effects be measured prior to (or in parallel with) the cost analysis, a threshold analysis uses campaign cost and the dollar value of smoking reduction to provide a lower limit on the effects needed for the program to be cost-effective. This threshold value can often indicate, without further evaluation, a high likelihood of cost-effectiveness (the lower limit is so low that it is highly likely actual measured effects will exceed that value) or a high likelihood that the program is not cost-effective (the lower limit is so high as to preclude the likelihood that actual effects will exceed that value).
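The arithmetic behind a threshold analysis is simple; the sketch below uses hypothetical figures (both the campaign cost and the assumed dollar value per unit of exposure reduction are placeholders) to show how the break-even effect level is derived.

# Minimal threshold-analysis sketch: given a media campaign's cost and an
# assumed dollar value per unit of effect (e.g., per person-year of reduced
# secondhand smoke exposure), compute the smallest effect that would make
# the campaign cost-effective. All figures are hypothetical.

campaign_cost = 1_500_000        # dollars spent on the media campaign
value_per_unit_effect = 300.0    # assumed dollar value of one unit of exposure reduction

threshold_effect = campaign_cost / value_per_unit_effect
print(f"Break-even requires at least {threshold_effect:,.0f} units of exposure reduction")

# Interpretation: if even conservative estimates of the campaign's impact clearly
# exceed (or fall short of) this threshold, further effect measurement may not be
# needed to judge cost-effectiveness.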

Session Title: Systems Methodologies for Evaluation
Multipaper Session 810 to be held in Mencken Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Bob Williams,  Independent Consultant,  bobwill@actrix.co.nz
Evaluating Participants' Conceptual Changes Around Complex Program Outcomes: Measuring Thinking Around Integrated Food Systems
Presenter(s):
Rita O'Sullivan,  University of North Carolina, Chapel Hill,  ritao@email.unc.edu
John O'Sullivan,  North Carolina A & T State University,  johno@ncat.edu
Jeni Corn,  Technology in Learning SERVE,  jocorn@serve.org
Abstract: Measuring systems change and the way individuals change in their systems thinking poses very interesting and important challenges for program evaluators. As the complexity of desired program effects accelerates into systems outcomes versus discrete changes in individual indicators, evaluators need more tools to track these involved outcome changes. This paper presents the methodology and results of measuring one group of program participants' changes in how they think about integrated food systems as a result of their participation in a leadership development program. Evaluators used Inspiration, a concept-mapping software program, to develop pre- and post-program depictions of participants' concepts of integrated food systems. Differences in the concept maps were then assessed using a rubric developed by the evaluators.
Independent Science Review in Watershed Management Projects: What Insights Does Critical Systems Heuristics Provide in Understanding the Quest for Best Available Science?
Presenter(s):
Mary A McEathron,  University of Minnesota,  mceat001@umn.edu
Abstract: 'Scientific evidence' and 'scientifically based evaluation' are phrases spoken often these days, conjuring up images of a nearly singular path to increased clarity and incisive decision-making. The intersection of science and policy in watershed management projects tells a murkier, more twisted tale. Interviews conducted with natural resource agency staff and independent science panelists who were involved in recent reviews on the Columbia and Missouri Rivers revealed significant pockets of disconnection and uncertainty among the scientific disciplines needed to inform decisions in large ecosystems. Frequently occurring themes included boundary issues, which shifted and divided levels of expertise, roles, and the assessment of knowledge. This paper will focus on how the author used Ulrich's Critical Systems Heuristics to guide and inform her analysis. In addition to the insights gained, the author will discuss the successes and challenges she encountered in applying this (new to her) approach from the Systems world.
How Do Evaluation Concepts Travel? Using Social Network Analysis to Trace Knowledge Transfer in the International Program for Development Evaluation Training
Presenter(s):
Rahel Kahlert,  University of Texas, Austin,  kahlert@mail.utexas.edu
Robert Kahlert,  University of Vienna,  robert.kahlert@gmail.com
Abstract: The increasing demand for political accountability in governments has propelled the diffusion and institutionalization of evaluation in the public sector on a global scale, especially from Western societies to the developing world. Social network analysis can aid in the detection of information flow and knowledge transfer. With regard to the evaluation field, it can help determine whether the diffusion of evaluation theories and practices tends to flow in a particular direction. In particular, social network analysis can be used to trace the diffusion of evaluation models across cultural boundaries. The paper utilizes information from the international setting of the International Program for Development Evaluation Training (IPDET). IPDET has trained approximately 1,500 evaluators from more than 100 countries since its inception in 2001. A social network will be constructed from trainers, trainees, and their respective institutional, professional, and academic affiliations. By annotating the social network with the conceptualizations communicated, we can track the shifts these conceptualizations undergo as they are transported between individuals and institutions with different socio-cultural contexts and political needs, generating a network of a very different topography.
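A toy sketch of the kind of network construction described above, written in Python with the networkx library; the actors, affiliations, and edges are invented for illustration and are not IPDET data.

# Build a small affiliation network (people linked to institutions) and rank
# nodes by degree centrality as a crude indicator of likely diffusion conduits.
# All names and edges below are hypothetical.
import networkx as nx

G = nx.Graph()
affiliations = [
    ("trainer_A", "World Bank"),
    ("trainer_A", "Carleton University"),
    ("trainee_1", "Ministry of Finance, Country X"),
    ("trainee_1", "Carleton University"),   # met trainer_A through IPDET
    ("trainee_2", "NGO Y"),
    ("trainee_2", "World Bank"),            # later consultancy link
]
G.add_edges_from(affiliations)

centrality = nx.degree_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")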

Session Title: North Carolina Cooperative Extension's Program Development Institute: A Multi-faceted, Multi-level, Multi-disciplinary Training Approach
Demonstration Session 811 to be held in Edgar Allen Poe Room  on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Lisa Guion,  North Carolina State University,  lisa_guion@ncsu.edu
Abstract: This demonstration session will provide participants with a training outline for each day of the intensive North Carolina Cooperative Extension Program Development Institute. The development, organization, and structuring of the institute will be discussed. A list of teaching tools that were used each day of the institute will also be shared. Finally, the use of some of the tools found to be most effective will be demonstrated. Participants will have the opportunity to ask the presenter logistical questions that could aid them in implementing a similar training in their state. Extension Evaluators and Specialists charged with providing training on program development will find this session informative.

Session Title: Empowerment Evaluation Communities of Learners: From Rural Spain to the Arkansas Delta
Multipaper Session 812 to be held in Carroll Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
David Fetterman,  Stanford University,  profdavidf@yahoo.com
Discussant(s):
Stewart I Donaldson,  Claremont Graduate University,  stewart.donaldson@cgu.edu
Abstract: Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination. It employs both qualitative and quantitative methodologies. It also knows no national boundaries: it is being applied in countries ranging from Brazil to Japan, as well as Mexico, the United Kingdom, Finland, New Zealand, Spain, and the United States. These panel members highlight how empowerment evaluation is being used in rural Spain and the Arkansas Delta. In both cases, they depend on communities of learners to facilitate the process. The third member of the panel highlights a web-based tool to support empowerment evaluation that crosses all geographic boundaries.
Learning From Empowerment Evaluation in Rural Spain: Implications for the European Union
Jose Maria Diaz Puente,  Polytechnic University, Madrid,  jmdiazpuente@gmail.com
At the present time, thousands of evaluation studies are carried out each year in the European Union to analyze the efficacy of European policies and seek the best way to improve the programs being implemented. Many of these studies relate to programs applied in the rural areas that occupy up to 80% of the territory of the EU and include many of its most disadvantaged regions. The results of the application of empowerment evaluation in the rural areas of Spain show that this evaluation approach is an appropriate way to foster learning in the rural context. The learning experience related to capacity building among stakeholders and the evaluation team, the evaluator's role and advocacy, the impact of the empowerment evaluation approach, and its potential limitations, difficulties, and applicability to rural development in the EU.
Empowerment Evaluation: Transforming Data Into Dollars and the Politics of Community Support in Arkansas Tobacco Prevention Projects
Linda Delaney,  Fetterman and Assoc,  linda2inspire@earthlink.net
David Fetterman,  Stanford University,  profdavidf@yahoo.com
Empowerment evaluation is being used to facilitate tobacco prevention work in the State of Arkansas. The University of Arkansas's Department of Education is guiding this effort under the Minority Initiated Sub-Recipient Grants Office. Teams of community agencies are working together with individual evaluators throughout the state to collect tobacco prevention data and turn it into meaningful results in their communities. They are also using the data collectively to demonstrate how a collective effort can be effective. The grantees and evaluators are collecting data about the number of people who quit smoking and translating that into dollars saved in terms of excess medical expenses. This has caught the attention of the Black Caucus and the legislature. Lessons learned about transforming data and the politics of community support are shared.
Empowerment Evaluation and the Web: Interactive Getting to Outcomes (iGTO)
Abraham Wandersman,  University of South Carolina,  wandersman@sc.edu
iGTO (Interactive Getting to Outcomes) is an Internet-based approach to Getting to Outcomes. It is a capacity-building system, funded by NIAAA, that is designed to help practitioners reach results using science and best practices. Getting to Outcomes (GTO) is a ten-step approach to results-based accountability. The ten steps are: Needs/Resources, Goals, Best Practices, Fit, Capacity, Planning, Implementation, Outcomes, CQI, and Sustainability. iGTO plays the role of quality improvement/quality assurance in a system that has tools, training, technical assistance, and quality improvement/quality assurance. With iGTO, organizations use empowerment evaluation approaches to assess process and outcomes and promote continuous quality improvement. Wandersman et al. highlight the use of iGTO in two large state grants to demonstrate the utility of this new tool.

Session Title: The Follow-up Monitoring and Outcome Survey for National Research and Development Projects in New Energy and Industrial Technology Development Organization (NEDO)
Multipaper Session 813 to be held in Pratt Room, Section A on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Takahisa Yano,  New Energy and Industrial Technology Development Organization,  yanotkh@nedo.go.jp
Abstract: NEDO is Japan's largest public R&D management organization, promoting various areas of technology. It is very important for funding agencies such as NEDO to monitor the post-project activities of project participants toward the practical application of R&D achievements, to assess the impact of national R&D projects, and to review previous post-project evaluations in light of post-project activities in order to provide feedback to improve R&D management. From these points of view, this session will discuss the relationship between ex-post evaluation scores and post-project activities. To identify the management factors important to the success or failure of post-project activities, a unique procedure we call the 'Follow-up Chart' will be discussed. Finally, to understand the outcomes derived from national R&D, a case study using various indicators will also be discussed.
Study of the Correlation Between Ex-post Evaluation and Follow-up Monitoring of National Research and Development (Part I)
Hiroyuki Usuda,  New Energy and Industrial Technology Development Organization,  usudahry@nedo.go.jp
Momoko Okada,  New Energy and Industrial Technology Development Organization,  okadammk@nedo.go.jp
NEDO has conducted intermediate evaluations and ex-post evaluations since FY2001. In addition to ex-ante (pre-project) evaluations of new R&D projects, NEDO also started follow-up monitoring and evaluation of completed projects in FY2004. In the intermediate and ex-post evaluation work, projects are assessed and evaluated in the following four categories: 'Purpose and strategy', 'Project management', 'R&D achievements', and 'Prospects for practical applications and other impacts'. NEDO has also tracked participating organizations whose post-project activities have reached practical application. In this study, we will try to verify the validity of ex-post evaluations by comparing their results with those of follow-up monitoring.
Study for the Important Management Factors Based on Follow-up Monitoring Data (Part II)
Setsuko Wakabayashi,  New Energy and Industrial Technology Development Organization,  wakabayashistk@nedo.go.jp
Tsutomu Kitagawa,  New Energy and Industrial Technology Development Organization,  kitagawattm@nedo.go.jp
Takahisa Yano,  New Energy and Industrial Technology Development Organization,  yanotkh@nedo.go.jp
Kazuaki Komoto,  New Energy and Industrial Technology Development Organization,  kohmotokza@nedo.go.jp
Since FY2004, NEDO has used follow-up monitoring to track the various post-project activities of organizations that participated in past NEDO projects. NEDO has conducted questionnaire surveys and interviews with companies whose post-project activities have reached the commercialization stage or have been discontinued. NEDO has also specifically tried to identify important management factors by using a Follow-up Chart in order to improve its project management. In this study, several important management points that NEDO obtained from the FY2005 and FY2006 results will be discussed.
Approach for the Understanding of Outcomes Derived from National Research and Development of Energy Conservation Project (Part III)
Kazuaki Komoto,  New Energy and Industrial Technology Development Organization,  kohmotokza@nedo.go.jp
Tsutomu Kitagawa,  New Energy and Industrial Technology Development Organization,  kitagawattm@nedo.go.jp
Takahisa Yano,  New Energy and Industrial Technology Development Organization,  yanotkh@nedo.go.jp
Setsuko Wakabayashi,  New Energy and Industrial Technology Development Organization,  wakabayashistk@nedo.go.jp
In this study, in order to develop a method for understanding outcomes, the NEDO energy conservation project aimed at developing an advanced industrial furnace was adopted as the case, because NEDO has contributed to the development of technology in this field. Five indicators, such as the number of burners, market size, and the effect on CO2 reduction, were used to show the outcomes, and the data obtained using these indicators are presented in this study.

Session Title: Ethics in Evaluation: At the Crossroads of Principle to Practice
Skill-Building Workshop 814 to be held in Pratt Room, Section B on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Linda Schrader,  Florida State University,  lschrade@mailer.fsu.edu
Presenter(s):
Michael Morris,  University of New Haven,  mmorris@newhaven.edu
Abstract: Evaluators strive to uphold ethical principles in their practice of evaluation while working in challenging and diverse contexts. The AEA Guiding Principles for Evaluators delineate professional skills and behaviors that embody a set of values and ethics for effective evaluation practice. How does an evaluator infuse these ethical guidelines into practice? This skill-building workshop will present an overview of the Guiding Principles and examine how these ethical principles can be employed to prevent, and respond effectively to, ethical dilemmas encountered as an evaluation unfolds. Ethical issues regarding conflicts around stakeholders' priorities, differing cultural values, and varied expectations for the evaluation will be discussed. Participants will have opportunities to apply these concepts to a case study and explore strategies for addressing ethical challenges encountered in evaluation planning and implementation. It is expected that participants will acquire reflective insights and knowledge about the application of ethical principles to enhance their practice.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Teen Interactive Theater Education: Evaluation of a Youth Development Approach to the Reduction of Risk Behaviors
Roundtable Presentation 815 to be held in Douglas Boardroom on Saturday, November 10, 1:50 PM to 3:20 PM
Presenter(s):
Ruth Carter,  University of Arizona,  rcarter@cals.arizona.edu
Daniel McDonald,  University of Arizona,  mcdonald@cals.arizona.edu
Abstract: TITE is an innovative youth development program that engages young people through the use of experiential activities on pertinent topics in today's society and employs a cross-age teaching strategy. The evaluation of the program adds to the knowledge base on the effectiveness of youth development approaches, particularly among underrepresented populations (63% of respondents identify themselves as Hispanic and 21% as Native American). This roundtable discussion will show how evaluation results have been used to inform the development and implementation of the TITE curriculum. Issues relating to the evaluation will be discussed, including obtaining human subjects approval and working with alternative high schools and youth detention centers.
Roundtable Rotation II: What Exactly are Life Skills Anyway?
Roundtable Presentation 815 to be held in Douglas Boardroom on Saturday, November 10, 1:50 PM to 3:20 PM
Presenter(s):
Benjamin Silliman,  North Carolina State University,  ben_silliman@ncsu.edu
Daniel Perkins,  Pennsylvania State University,  dfp102@psu.edu
Abstract: This roundtable discusses key issues in defining, measuring, and evaluating life skills in youth. Presenters illustrate these issues with a critique of two models and two measures of life skills. Discussion focuses on issues relevant to theory and practice: How much of a life skill such as goal setting, leadership, or communication is learned (and how much is inborn)? Who is the best judge of life skills? What methods work best for documenting growth in life skills? (How) should measurement methods be integrated with educational methods? When and how often should life skills be evaluated? (How) do individual differences affect mastery of life skills?

Session Title: Learning From the Evaluation of Voluntary Environmental Partnership Programs
Multipaper Session 816 to be held in Hopkins Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Katherine Dawes,  United States Environmental Protection Agency,  dawes.katherine@epa.gov
Abstract: Many of today's environmental challenges cannot be addressed by regulation alone. They require a broader mix of solutions: regulatory programs, information, education, technical assistance, grants, and voluntary partnership programs. Partnership programs have been the subject of many evaluations and reviews. EPA Partnership Programs play an important role in improving air quality and energy efficiency and reducing solid waste. They enable flexible, collaborative, market-driven solutions that can deliver measurable environmental results. EPA began using Partnership Programs in the early 1990s as a unique, non-regulatory approach to environmental management. Recently, EPA Partnership Programs have received increasing scrutiny from internal and external audiences who question whether these programs help the Agency achieve its environmental goals. This multi-paper session will discuss efforts underway to: 1) coordinate measurement and evaluation efforts for these programs; 2) discuss lessons learned from evaluating two Partnership Programs; and 3) begin a dialogue about evaluating the next generation of programs.
The Lay of the Land: "Voluntary" Partnership Programs at the United States Environmental Protection Agency
Laura Pyzik,  United States Environmental Protection Agency,  pyzik.laura@epa.gov
To help coordinate Partnership Program efforts across the Agency, NCEI has developed a Partnership Programs Coordination (PPC) Team. This team ensures that 'Voluntary' Partnership Programs are well designed, measured, branded, and managed, and present a coherent image to external partners. In recent years, EPA Partnership Programs have been the subject of a number of internal and external evaluations and reviews. Consequently, the PPC Team has taken the lead for coordinating efforts to equip Partnership Programs with the necessary measurement and evaluation tools and trainings. The PPC Team will discuss: 1) what was learned from past evaluative inquiries; 2) how program evaluations have helped or challenged their coordination efforts; and 3) ongoing efforts to measure and evaluate EPA Partnership Programs, including the development of measurement and evaluation guidelines, training, and an Information Collection Request to allow Partnership Programs to collect information on outcomes.
Measuring the Effectiveness of Environmental Protection Agency's Indoor Air Quality Tools for Schools Program
Dena Moglia,  United States Environmental Protection Agency,  moglia.dena@epa.gov
IAQ TfS is a voluntary, flexible, multi-media program that stresses teamwork and collaboration to help schools and school districts identify, correct, and prevent indoor air pollution and other environmental problems so they can provide safe, healthy learning environments for children. The IAQ TfS Kit, a central part of the program, helps schools develop an IAQ management plan and shows them how to carry out practical action to improve IAQ at little or no cost using in-house staff to conduct straightforward activities. Presenters will discuss lessons learned regarding: (1) the impact of a comprehensive IAQ management plan on a school's indoor environment; (2) the resources associated with implementing IAQ management plans; and (3) how well an IAQ management plan can reduce environmental asthma triggers. The results will shed light on program outcomes, the impacts of the IAQ TfS program, and the effectiveness of the approach to meeting EPA's Clean Air goals.
Evaluating the Hospitals for a Healthy Environment (H2E) Program's Partner Hospitals' Environmental Improvements
Chen Wen,  United States Environmental Protection Agency,  wen.chen@epa.gov
The H2E Program, a voluntary collaboration among the EPA, the American Hospital Association, the American Nurses Association, and Health Care Without Harm, has operated since 1998. The H2E program provides a variety of technical assistance tools to help Partner facilities reduce their environmental impact, including fact sheets, a website, monthly teleconference training calls, and peer-to-peer listservs. Among its goals, H2E seeks the virtual elimination of mercury-containing waste from the healthcare sector by FY2005. An evaluation was conducted to determine how successful the H2E program has been in achieving the aforementioned goal, as well as a 33% reduction of healthcare waste by FY2005 and a 50% reduction by FY2010. This paper discusses lessons learned regarding: 1) how H2E can best help Partner hospitals collect environmental information that will help both the hospitals and EPA; and 2) which H2E activities are most effective in encouraging hospitals to make environmental improvements.
Evaluating the Next Generation of Environmental Protection Agency (EPA) Partnership Programs: Where Do We Go From Here?
Laura Pyzik,  United States Environmental Protection Agency,  pyzik.laura@epa.gov
In the spirit of learning and information exchange, the last session of this panel involves a dialogue between panelists and conference participants to respond to three key questions regarding existing and future evaluations of environmental Partnership Programs: 1) What have existing evaluations of EPA Partnership Programs taught us about the design and effectiveness of Partnership Programs? 2) How do we use what was learned from past evaluations to improve the ability of Partnership Programs to achieve environmental results? 3) What questions still need to be answered regarding the evaluation designs and data collection methods that are most appropriate for evaluating environmental Partnership Programs?

Session Title: Does Aid Evaluation Work?: Reducing World Poverty by Improving Learning, Accountability and Harmonization in Aid Evaluation
Multipaper Session 817 to be held in Peale Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Michael Scriven,  Western Michigan University,  scriven@aol.com
Abstract: This session will discuss some fundamental and deeply embedded issues in aid evaluation, including the relatively low achievement of learning, limitations in accountability, and lack of harmonization among donors and between governments and donors. Clearly these issues are inter-related, and we see them as not merely technical but deeply structural. The first presentation will examine these issues through a comparative study of nine development projects in Africa and Asia, and it will propose a newly refined framework of cost-effectiveness analysis for organizational learning and accountability. The second presentation will focus on the structural factors and arrangements that have led to serious positive bias and disinterest among stakeholders, through a systematic review of 31 evaluation manuals and their application. The third presentation will focus on the harmonization of current aid practices by reviewing the current tools and their actual uses, and it will identify challenges and opportunities and propose some possible solutions.
Reducing World Poverty by Improving Evaluation of Development Aid
Paul Clements,  Western Michigan University,  paul.clements@wmich.edu
Although international development aid bears a heavy burden of learning and accountability, the way evaluation is organized in this field leads to inconsistency and positive bias. This paper first discusses structural weaknesses in aid evaluation. Next it presents evidence of inconsistency and bias from evaluations of several development projects in Africa and India. While this is a limited selection of projects, the form of the inconsistency and bias indicates that the problems are widespread. Third, the paper shows how the independence and consistency of evaluations could be enhanced by professionalizing the evaluation function. Members of an appropriately structured association would have both the capacity and the power to provide more helpful evaluations. In order to better support learning and accountability, these independent and consistent evaluations should be carried out using a cost-benefit framework.
Lessons Learned from the Embedded Institutional Arrangement in Aid Evaluation
Ryoh Sasaki,  Western Michigan University,  ryoh.sasaki@wmich.edu
In the past, several meta-evaluations have been conducted to answer a long-held question: Does aid work? However, the general public still doubts aid's effectiveness and asks the same question today. One reason people still face this question is that there may be serious flaws in current aid evaluation practice. In this paper, I will present the issues identified through a systematic review of 31 evaluation guidelines developed by aid agencies (multilateral and bilateral) and related review reports. I conclude that the identified issues are 'deeply embedded in institutional arrangements' rather than technical. They are: (i) dominance of the agency's own value criteria under the name of a 'mix of all stakeholders' values'; (ii) dependency of evaluators under the title of external consultant; (iii) modifiability of evaluation reports; and (iv) logical flaws in aid evaluation. Some fundamental suggestions are made in closing.
Hope for High Impact Aid: Real Challenges, Real Opportunities and Real Solutions
Ronald Scott Visscher,  Western Michigan University,  ronald.s.visscher@wmich.edu
The Paris Declaration demands mutual accountability and harmonization among all parties involved in international aid. The extreme challenge in Afghanistan is one of many examples of why the fate of freedom and democracy now depends on this. Yet the secret is out: succeeding in international development is tough, and everyone now knows failure is the norm. Evaluators must recognize this situation as a historic opportunity to assume independence, "speak truth to power", and demand support for high-quality evaluation. By taking on this stronger role, monitoring and evaluation (M&E) will have the opportunity to meet its promise of inspiring real progress. But delivering mutual accountability, learning, and coordination will still be required. How will M&E deliver on these heightened demands? This presentation will help evaluators learn how new and improved M&E tools designed to meet these complex demands can be integrated into practical solutions for each unique context.

Session Title: Advocacy, Community Mobilization and Systems Change: Assessing Unique Strategies to Impact Community Health
Multipaper Session 818 to be held in Adams Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Zoe Clayson,  Abundantia Consulting,  zoeclay@abundantia.net
Follow the Money: Assessing Clinic Consortia Policy Advocacy Capacity
Presenter(s):
Annette Gardner,  University of California, San Francisco,  annette.gardner@ucsf.edu
Claire Brindis,  University of California, San Francisco,  claire.brindis@ucsf.edu
Astrid Hendricks,  The California Endowment,  ahendricks@calendow.org
Abstract: Clinic financial stability, or having the financial ability to meet the needs of underserved patient populations, is a struggle in the face of budget constraints and an increasing number of uninsured. The 19 consortia grantees funded by The California Endowment successfully increased their financial and operational stability and that of their member clinics. From 2001 to 2005, grantees increased the amount of funding secured on behalf of clinics and consortia by a total of $505 million. Funding secured by grantees that was attributable to grant-funded policy advocacy and fund development activities increased from $74 million in 2001 to $152.2 million in 2005. Multiple strategies were used simultaneously to achieve clinic financial stability, including engaging in policy advocacy to maintain or increase funding, diversifying funding, and developing relationships with private-sector funders. This paper describes the changes in funding secured from 2001 to 2005, including the amount, type, and allocation, as well as the strategies undertaken by clinic consortia to secure this funding.
Community Mobilization: Framing the Strategy and Evaluating Results
Presenter(s):
Roberto Garcia,  Abundantia Consulting,  rng17@cvip.net
Paul Speer,  Vanderbilt University,  paul.speer@vanderbilt.edu
Zoe Clayson,  Abundantia Consulting,  zoeclay@abundantia.net
Abstract: Community mobilization is a health promotion strategy that involves organizing community members to support and implement preventive health programs. Due to its reputation for being inexpensive, culturally sensitive, and rooted in existing community infrastructure, community mobilization is an increasingly common component of US and international health programs. While the theoretical underpinnings of community mobilization are well-documented, it remains broadly defined in practice and therefore difficult to evaluate. This presentation will explore the various theoretical and practical definitions of community mobilization and identify practices that are critical to its efficacy. It will begin with a review of existing literature about community mobilization strategies in the US and abroad and highlight salient evaluation findings. An evidence-based definition that incorporates the highly political nature of community mobilization and advocacy work will be offered and tested through a case study involving the Agricultural Worker Health Initiative, which the authors are currently evaluating.
Critical Components of Using a Systems Approach to Effect Environmental Asthma Policies and Reduce Health Disparities
Presenter(s):
Mary Kreger,  University of California, San Francisco,  mary.kreger@ucsf.edu
Claire Brindis,  University of California, San Francisco,  claire.brindis@ucsf.edu
Dana Hughes,  University of California, San Francisco,  dana.hughes@ucsf.edu
Diane Manuel,  The California Endowment,  dmanuel@calendow.org
Diana Lee,  National Community Development Institute,  dlee@ncdinet.com
Annalisa Robles,  The California Endowment,  arobles@calendow.org
Marion Standish,  The California Endowment,  mstandish@calendow.org
Lauren Sassoubre,  University of California, San Francisco,  lauren.sassoubre@ucsf.edu
Abstract: Policy outcomes were categorized into three areas: indoor air quality in schools, indoor air quality in homes, and outdoor air quality. Systems change concepts and examples are discussed for community, regional, and statewide use to affect policy changes. These include: designing synergistic systems that align values, activities, and relationships; developing collaborative planning and consensus building; creating capacity using education, refining assumptions, and using data; creating leaders, advocates, and champions; employing communication strategies to enhance capacity and leadership development; designing change efforts that are sensitive to community cultural and environmental factors; instituting and reinforcing appropriate feedback loops; assessing positive and negative unanticipated consequences; addressing root causes of problems; designing changes at appropriate systems levels; addressing relevant interrelationships; leveraging resources for sustainable funding; and shifting power among stakeholders. Employing systems change concepts to evaluate this type of community-oriented environmental policy initiative provides valuable tools and feedback to the participants and the funder (The California Endowment).

Roundtable: Challenges of Evaluating a Multi-disciplinary, Multi-agency, School Based, Safe Schools/Healthy Students Project
Roundtable Presentation 819 to be held in Jefferson Room on Saturday, November 10, 1:50 PM to 3:20 PM
Presenter(s):
Carl Brun,  Wright State University,  carl.brun@wright.edu
Betty Yung,  Wright State University,  betty.yung@wright.edu
Cheryl Meyer,  Wright State University,  cheryl.meyer@wright.edu
Carla Clasen,  Wright State University,  carla.clasen@wright.edu
Katherine Cauley,  Wright State University,  katherine.cauley@wright.edu
Kay Parent,  Wright State University,  kay.parent@wright.edu
Abstract: The roundtable presenters make up the team responsible for evaluating a three-year, multi-agency Safe Schools/Healthy Students grant implemented district-wide in a K-12 school district. The evaluators represent health administration, nursing, psychology, and social work. The presenters will discuss the coordination of implementing over 25 instruments to measure more than 17 long-term outcomes, including the required GPRA outcomes for the federally funded project. The evaluators also assisted staff from 15 programs to measure short-term outcomes. The presenters will discuss several challenges to the evaluation, including "program creep," changes in program implementation from the original grant, and an aversion to measuring outcomes. The staff implementing the grant-funded interventions constantly sought advice from the evaluators on how to implement the programs. As evaluators, we constantly needed to clarify our role while also providing assistance in utilizing the evaluation data. The presenters will also discuss a tool used to monitor the complex evaluation plan.

Session Title: Capacity Factors in Prevention and New Tobacco Control Strategies and Evaluations
Multipaper Session 820 to be held in Washington Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Robert LaChausse,  California State University, San Bernardino,  rlachaus@csusb.edu
Capacity Factors Influencing Evaluation Scope Among Prevention Coalitions
Presenter(s):
Julianne Manchester,  The Ohio State University,  manchester.12@osu.edu
James W Altschuld,  The Ohio State University,  altschuld.1@osu.edu
Abstract: Community coalitions are collectives (education, law enforcement, schools, and other sectors) engaged in needs assessment, resource identification, action planning, program implementation and evaluation to reduce and/or prevent substance abuse among youth and adults. Findings will be presented from a study that investigated the influence of finances, multiple sector representation, and the relations among those sectors on the scope of coalition evaluation plans. Coalitions should measure combinations of process (number of deliverables), intermediate (self report substance use questionnaires) and outcome indicators (public records such as arrests) to be accountable to funders and community stakeholders. The study shows that evaluation plans are primarily affected by funding level. The nature and number of involved community sectors and internal relationships that occur among members appear to be important but secondary to evaluation processes.
Safe Schools/Healthy Students Project Directors' Perspectives on Evaluation and Evaluators
Presenter(s):
Jenifer Cartland,  Childrens' Memorial Hospital, Chicago,  jcartland@childrensmemorial.org
Holly Ruch-Ross,  Independent Consultant,  hruchross@aol.com
Maryann Mason,  Children's Memorial Hospital, Chicago,  mmason@childrensmemorial.org
William Donohue,  Michigan State University,  donohue@msu.edu
Abstract: This paper reports preliminary findings on SS/HS project directors' perspectives on and experience with evaluation, drawn from a larger study that surveyed and interviewed 20 evaluator-project director pairs. The sample size is small, but the results are instructive in terms of the challenges and opportunities for evaluators in developing working relationships with program leaders. There appears to be a gap in education and experience between evaluators and project directors, with the evaluators, on average, being more highly educated and experienced than their program peers. The two groups also deal with very different work environments (public schools for project directors; universities and consulting firms for evaluators). While there is broad agreement between project directors and evaluators about many particular aspects of evaluation, there is some difference in perception about the broader goals of evaluation. These differences appear to be related to expectations, evaluator style, and project stage.
Evaluating School-based Tobacco Prevention Initiatives: Challenges and Strategies
Presenter(s):
Patricia Lauer,  Rocky Mountain Center for Health Promotion and Education,  patl@rmc.org
Rebecca Van Buhler,  Rocky Mountain Center for Health Promotion and Education,  beckyvb@rmc.org
Abstract: In recent years, many states have used revenues from tobacco taxes to fund various types of tobacco prevention efforts. This paper addresses challenges and strategies for evaluating multi-site school-based tobacco prevention initiatives. A non-profit organization had state funding to administer mini-grants to over 50 schools and 15 districts to conduct tobacco prevention activities and programs. Evaluation challenges occurred at two levels of inquiry: (1) processes and outcomes of grantees' programs, and (2) influences of technical assistance and training on grantees' capacities to implement tobacco prevention efforts. Evaluation strategies included providing evaluation guidelines and tools to grantees and collecting data from multiple sources in multiple formats.
Ready, Set, ACTION: Evaluating the Multi-site Effectiveness Study of the Adolescent Cessation of Tobacco: Independent of Nicotine (ACTION) Adolescent Tobacco Cessation Program in Tobacco-growing Communities
Presenter(s):
Laurie Stockton,  Pacific Institute for Research and Evaluation,  lstockton@pire.org
Al Stein-Seroussi,  Pacific Institute for Research and Evaluation,  stein@pire.org
Paul Brodish,  Pacific Institute for Research and Evaluation,  pbrodish@pire.org
Abstract: The Pacific Institute for Research and Evaluation used an experimental design to evaluate the relative effects of two tobacco cessation initiatives on tobacco-using high school youth in fourteen tobacco-producing communities in Kentucky, North Carolina, and Ohio. Schools were randomly assigned to receive either the experimental condition or the comparison video condition. The primary outcome measures were youths' biochemically confirmed 3-day abstinence and self-reported abstinence from tobacco use (up to 30 days) at three data collection points: baseline, post-test, and three-month follow-up. Hierarchical linear modeling is being used to test for differences in abstinence from tobacco use between the experimental and comparison groups, while controlling for the individual differences that existed prior to the intervention and intracluster correlations for the nested conditions. Analyses are currently underway to examine differences between baseline and post-test. The results for the three-month follow-up data are forthcoming and will be available by June 2007.
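For readers unfamiliar with multilevel analysis, the sketch below illustrates the general idea of modeling students nested within schools using a random intercept. It uses simulated data, hypothetical variable names, and a continuous stand-in outcome rather than the study's binary, biochemically confirmed abstinence measure (which would typically call for a multilevel logistic model), so it should not be read as the authors' actual analysis.

```python
# Minimal sketch (simulated data, hypothetical names): a two-level model with a
# random intercept for school, in the spirit of the HLM analysis described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_per = 14, 40
school = np.repeat(np.arange(n_schools), n_per)
condition = np.repeat(rng.integers(0, 2, n_schools), n_per)   # school-level assignment
baseline_use = rng.normal(size=n_schools * n_per)             # pre-intervention covariate
school_effect = rng.normal(0, 0.3, n_schools)[school]         # induces intracluster correlation

# Continuous stand-in for the abstinence outcome, used only for illustration.
abstinence_index = (0.2 * condition - 0.1 * baseline_use + school_effect
                    + rng.normal(0, 1, n_schools * n_per))

df = pd.DataFrame(dict(abstinence_index=abstinence_index, condition=condition,
                       baseline_use=baseline_use, school=school))

# Random intercept for school accounts for the nesting of students within schools.
model = smf.mixedlm("abstinence_index ~ condition + baseline_use", df, groups=df["school"])
print(model.fit().summary())
```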
Combining Qualitative and Quantitative Methods in the Evaluation of Health Prevention Programs Targeting Hard-to-reach Populations
Presenter(s):
Violeta Dominguez,  University of Arizona,  violetdl@email.arizona.edu
Abstract: As part of the evaluation of a statewide tobacco control program, a study was designed to gather data about service coverage, awareness, and utilization from a hard-to-reach, high-risk population. Face-to-face interviews were conducted with low-income clients, health care providers, and staff at community health centers. Further, survey data were collected from the Medicaid health plans serving this population. This paper will present results from an analysis that used qualitative and quantitative methods in synergy and compared information gathered from different types of respondents. This evaluation highlights areas of convergence and discrepancy among these data sources and how they can help inform interventions. It also provides important suggestions for potential collaborations between different agencies serving the same target population.

Session Title: Put That in Writing: Communicating Evaluation Results in a Way That Promotes Learning and Use
Panel Session 821 to be held in D'Alesandro Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Toni Freeman,  The Duke Endowment,  tfreeman@tde.org
Abstract: This session will address the importance of evaluators and foundations working together to effectively present and report information. Knowledge management is playing a more significant role for foundations and nonprofits, as is the presentation and reporting of useful evaluation products. This session will address three emerging issues regarding evaluation reporting: communicating evaluation results with the audience in mind, developing quality evaluation products in a timely way to maximize their usefulness, and clarifying the ownership of evaluation data and reports. The panelists will also discuss how evaluators and foundation staff can work together to produce products that meet their respective standards and create learning communities of practice.
Communicating Evaluation Results With Your Audience in Mind
David Scheie,  Touchstone Center for Collaborative Inquiry,  dscheie@tcq.net
It is important to be clear about the intended audience(s) for the evaluation, and the earlier in the process they can be identified, the better. Report length, wording, and level of technical information must be tailored to the audience. Someone may be aware that materials should be addressed to particular audiences, but not know how to do it. David will address the elements and qualities entailed in writing for different audiences and discuss ways evaluation can create learning communities of practice. Prior to founding Touchstone, David worked 17 years on staff at Rainbow Research, Inc. He has a strong commitment to creating evaluation reports that help advance the work, which requires clarity about the intended audience for the evaluation.
Some Things Just Happen: Writing Isn't One of Them
Mary Grcich Williams,  Lumina Foundation for Education,  mwilliams@luminafoundation.org
Writing is a time consuming process, and that fact may not always be recognized in planning and budgeting. Sometimes report quality suffers if the evaluator's timeline does not accommodate editing and re-crafting. Mary will discuss how to integrate evaluation reports with communication pieces that will emerge from the report, including drafts and final product. She will also discuss how evaluators and professional communicators can work together to produce a product that meets the quality standards of both fields. As part of her role as a foundation evaluation director, Mary routinely works with grantees, program officers and communicators to produce evaluation products that address multiple needs. Previously, she operated her own evaluation consulting practice and served as director of a state educational division.
Who Owns Your Evaluation Report?
Toni Freeman,  The Duke Endowment,  tfreeman@tde.org
As the field emerges, foundations and evaluators face numerous questions about the ownership of evaluation data and reports, such as: Who 'owns' a report? What language could be included in grant agreements or engagement letters to clarify ownership? What policies make sense and are fair to the evaluator and the grant-making organization? What ethical and legal issues are involved in translating reports into other communications products? Toni will discuss these issues and share examples of reporting policies regarding evaluators' use of the data as well. Toni has led the evaluation efforts at the foundation since 1999. She routinely conducts searches for evaluators and manages projects involving various evaluation methods and products. Two of her greatest challenges have been finding the right evaluator for a project and working with consultants on evaluation reports. Previously, Toni served in an executive position responsible for reporting organizational results to varied stakeholders, including elected officials.

Session Title: Of Mice and Men: How to Conduct a Random Assignment Study
Panel Session 822 to be held in Calhoun Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Carrie Markovitz,  Abt Associates Inc,  carrie_markovitz@abtassoc.com
Abstract: Random assignment, known as the gold standard in research, is more commonly being implemented in evaluations of social programs and initiatives. However, these types of studies present unique challenges in study design, implementation, and the recruitment of subjects. In this session we will review some of the topics around designing and implementing a successful random assignment study. We will present examples of current random assignment studies, discuss the unique challenges involved in each type of evaluation, and offer best practices and recommendations for conducting random assignment studies in different settings.
The Benefit Offset National Demonstration (BOND)
Larry Orr,  Abt Associates Inc,  larry_orr@abtassoc.com
Dr. Larry L. Orr will discuss the design of the Benefit Offset National Demonstration (BOND), a test of new approaches to helping Social Security Disability Insurance (SSDI) beneficiaries and applicants return to work. BOND will use random assignment to measure the impacts of alternative combinations of financial incentives, health insurance and health supports, and consumer-directed employment supports on employment and earnings. Dr. Orr is Chief Economist at Abt Associates and specializes in the design and implementation of randomized field trials of public programs.
The National Random Assignment Study of Youth Corps
Carrie Markovitz,  Abt Associates Inc,  carrie_markovitz@abtassoc.com
Dr. Carrie E. Markovitz will present on the design and implementation of a random assignment evaluation of Youth Corps being conducted for the Corporation for National and Community Service. This study is a 30-month impact evaluation of youth corps, which are programs that combine intensive community service with job training and education. Because there is no one model for youth corps, the study sample includes a variety of organization types, presenting a unique challenge for designing and implementing random assignment. Dr. Markovitz is the Project Director for this study and has been involved in numerous random assignment studies. She is a statistician with 11 years of experience designing and conducting quantitative and qualitative evaluations of youth and workforce development programs.
Impact Evaluation of Upward Bound's Increased Emphasis on Higher-Risk Students
Ryoko Yamaguchi,  Abt Associates Inc,  ryoko_yamaguchi@abtassoc.com
Ryoko Yamaguchi will discuss the design of the Upward Bound evaluation, which randomizes 3,600 high school students within 90 Upward Bound programs to investigate whether the program has an impact on student outcomes. When conducting random assignment in a school-based setting, there are multiple constituents to train, inform, and gain approval from, including students, parents, teachers, administrators, the school board, and program directors. In addition, school-based settings present unique challenges, such as possible differential attrition, shared variance and nestedness of the data, and contamination, that are critical for researchers to examine. Dr. Yamaguchi has 15 years of experience in education, both as a researcher/evaluator and as a special education teacher. She has conducted numerous school-based experimental and quasi-experimental studies, and has expertise in at-risk youth issues.

Session Title: Cultural Issues in Multiethnic Evaluation
Multipaper Session 823 to be held in McKeldon Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Tamara Bertrand,  Florida State University,  tbertrand@admin.fsu.edu
Discussant(s):
Emiel Owens,  Texas Southern University,  owensew@tsu.edu
Does Revising the Language on a Survey Capture Non-native English Speakers' Opinions More Accurately?
Presenter(s):
Sally Francis,  Walden University,  sally.francis@waldenu.edu
Eric Riedel,  Walden University,  eric.riedel@waldenu.edu
Abstract: The purpose of this paper is to explore the impact of revising the language on a course evaluation instrument so that the form is more easily understood by non-native English speakers. Data were compared on parallel questions from new course evaluation surveys designed for non-native English speakers and original surveys designed for native English speakers. The sample included 36 course sections using the original survey and 32 course sections using the new survey from an online bachelor's completion program offered jointly by American and Latin American universities. The data were compared using common courses and weighted so the samples were statistically equivalent. Independent t-tests showed that students who received the new survey rated their online course instructor significantly lower than those who received the original survey. A factor analysis showed that students who took the new survey perceived their instructor along more factors than students with the original survey.
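As a generic illustration of the comparison described above (not the authors' analysis), the sketch below runs an independent-samples t-test on simulated instructor ratings from two survey forms; the sample sizes, means, and variable names are invented.

```python
# Minimal sketch (simulated data): comparing mean instructor ratings on parallel
# items from the original and revised survey forms with an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
original_form = rng.normal(loc=4.3, scale=0.6, size=360)  # ratings on the original survey
revised_form = rng.normal(loc=4.1, scale=0.6, size=320)   # ratings on the revised survey

# Welch's t-test does not assume equal variances across the two forms.
t_stat, p_value = stats.ttest_ind(original_form, revised_form, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```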
Evaluating the Effectiveness of a 'Small Learning Community' Project on Inner-City Students
Presenter(s):
Deirdre Sharkey,  Texas Southern University,  owensew@tsu.edu
Emiel Owens,  Texas Southern University,  owensew@tsu.edu
Abstract: The purpose of the present study is to evaluate the effectiveness of a "Small Learning Community" project on a low achieving inner-city school. The CIPP Evaluation Model was used as an assessment tool during this study. The CIPP model is a comprehensive framework for guiding evaluations of programs, projects, personnel, products, institutions, and systems. It is focused on program evaluations, particularly those aimed at effecting long-term, sustainable improvements.
Diversity in the Evaluation Field: Expanding the Pipeline for Racial/Ethnic Minorities
Presenter(s):
Dustin Duncan,  Harvard University,  dduncan@hsph.harvard.edu
Abstract: Racial/ethnic diversity in the evaluation field is important. Among other benefits, increasing the racial/ethnic diversity of people entering the field of evaluation is a strategy to increase the cultural competency among evaluators in general. At present, however, still too few racial/ethnic minorities are in the evaluation field. This paper will discuss strategies for expanding the pipeline of racial/ethnic minorities in the evaluation field, including creating evaluation-training programs specifically for racial/ethnic minority students and working with Historically Black Colleges & Universities. The present paper is from the perspective of a graduate student; he is presently participating in the American Evaluation Association/Duquesne University Graduate Education Diversity Internship Program. In the paper, he draws on his experiences through this internship as well as other evaluation experiences.
The Case Against Cultural Competence
Presenter(s):
Gregory Diggs,  University of Colorado, Denver,  shupediggs@netzero.com
Abstract: Cultural Competence: “A systematic, responsive inquiry that is actively cognizant, understanding, and appreciative of the cultural context in which the evaluation takes place; that frames and articulates the epistemology of the evaluative endeavor; that employs culturally and contextually appropriate methodology; and that uses stakeholder generated, interpretive means to arrive at the results and further use of the findings.” “Competence” has been operationalized as a goal or developmental process, instead of as a set of skills, knowledge, and abilities. Dr. Diggs argues that the term “cultural competence” as used by AEA is misleading and misguided, poorly representing the basic concepts of culture and competence. CC advocates often use the term as if it were interchangeable with important concepts like cultural awareness and cultural responsiveness. How will the merit or worth of an evaluator's alleged cultural competence be certified? Or that the methods used are “culturally appropriate”? Who among us can make such judgments with validity?

Session Title: Building Capacity for Cross-cultural Leadership Development Evaluation
Think Tank Session 824 to be held in Preston Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Multiethnic Issues in Evaluation TIG
Presenter(s):
Kelly Hannum,  Center for Creative Leadership,  hannumk@leaders.ccl.org
Discussant(s):
Claire Reinelt,  Leadership Learning Community,  claire@leadershiplearning.org
Emily Hoole,  Center for Creative Leadership,  hoolee@leaders.ccl.org
Kelly Hannum,  Center for Creative Leadership,  hannumk@leaders.ccl.org
Abstract: We will focus on three areas important to building evaluation capacity with regard to cross-cultural evaluations of leadership development: 1) How is the role of an evaluator similar or different when working across cultures? What capacities do evaluators need to work effectively across different cultures? How can evaluators build their capacity and/or compensate for not having certain knowledge or skills? 2) What are key issues related to data collection? What are different expectations about stakeholder involvement? How can evaluators better understand possible risks associated with stakeholder involvement? What forms of data collection should be used? How can evaluators manage the logistics of language and distance? 3) What are key issues related to data analysis and interpretation? How can one detect measurement invariance with small samples? How can evaluators be sensitive to differences of meaning with regard to concepts of leadership? What are examples of process used to include stakeholders in the interpretation of data?

Session Title: Building Organizational Capacity for Self-evaluation
Demonstration Session 825 to be held in Schaefer Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Trilby Smith,  Metis Associates,  tsmith@metisassoc.com
Kathleen Agaton,  Metis Associates,  kagaton@metisassoc.com
Abstract: This session will demonstrate how self-evaluation is an effective and empowering method of learning for organizations. It will show how organizations can use self-evaluation as a tool to facilitate ongoing learning, guide decision-making, and measure progress towards their goals. The session will also demonstrate how self-evaluation complements and supports an external evaluation. Presenters will address the principles of self-evaluation, and steps organizations can take to become self-evaluating. They will also discuss the training and support needed to build organizational capacity for effective self-evaluation. To illustrate how community organizations are learning through self-evaluation, the presenters will discuss lessons learned from the Jim Casey Youth Opportunities Initiative, a national foundation whose mission is to help youth in foster care make successful transitions to adulthood. With a number of demonstration sites, the work of the Initiative highlights how different organizational and community contexts influence the self-evaluation process.

Session Title: Comparing Apples to Apples: Applying the Rasch Measurement Framework to a Statewide Parent Survey
Demonstration Session 826 to be held in Calvert Ballroom Salon B on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Special Needs Populations TIG
Presenter(s):
Kathleen Lynch,  Virginia Commonwealth University,  kblynch@vcu.edu
William Fisher,  Avatar International Inc,  wfisher@avatar-intl.com
Abstract: This demonstration session will introduce evaluators to Rasch measurement concepts and methods, illustrating their application to a statewide survey of parents' perceptions of schools' efforts to foster parent involvement. The USDOE Office of Special Education Programs requires each state to develop and submit an Annual Progress Report on their State Performance Plan for special education. To assist states, the National Center for Special Education Accountability Monitoring (NCSEAM) has developed and made available a set of surveys that were constructed within the Rasch measurement framework. Presenters will cover the basics of Rasch methodology; its usefulness for survey development, data analysis, and standard setting; and how to interpret results to inform program improvement. Using both lecture and interactive formats, presenters will engage the audience in thinking through ways to address the challenges inherent in trying to communicate to a broad audience a radically different way of thinking about measurement.
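For orientation, a minimal sketch of the dichotomous Rasch model follows; the item difficulties and person measure are invented values chosen only to illustrate the logistic form on which Rasch-constructed surveys, such as those described above, are built.

```python
# Minimal sketch: the dichotomous Rasch model places persons (ability theta) and
# items (difficulty b) on the same logit scale; the probability of endorsing an
# item is a logistic function of the difference.  All values are illustrative.
import numpy as np

def rasch_prob(theta, b):
    """Probability of a positive response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

item_difficulties = np.array([-1.5, -0.5, 0.0, 0.8, 1.6])  # easy ... hard to endorse
respondent_measure = 0.4                                    # one respondent's location (logits)

probs = rasch_prob(respondent_measure, item_difficulties)
expected_raw_score = probs.sum()
print(probs.round(2), f"expected raw score = {expected_raw_score:.2f}")
```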

Session Title: Case Studies of Evaluation Use
Multipaper Session 827 to be held in Calvert Ballroom Salon C on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Emmalou Norland,  Institute for Learning Innovation,  norland@ilinet.org
Initial Results From “Beyond Evaluation Use”: A Study of Involvement and Influence in Large, Multi-site National Science Foundation (NSF) Evaluations
Presenter(s):
Jean King,  University of Minnesota,  kingx004@umn.edu
Lija Greenseid,  University of Minnesota,  gree0573@umn.edu
Kelli Johnson,  University of Minnesota,  johns706@umn.edu
Frances Lawrenz,  University of Minnesota,  lawrenz@umn.edu
Stacie Toal,  University of Minnesota,  toal0002@umn.edu
Boris Volkov,  University of Minnesota,  volk0057@umn.edu
Abstract: This paper presents preliminary findings of a three-year study of the use and influence of four NSF-funded program evaluations. It examines the relationship between stakeholder involvement and the long-term impact of the evaluations on project staff, the science, technology, engineering, and mathematics (STEM) community, and the evaluation community. The project answers three overarching research questions: 1. What patterns of involvement exist in large, multi-site evaluations? 2. To what extent do different levels of involvement in program evaluations result in different patterns of evaluation use and influence? 3. What evaluation practices are most directly related to enhancing the influence of evaluations? Initial results suggest that people are involved in these large-scale program evaluations in a number of ways and that, not surprisingly, involvement can affect use. Given the diverse factors affecting the complex processes involved, our data suggest that the mechanisms promoting evaluation use and influence are far more difficult to pinpoint.
Case Studies of Evaluation Use and Influence in a School District
Presenter(s):
John Ehlert,  University of Minnesota,  jehlert@comcast.net
Jean King,  University of Minnesota,  kingx004@umn.edu
Abstract: The study's purpose was to determine the ways people used the results of specific evaluations and how these evaluations influenced district practice over time. It focused on three evaluations selected because they were completed between 1999 and 2004, were participatory in nature, and had sufficient individuals remaining in the district to be interviewed, including a study of the implementation of state graduation standards, a study of the Special Education Department, and a study of middle school programming. Two primary methods were used: interviews with participants in three studies and document analysis of related meeting notes, reports, information from the district website, etc. The results document how future decisions included evaluation content and the processes that created structures for use. They demonstrate the extent to which external forces dramatically affected the use and influence of these evaluations, with implications for the concepts of evaluation use and influence more generally.
Process Use and Organizational Learning: A Different Perspective: The Case of the World Bank
Presenter(s):
Silvia Paruzzolo,  World Bank,  sparuzzolo@worldbank.org
Giovanni Fattore,  Bocconi University,  giovanni.fattore@unibocconi.it
Abstract: Although interest in the process use of evaluation and in organizational learning has grown substantially in recent years, studies that investigate non-evaluators' perspectives on this issue are almost absent. Different authors maintain that evaluation appears to be most useful, especially as an organizational learning 'tool', when it is conducted using participatory approaches. The question the present study addresses is: Would program practitioners involved in an evaluation of their program as primary stakeholders agree with this view? And why? The guiding idea of the present paper is that if evaluations are meant to be an organizational learning system, they need to be viewed as such by the primary users, i.e., the ones who initiate, and benefit from, the learning process. Using a mixed methods approach, the authors will explore this issue in the context of the World Bank, where interest in program evaluation is definitely gaining momentum.
Building Learning Communities With Evaluation Data Teams: A Collective Case Study of Six Alaskan School Districts
Presenter(s):
Edward McLain,  University of Alaska, Anchorage,  ed@uaa.alaska.edu
Susan Tucker,  Evaluation and Development Association,  sutucker@sutucker.cnc.net
Diane Hirshberg,  University of Alaska, Anchorage,  hirshberg@uaa.alaska.edu
Alexandra Hill,  University of Alaska,  anarh1@uaa.alaska.edu
Abstract: Building the capacity of school-based "data teams" to use various improvement-oriented evaluation methodologies across diverse contexts has not been studied systematically. USDE's Title II Teacher Quality Enhancement (TQE) initiative is charged with enhancing teacher quality in high-need schools, which are experiencing a worsening crisis in attracting (and retaining) quality teachers. We discuss the development and growth of data teams in districts serving 60% of Alaska's students. Working with faculty from University of Alaska-Anchorage (UAA), data teams composed of teachers and principals form a professional learning community for restructuring and reculturing (Fullan, 2000; Sparks, 2005). Operating since spring 2005, these teams facilitate data-enhanced question framing, planning, and decision-making regarding student performance, instructional strategies, teacher retention, resource management, and systemic support grounded by geography of need and cultural responsiveness. We address learnings regarding the challenges of partnering, plateau effects and honoring diversity along with success strategies for data team development and institutionalization.

Session Title: Rating Tools, Causation, and Performance Measurement
Multipaper Session 828 to be held in Calvert Ballroom Salon E on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Government Evaluation TIG
Chair(s):
David J Bernstein,  Westat,  davidbernstein@westat.com
Causation in Federal Government Evaluations
Presenter(s):
Mina Zadeh,  United States Department of Health and Human Services,  mm_hz@yahoo.com
Abstract: This presentation will address “causation” in Federal government evaluations. Do high quality, high impact evaluations have to address causation in order to be deemed effective? This presentation will delve into the issue of causation in evaluations. It will use several high impact evaluations within the US Department of Health and Human Services to demonstrate that evaluations can be effective without addressing the causes of vulnerabilities that are identified within a program.
Selecting Measures for External Performance Accountability: Standards, Criteria, and Purpose
Presenter(s):
James Derzon,  Pacific Institute for Research and Evaluation,  jderzon@verizon.net
Abstract: Beginning with Congressional passage of the Government Performance and Results Act of 1993 (GPRA) and culminating in President Bush's implementation of the Office of Management and Budget's Program Assessment Rating Tool (PART), federal agencies are required to use performance measures to determine their overall effectiveness. As a management information technique borrowed from the management-by-objective (MBO) philosophy of total quality management, performance measures contribute to an information system by providing a narrow view of some critical aspects of a program's performance. However, it has proven difficult for many grant programs and agencies addressing human needs to demonstrate PART effectiveness. Using examples from an evaluation of the Americorps*NCCC PART performance measures, an instrument developed for evaluating performance measures for external reviewers will be introduced and criteria for evaluating performance measures will be distinguished from indicators useful for in-house program monitoring and program evaluation.
Evaluating an Evaluation Process: Lessons Learned From the Evaluation of the National Flood Insurance Program
Presenter(s):
Marc Shapiro,  Independent Consultant,  shapiro@urgrad.rochester.edu
Abstract: The NFIP underwrites 5.4 million policies worth over $1 trillion in assets, with greater average annual outlays than Social Security. It is also one of the country's most complicated governmental programs, providing the public good of risk information, managing floodplain risks, and filling a market void by offering insurance the private market is reluctant to provide. It affects governments ranging from small communities to the national government, involves sometimes conflicting goals, and affects an array of stakeholders. Evaluating the 30-year-old program was a complicated six-year process involving $5 million and 14 studies. In addition, toward the end of the evaluation, the hurricane seasons of 2004 and 2005 heightened attention to the previously obscure program, creating the potential for politicizing findings. This presentation discusses lessons learned, including utilizing stakeholders, shaping client expectations, aligning program and evaluation goals, exploiting policy windows, and more.

Session Title: Articulating Authentic and Rigorous Science Education Evaluation Through the Inquiry Science Instruction Observation Protocol (ISIOP)
Think Tank Session 829 to be held in Fairmont Suite on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Daphne Minner,  Education Development Center Inc,  dminner@edc.org
Discussant(s):
Daphne Minner,  Education Development Center Inc,  dminner@edc.org
Neil Schiavo,  Education Development Center Inc,  nschiavo@edc.org
Abstract: A significant challenge in K-12 evaluation is the limited availability of valid and reliable instruments targeting science teaching. In many instances, evaluators forgo existing instruments and develop their own at great costs of time, money, and, sometimes, scientific rigor. The development of the Inquiry Science Instruction Observation Protocol (ISIOP) has been launched in response to these demands. ISIOP supports evaluators in determining the extent of scientific inquiry-supporting instructional practices present in middle grade science classrooms. This Think Tank session addresses questions of how evaluators select and use an observation protocol, like ISIOP. Small group discussion will center on questions of: 1. What factors do and should evaluators consider when selecting an observation protocol? 2. What guidance do evaluators need to use an observation protocol?

Session Title: Summer School Ain't So Bad, But Evaluating It Can Be: Lessons Learned From Outcome Evaluations of Summer Programs
Panel Session 830 to be held in Federal Hill Suite on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Elizabeth Cooper-Martin,  Montgomery County Public Schools,  elizabeth_cooper-martin@mcpsmd.org
Discussant(s):
Cindy Tananis,  University of Pittsburgh,  tananis@pitt.edu
Abstract: School districts commit significant effort and resources to summer programs. In the following panel, the presenters will share their experiences in evaluating a variety of such programs, including academic and arts programs for elementary students and remedial and enrichment courses for middle school students. Specifically, each panelist will reflect on a particular type of outcome that is useful for evaluating a summer program and present its advantages and challenges, plus lessons learned, based on using that outcome in their evaluations. As available, panelists will present evaluation design, data collection instruments, analytical methods, and results. Members will discuss the potential and limitations of the following approaches: course data and standardized test scores from the following academic year, stakeholder survey data, cumulative effects, and scores from pre-session and post-session tests. The panel's goal is to share lessons learned in the field as an invitation to discussion about outcome evaluations of summer programs.
The Use of Next Year's Course Enrollment, Test Scores, and Course Grades in an Evaluation of Summer Intervention and Enrichment Courses for Middle School Students
Elizabeth Cooper-Martin,  Montgomery County Public Schools,  elizabeth_cooper-martin@mcpsmd.org
Rachel Hickson,  Montgomery County Public Schools,  rhickson731@yahoo.com
Middle schools in Montgomery County Public Schools offered two types of summer courses. Focus classes in mathematics were designed to increase the number of students participating in advanced mathematics classes. Intervention courses, in both mathematics and English, were intended to help students achieve grade-level requirements in these subjects. The proposed outcomes of interest were end-of-course grades, standardized test results (both scores and passing rates), and enrollment in above-grade-level courses (for focus classes only). Although clearly important outcomes for the school system, these measures raised several issues: lag time between the course and the scores or grades, relevance of outcomes to the summer program content, and heterogeneity in courses taken by middle school students. Other lessons learned related to identifying an appropriate comparison group of students and using thresholds to measure student improvement.
The Use of Multiple Stakeholder Surveys in the Evaluation of Summer Programs for Elementary Students
Nyambura Maina,  Montgomery County Public Schools,  susan_n_maina@mcpsmd.org
Julie Wade,  Montgomery County Public Schools,  julie_wade@mcpsmd.org
In an effort to gain a better understanding of a program and its effects, we may examine it from different 'angles.' Our evaluation of two summer programs in Montgomery County Public Schools - Extended Learning Opportunities Summer Adventures in Learning (ELO SAIL) and 21st Century Community Learning Centers (21st CCLC) -uses survey data from multiple stakeholders. The ELO SAIL is a four-week program for students K-5 in Title I schools. The goal is to alleviate students' summer learning loss and to help schools maintain Adequate Yearly Progress. The 21st CCLC supports the ELO SAIL academic program by providing cultural arts and recreation activities. Ongoing evaluations of the programs employ a range of outcome measures, including surveys from administrators, teachers, artists, media specialists, recreation providers, parents, and students. This discussion will address effective administration of multiple stakeholder surveys, response rate, reliability, corroborating findings with other data sources, and consequential validity.
Evaluation of Cumulative Effects of a Summer Elementary Education Program
Scot McNary,  Montgomery County Public Schools,  scot_w_mcnary@mcpsmd.org
One type of summer educational program that school districts implement is designed to prevent summer learning loss. Some programs allow students to return each summer. Benefits from attendance should be detectable as reduced summer learning loss. However, the cumulative effect of a summer program is more difficult to evaluate. Design decisions made during a recent evaluation of a summer elementary education program are discussed. Challenges include defining and measuring effects, particularly with respect to establishing good candidates for comparison to attendees, defining cumulative attendance, and selecting appropriate outcomes. Lessons learned pertain to improving future evaluation efforts, as follows: 1) rely on recent methodological advances in matching for observational studies, 2) ensure outcome measures have sufficient validity for the intended use, 3) construct a priori definitions of cumulative effect.
Evaluating Outcomes of a Summer Learning Program Using Non-Randomized Comparison Group Pretest-Posttest Quasi-Experimental Design
Helen Wang,  Montgomery County Public Schools,  helen_wang@mcpsmd.org
This summative evaluation employs a non-randomized comparison group pretest-posttest quasi-experimental design to examine academic benefits from the Extended Learning Opportunities Summer Adventures in Learning (ELO SAIL) program in Montgomery County Public Schools in Maryland. The summer program provides four weeks of services to incoming kindergarten through grade 5 students from Title I schools, aimed at alleviating summer academic loss and promoting continued progress in learning. The present discussion addresses the strengths of the selected evaluation design as a more scientifically rigorous and ethically practical approach for evaluating the summer program. Challenges and solutions involved in the evaluation, including the development of realistic evaluation questions, the use of relevant outcome measures, and difficulties in assessment administration, are also discussed.
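As a generic illustration of one common way to analyze a non-randomized comparison group pretest-posttest design (not the district's actual analysis), the sketch below estimates an ANCOVA-style adjusted program effect on simulated data with hypothetical variable names.

```python
# Minimal sketch (simulated data): regress posttest scores on group membership
# while adjusting for pretest scores, a standard analysis for this design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
attended = rng.integers(0, 2, n)                    # 1 = summer program attendee (hypothetical)
pretest = rng.normal(50, 10, n) - 2 * attended      # groups differ somewhat at baseline
posttest = pretest + 3 * attended + rng.normal(0, 5, n)

df = pd.DataFrame(dict(posttest=posttest, pretest=pretest, attended=attended))
fit = smf.ols("posttest ~ pretest + attended", data=df).fit()
print(fit.params["attended"], fit.pvalues["attended"])  # pretest-adjusted program effect
```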

Session Title: Forging a Strong Link Between Research and Science Policy for Air Quality Decisions
Panel Session 831 to be held in Royale Board Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Research, Technology, and Development Evaluation TIG and the Environmental Program Evaluation TIG
Chair(s):
Dale Pahl,  United States Environmental Protection Agency,  pahl.dale@epa.gov
Abstract: This panel discusses (1) national ambient air quality standards that protect public health and the environment under the Clean Air Act and (2) the roles of research, synthesis, and evaluation in helping inform decisions about these standards. Presentations describe a. An overview of ambient air quality standards and the use of research and science to inform decision-making about these standards; b. A paradigm for federal particulate matter research and its use to plan and coordinate research across federal agencies; c. The value of this paradigm to improve understanding of relationships between sources of atmospheric contaminants, air quality, human exposure to air pollution, human health, and risk assessment; and d. Synthesis and evaluation of new scientific knowledge relevant to decision-making about ambient air quality standards. The presentations illustrate the value of the paradigm for federal particulate matter research in forging a strong link between research and science policy on air quality issues, including the knowledge base for air quality standards, compliance, and the public health impacts of air quality decisions.
An Overview of National Ambient Air Quality Standards
Ron Evans,  United States Environmental Protection Agency,  evans.ron@epa.gov
The presentation communicates an overview of national ambient air quality standards that protect public health and the environment under the Clean Air Act and the use of research and science to inform decision-making for these standards.
A Paradigm for Federal Particulate Matter Research
James Vickery,  United States Environmental Protection Agency,  vickery.james@epa.gov
In the United States, a number of federal agencies coordinate particulate matter research to strengthen the link between research and science policy. The federal PM research strategy incorporates a conceptual paradigm to help guide and improve the understanding of the relationships between particulate matter, air quality, human exposure, and human health. This presentation describes the paradigm for particulate matter research.
Relationships Among Atmospheric Contaminants, Air Quality, Human Exposure, and Health
Rochelle Araujo,  United States Environmental Protection Agency,  araujo.rochelle@epa.gov
An understanding of these relationships (among atmospheric contaminants, air quality, human exposure, and health) is essential for developing and applying knowledge to inform decision-making about public health and about compliance. This presentation discusses the theory underlying these relationships.
Synthesis and Evaluation of New Scientific Knowledge
William Wilson,  United States Environmental Protection Agency,  wilson.william@epa.gov
This presentation describes a fundamental step in the review of national ambient air quality standards--the synthesis and evaluation of new scientific knowledge about particulate matter, ambient air quality, exposure, and human health. This periodic and rigorous assessment process, described in the Clean Air Act, characterizes our knowledge and the degree to which uncertainties related to that knowledge remain.

Session Title: Putting it All Together: Integrating Evaluation Components to Create a Comprehensive Statewide Evaluation
Panel Session 832 to be held in Royale Conference Foyer on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Tiffany Comer Cook,  University of Wyoming,  tcomer@uwyo.edu
Discussant(s):
Laura Feldman,  University of Wyoming,  lfeldman@uwyo.edu
Abstract: The University of Wyoming's Survey & Analysis Center (WYSAC) integrated a variety of assessments to evaluate Wyoming's Tobacco Prevention and Control Program and its components. This panel will focus on how WYSAC combined the assessments to create a comprehensive statewide evaluation. Specifically, the panel will discuss the following topics: the collection, analysis, and reporting of data related to the establishment of smoke-free environments in Wyoming; the administration of surveys to measure attitudes concerning tobacco-related policies; and the surveillance of Wyoming's tobacco consumption and prevalence. Ultimately, the panel will elaborate on how WYSAC incorporated these various evaluation components to create a comprehensive statewide evaluation that provides useful information for individual communities and state government.
Administering Surveys to Assess Attitudes
Russ Miller,  University of Wyoming,  russmllr@uwyo.edu
Russ Miller will present on WYSAC's administration of surveys to measure attitudes concerning tobacco-related policies. Mr. Miller has five years of combined professional experience in survey research, call center supervision, and program evaluation. He has worked on numerous projects involving the development and administration of survey instruments, such as Wyoming's Youth Risk Behavior and Prevention Needs Assessment surveys, as well as a nationwide telephone survey for the Department of the Interior's new recreational pass. His presentation will focus on the ability of survey research to indicate citizens' attitudes and how such attitudes contribute to the overall scheme of an evaluation project.
Evaluating Outcomes Related to Prevalence
Shannon Williams,  University of Wyoming,  swilli42@uwyo.edu
Shannon Williams will present on WYSAC's surveillance of the outcomes related to tobacco consumption and prevalence. Ms. Williams has a master's degree and is currently working toward her Ph.D. in Applied Statistics and Research Methods. Previous to her work on the Tobacco Prevention and Control Program, she worked on a report for the Colorado Youth Risk Behavior Survey. Her presentation will focus on how to measure behavior and how such measurements fit into an inclusive evaluation. Ms. Williams will also discuss collecting, analyzing, and reporting prevalence data to monitor program progress.
Analyzing Policy
Tiffany Comer Cook,  University of Wyoming,  tcomer@uwyo.edu
Tiffany Comer Cook will present on WYSAC's collection, analysis, and reporting of data related to the establishment of smoke-free environments in Wyoming. Ms. Comer Cook is the project coordinator for Wyoming's Tobacco Prevention and Control Program Evaluation. In addition to Tobacco Control and Prevention, Ms. Comer Cook has experience with multiple evaluation projects, including evaluations of Wyoming Drug Courts and Wyoming's Prisoner Reentry Program. Her presentation will focus on evaluating existing smoke-free policies and the capacity for evaluation data to promote future policies.

Session Title: Engaging Participants in the Evaluation Process: A Participatory Approach
Multipaper Session 833 to be held in Hanover Suite B on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Arlene Hopkins,  Los Angeles Unified School District,  arlene.hopkins@gmail.com
Participatory Systems Change Evaluation: Involving all Users in All Stages of Systems Change Assessment
Presenter(s):
Dianna Newman,  University at Albany,  dnewman@uamail.albany.edu
Anna Lobosco,  New York State Developmental Disabilities Planning Council,  alobosco@ddpc.state.ny.us
Abstract: Grassroots efforts are yielding changes to service delivery systems. These efforts frequently require or are based on a system evaluation as opposed to a program evaluation. Because of the grassroots nature of these demands, it is becoming increasingly important to include consumers in systems change evaluation. This includes looking at the underlying assumptions, relationships, and connections of service systems as well as the delivery process and expected outcomes. In addition, consumers must be actively involved in the design, implementation, analysis, and use phases of the systems change evaluation process. The Three I Model, an approach to evaluating systems change, provides a framework for documenting these complex change efforts consistent with participatory and empowerment approaches to evaluation. This paper will provide details related to use of this evaluation model in consumer-oriented evaluation along with examples from practice and will address how it complements and adds to participatory and empowerment practices.
Rethinking Participatory Evaluation's Conceptualization: Toward the Development of a “Full-Blown”, Useful Concept
Presenter(s):
Pierre-Marc Daigneault,  Université Laval,  pierre-marc.daigneault.1@ulaval.ca
Steve Jacob,  Université Laval,  steve.jacob@pol.ulaval.ca
Abstract: Participatory evaluation is a generic term that stretches to cover very different realities. Except for a few valuable endeavors towards its conceptualization (Cousins & Whitmore, 1998; Weaver & Cousins, 2004), participatory evaluation remains largely “under-theorized”. Using Goertz's (2006) approach to concepts, we examine and question the current conceptualization of stakeholder involvement in evaluation. We show on the one hand that participatory evaluation is a concept “stuck at the secondary level of conceptualization” that needs to be fully articulated and, on the other, that some of its dimensions need to be rethought. We then put forward what we deem to be an improved, three-level conceptualization of participatory evaluation. Toward this purpose, we developed two concepts, “self managed democratic evaluation” and “technocratic-scientific evaluation”, and a participation index that allows for better differentiation between collaborative approaches. Finally, we succinctly check for the usefulness of this model by applying it to a few selected evaluation approaches.
Participatory Evaluative Action Research (PEAR): Social Learning and Place-based Data as Democratic Practice
Presenter(s):
Annalisa Raymer,  Cornell University,  alr26@cornell.edu
Abstract: How might the practice of evaluation be enacted in framing and designing public goods in a manner that builds public life? This question was the starting point of a multi-year collaborative research engagement, the outcomes of which include a new model of inquiry called Participatory Evaluative Action Research (PEAR). Expressly public-oriented and place-sensitive, PEAR is participatory action research (PAR) with the specific aim of supporting public deliberation and public decision-making through evaluation. Accordingly, it is also democratic evaluation enacted as participatory action research. At the core of PEAR is a vision of social learning and knowledge generation for robust civic life and a vital public realm as the means and medium of healthy democracies and a sustainable future. The presentation covers (1) an overview of the PEAR model, with case illustrations, and (2) the developmental research design that led to PEAR's inception and development.
Hear Us Out: Youth-led Participatory Evaluation in an Urban Community
Presenter(s):
Sherri Lauver,  University of Rochester,  slauver@warner.rochester.edu
Abstract: Youth often feel as if they have little representation in policies that affect their lives. Hear Us Out! is a youth-led participatory evaluation designed to bring youth voice to important community problems facing young people in Rochester, New York. Partners include nineteen students from the Rochester After School Academy and faculty from the University of Rochester. Youth ages 14-18 designed and carried out an evaluation assessing the beliefs and experiences of young people in order to offer a collective “voice” to policy makers. The purpose of the presentation is to examine how this participatory evaluation provides a context for identity development and community engagement among youth; it also describes successes and challenges in the pilot year. Data sources include focus groups, surveys, and observations. Findings suggest that youth gain an emerging identity as “change agent” while continuing to wrestle with alienation and cynicism in their schools and community.

Session Title: Advances and Applications in Using Propensity Scores to Reduce Selection Bias in Quasi-Experiments
Panel Session 834 to be held in Baltimore Theater on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
M H Clark,  Southern Illinois University, Carbondale,  mhclark@siu.edu
Abstract: Quasi-experiments are useful for studies that must be conducted in applied settings where random assignment to treatment groups is not practical. However, a major disadvantage of these designs is that estimates of treatment effects may be biased. Propensity scores, the predicted probability that cases will be in a particular treatment group, are often used to help model and correct for this selection bias. The studies included in this panel present recent findings in propensity score research. The panel will present (a) a comparison of various methods for computing, using, and interpreting propensity scores, and (b) applications of propensity scores to quasi-experiments in which selection into treatment conditions is potentially biased.
A Simulation Study Comparing Propensity Score Methods
Jason Luellen,  Vanderbilt University,  jason.luellen@vanderbilt.edu
Estimates of treatment effects from quasi-experiments are likely biased to some unknown extent due to the nonrandom assignment of study conditions to units, and evaluators are interested in methods for reducing that selection bias. Propensity score methods, which utilize an aggregate of the observed pretreatment covariates to adjust for selection bias, are now a popular option employed when analyzing the data from non-equivalent control group designs. This simulation study compares several methods of estimating propensity scores (logistic regression, classification trees, bootstrap aggregation, boosted regression, and random forests) crossed with several methods of adjusting outcomes using propensity scores (matching, stratification, covariance adjustment, and weighting). The paper is a follow-up to the talk I presented at Evaluation 2006 with additional analyses useful for helping practitioners choose from among the available propensity score methods.
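As a minimal sketch of one cell from the grid of methods described above, assuming entirely hypothetical simulated data: propensity scores are estimated with logistic regression and then used for inverse-propensity weighting; any of the other estimators (e.g., random forests) or adjustments (matching, stratification, covariance adjustment) could be swapped in.
```python
# Illustrative sketch only: simulated data, not from the studies above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))                        # observed pretreatment covariates
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.5 * X[:, 1])))
treated = rng.binomial(1, p_true)                  # nonrandom selection into treatment
y = 2.0 * treated + 1.5 * X[:, 0] + rng.normal(size=n)  # outcome, true effect = 2.0

# Step 1: estimate propensity scores, the predicted probability of treatment.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: adjust with inverse-propensity weights and compare weighted group means.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
naive = y[treated == 1].mean() - y[treated == 0].mean()
ipw = (np.average(y[treated == 1], weights=w[treated == 1])
       - np.average(y[treated == 0], weights=w[treated == 0]))
print(f"naive difference: {naive:.2f}  IPW-adjusted: {ipw:.2f}")
```
A simulation comparison of the kind described would repeat this computation over many generated data sets and tabulate the remaining bias for each estimator-adjustment pair.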
Freshmen Interest Groups: Effects on Academic Success and Retention
Joel Nadler,  Southern Illinois University, Carbondale,  jnadler@siu.edu
M H Clark,  Southern Illinois University, Carbondale,  mhclark@siu.edu
Heather Falat,  Southern Illinois University, Carbondale,  hfalat@siu.edu
Chad Briggs,  Southern Illinois University, Carbondale,  briggs@siu.edu
A Freshmen Interest Group program was examined using first-year students sampled over a three-year period at a Midwestern university. Freshmen Interest Groups are college-level interventions in which students with similar academic interests are housed together and placed in a structured set of pre-chosen classes. The goals of the program are to increase academic performance and retention rates among college freshmen. Since students self-selected into this program, posttest-only treatment effects cannot be assumed to be unbiased. Therefore, statistical adjustments were made to reduce potential selection bias by stratifying on propensity scores. Propensity scores, which are the predicted probabilities of selecting into the program, were computed from several covariates, including personality, previous academic achievement, social skills, and family history. These adjusted results should provide less biased estimates, allowing for stronger causal conclusions than quasi-experiments normally allow.
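As a rough, hypothetical illustration of the stratification step (the scores and outcomes below are simulated, not the study's data), units can be grouped into quintiles of the estimated propensity score and treated-control differences averaged across strata:
```python
# Illustrative sketch only: stratification on propensity score quintiles.
import numpy as np

rng = np.random.default_rng(1)
n = 600
ps = rng.uniform(0.05, 0.95, n)                    # estimated propensity scores
treated = rng.binomial(1, ps)                      # self-selection tracks the score
y = 1.0 * treated + 3.0 * ps + rng.normal(size=n)  # confounded outcome, true effect = 1.0

strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))  # five strata

diffs, sizes = [], []
for s in range(5):
    m = strata == s
    if (treated[m] == 1).any() and (treated[m] == 0).any():
        diffs.append(y[m & (treated == 1)].mean() - y[m & (treated == 0)].mean())
        sizes.append(m.sum())
print("stratified estimate:", round(float(np.average(diffs, weights=sizes)), 2))
```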
Assessing the Success and Attrition of College Students: A University 101 Study
Nicole Cundiff,  Southern Illinois University Carbondale,  karim@siu.edu
M H Clark,  Southern Illinois University, Carbondale,  mhclark@siu.edu
Heather Falat,  Southern Illinois University, Carbondale,  hfalat@siu.edu
Chad Briggs,  Southern Illinois University, Carbondale,  briggs@siu.edu
A first-year experience course at a higher education institution was evaluated to understand the effects of the program. The program's goals are geared toward promoting success at the university through retention, higher grade point averages, and enhanced academic skills. This study examines these variables over a three-year span to assess the effectiveness of the program, using a posttest-only non-equivalent control group design. Because students self-select into the University 101 course, selection bias is assumed to be a problem in evaluating the effectiveness of the program. Therefore, statistical adjustments will be made by matching on propensity scores, which are created by aggregating covariates that we expect to influence students' selection into the program. These covariates include high school GPA, personality, and social skills. It is expected that the propensity score adjustment will provide a less biased estimate of the program effect.
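A minimal sketch of the matching step, under the same caveat that all numbers below are hypothetical rather than University 101 data: each participant is paired with the comparison student whose estimated propensity score is closest, and the mean difference over matched pairs estimates the program effect.
```python
# Illustrative sketch only: nearest-neighbor matching on the propensity score.
import numpy as np

rng = np.random.default_rng(2)
ps_t = rng.uniform(0.3, 0.9, 80)                         # scores, course participants
ps_c = rng.uniform(0.1, 0.8, 400)                        # scores, comparison students
y_t = 0.5 + 2.0 * ps_t + rng.normal(0, 0.3, ps_t.size)   # e.g., first-year GPA
y_c = 0.0 + 2.0 * ps_c + rng.normal(0, 0.3, ps_c.size)

# For each participant, find the comparison student with the closest score.
match = np.abs(ps_c[None, :] - ps_t[:, None]).argmin(axis=1)
print("matched estimate of program effect:", round(float((y_t - y_c[match]).mean()), 2))
```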

Session Title: Unintended Interventions
Panel Session 835 to be held in International Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Melinda Davis,  University of Arizona,  mfd@u.arizona.edu
Abstract: Evaluators measure the strengths and limitations of programs and policies using a broad array of methods. However, even the best-designed investigation can go awry, and study protocols can result in surprising and unintended effects. It is these unintended effects that can inform future research. Non-specific effects of treatment are usually treated as nuisance variables, to be eliminated or at least controlled. However, they can be a rich source of new interventions. A 'failed' study may not be a failure at all if it identifies a new approach for a difficult problem. Vignettes will be presented from a variety of studies: the Consent Form as a potent treatment, useful mistakes in randomization, assessment as intervention, and the unexpected effect of a seemingly minor part of the study protocol. Each demonstrates a novel way to learn from evaluation results: effective interventions may be hidden in the non-specific effects of treatment.
Non-specific Effects of Treatment: Vignettes
Melinda Davis,  University of Arizona,  mfd@u.arizona.edu
Souraya Sidani,  Ryerson University,  s.sidani@utoronto.ca
Discovering serious design flaws at the end of an evaluation is an experience that new and seasoned evaluators share. Every so often, a non-specific effect of treatment turns out to be greater than the effect of the intended study intervention. Study protocols can, in and of themselves, have surprising results, and these non-specific effects of treatment can be a rich source of new interventions. This presentation will provide a series of vignettes, including the Consent Form as a potent treatment and useful mistakes in randomization. Non-specific effects of treatment are often treated as nuisance variables, to be eliminated or at least controlled. Evaluators try to minimize problems arising in recruitment, the composition of the comparison group, the nature of the placebo condition, study assessments, retention, and follow-up procedures. However, we may be able to learn from these unwanted effects. A 'failed' study may not be a failure at all if it identifies a new approach for a difficult condition.
Non-specific Effects of Treatment: Assessments
Andrea Chambers,  University of Arizona,  aschambers@virginia.edu
Melinda Davis,  University of Arizona,  mfd@u.arizona.edu
John Mark,  Stanford University,  jmark@stanford.edu
Asthma symptoms may be related to panic or fear, and children with asthma are at special risk for problems in psychological functioning. We tested the effectiveness of two brief behavioral treatments to reduce anxiety in children with moderate asthma. We hypothesized that the treatments would reduce anxiety and help maintain asthma control while tapering corticosteroids. While there were no significant effects of the treatments, all groups significantly lowered their use of steroids without compromising their asthma symptoms. We hypothesize that the decreases in steroid use were due to a non-specific effect of treatment: the assessment protocol. The effective ingredient appeared to be the very brief monthly visits by a pediatrician to assess each child's pulmonary function. Inhaled corticosteroids carry significant health risks, and brief pulmonary evaluations may be an effective method to reduce their use.
Non-specific Effects of Treatment: Biomarkers
Melinda Davis,  University of Arizona,  mfd@u.arizona.edu
Dan Shapiro,  University of Arizona,  shapiro@u.arizona.edu
At any point in time, the majority of smokers are not actively planning to quit, and most will not make quit attempts without some sort of treatment or prompting. We tested the effectiveness of two brief counseling interventions for smoking cessation in smokers who were not ready to quit. We did not find that either treatment had an advantage. However, there was a non-specific effect of treatment. The study protocol included spirometry, and participants who learned that they had marked decreases in their lung capacity were more likely to reduce their smoking. While spirometry was not the focus of our study, the results are consistent with the effects of catastrophic and life-changing health events. Biomarkers are a useful, although underutilized, technique for encouraging smoking cessation.

Session Title: Evaluating College Access Programs: Evaluation Models and Methods for Different Interventions: Middle School Programs, High School Programs, Summer Bridge Programs, and College Scholarships
Multipaper Session 836 to be held in Chesapeake Room on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the College Access Programs TIG
Chair(s):
Kurt Burkum,  National Council for Community and Education Partnerships,  kurt_burkum@edpartnerships.org
Increasing College Access for Underrepresented Youth: Developing a Comprehensive Evaluation of a Summer Bridge Program
Presenter(s):
Brianna Kennedy,  University of Southern California,  blkenned@usc.edu
Abstract: Out-of-School Time (OST) programs have blossomed in the field of education as ways to supplement students' time in school. While these programs vary in mission and outcomes, the vast majority lack formal evaluation and cannot account for the activities performed. This paper discusses the efforts by one OST program, hosted by a large research university, to create a formal evaluation process that will guide its own development and serve as a template for others. The formal evaluation included the use of a logic model in program planning and implementation, and the use of verifiable measurement methods.
Evaluating College Access Program Effects: A Dosage Model and Perspective
Presenter(s):
Gary Skolits,  University of Tennessee, Knoxville,  gskolits@utk.edu
Abstract: Evaluating college access “project effects” offers special challenges for the evaluator. This paper addresses the unique challenges of using a “dosage” model to determine the project effects of a six-year GEAR UP partnership project that served a single cohort (the class of 2006) for six years. The paper and presentation will address: (1) unique data requirements for dosage analysis; (2) methodological challenges of developing dosage indices; (3) consolidation of a wide array of college access initiatives into a dosage construct; (4) separation of college access project interventions and effects from other school-based initiatives; (5) longitudinal (multi-year) challenges of dosage; (6) application of dosage to the analysis of project outcomes; and (7) strengths and limitations of dosage analysis. This paper is relevant to evaluators investigating college access programs and other school projects that (a) cover a wide array of interventions, (b) are offered alongside other school improvement initiatives, and (c) are not amenable to the establishment of a meaningful comparison group.
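One way to picture the dosage construct described above is as a weighted aggregation of each student's exposure to the project's components. The component names, hours, weights, and scaling below are purely hypothetical illustrations, not the GEAR UP project's actual index.
```python
# Hypothetical sketch of a dosage index; not the project's actual formula.
def dosage_index(hours, weights, cap=100.0):
    """Weighted sum of per-component exposure hours, scaled to a 0-100 index."""
    raw = sum(weights.get(k, 0.0) * min(h, cap) for k, h in hours.items())
    max_raw = sum(weights.values()) * cap
    return 100.0 * raw / max_raw

student_hours = {"tutoring": 40, "mentoring": 12, "campus_visits": 6, "family_workshops": 3}
component_weights = {"tutoring": 1.0, "mentoring": 0.8, "campus_visits": 0.5, "family_workshops": 0.5}
print(f"dosage index: {dosage_index(student_hours, component_weights):.1f}")  # ~19.3
```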
The Detroit Area Pre-college Engineering Program (DAPCEP) National Science Foundation (NSF) Information Technology Experiences for Students and Teachers (ITEST) Project: Embedding Evaluation in Program Experiences
Presenter(s):
Shannan McNair,  Oakland University,  mcnair@oakland.edu
Margaret Tucker,  Detroit Area Pre-College Engineering Program,  mtucker@dapcep.org
Jason Lee,  Detroit Area Pre-College Engineering Program,  jdlee@dapcep.org
Karla Korpela,  Michigan Technological University,  kokorpel@mtu.edu
Abstract: The NSF Information Technology Experiences for Students and Teachers (ITEST) program invests in informal education programs for middle and high school students that are intended to stimulate interest in high technology fields, as well as professional development for teachers that emphasizes IT-intensive STEM subject areas. This presentation will discuss the opportunities and challenges experienced in determining the impact of the DAPCEP (NSF) ITEST program for students and their parents. Students complete pre- and post-program surveys of technology knowledge, skills, and attitudes, take pre- and post-tests of key concepts each semester, and participate in focus group interviews. Parents complete pre- and post-program surveys of technology knowledge, skills, and attitudes, complete a mid-program survey, and participate in focus group interviews. Evaluation findings for the first cohort of 120 students will be discussed along with plans to track students through college.
Evaluating Scholarship Programs: Models, Methods, and Illustrative Findings
Presenter(s):
Gary Miron,  Western Michigan University,  gary.miron@wmich.edu
Abstract: This paper is based on two rather innovative evaluations of scholarship programs that have been conducted by the author. Large differences exist in the design and intent of the two scholarship programs that were evaluated. For this reason, differing models and methods were used for each of the evaluations. While the session will not focus on findings, some illustrative findings will be discussed since they exemplify the methods selected for the evaluations.
