Session Title: On a REMOTE Island: Surviving an Online Graduate Program in Evaluation: Questions, Answers, Final Lessons, and More Questions
Roundtable Presentation 244 to be held in the Boardroom on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG and the Teaching of Evaluation TIG
Presenter(s):
Charles Giuli, Pacific Resources for Education and Learning, giulic@prel.org
Kavita Rao, University of Hawaii, kavitar@hawaii.edu
Abstract: The University of Hawaii and the Pacific Resources for Education and Learning (PREL) offered an NSF-sponsored online master's degree in evaluation practice called the Regional Education Master's Online Training in Evaluation (REMOTE). The required 30 credits of graduate work had to be completed within 2 years. After an initial in-person, 2-week meeting, the remaining 4 semesters were conducted synchronously and asynchronously online. Nineteen professionals from the Pacific began the course in the summer of 2007; eight are likely to finish in the spring of 2009. A presentation at the 2008 AEA conference described the challenges associated with the 2-year completion window; the demands made by students' professional lives; the obligations posed by community and family; inadequate technology; and students' wish for personal contact with fellow students. The 2009 presentation will update this prior discussion with findings from a survey of students and instructors conducted by the authors since then.

Session Title: Do-It-Yourself Evaluation for Small Advocacy and Community Organizing Groups
Panel Session 245 to be held in Panzacola Section F1 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Justin Louie, Blueprint Research & Design Inc, justin@blueprintrd.com
Discussant(s):
Astrid Hendricks, The California Endowment, ahendricks@calendow.org
Abstract: Small advocacy and community organizing groups often do not have the resources to hire outside evaluators to help them track progress and determine the impact of their work. Even with limited resources, however, these groups can build evaluative processes into their ongoing work. In this session, we will examine how one foundation supported the efforts of three community organizing groups to build their internal evaluation capacity, helping the groups answer their own evaluation questions and document progress for their funders. From their example, we will examine what supports, knowledge, and resources are needed for small advocacy and organizing groups to engage in evaluation. We will also present actual tools and strategies, developed by these community organizing groups, to assess constituent leadership development, base building, policy wins, and community impact.
The Role of Evaluators in Helping Small Advocacy and Community Organizing Groups Build Their Evaluation Capacity
Catherine Crystal Foster, Blueprint Research & Design Inc, catherine@blueprintrd.com
Justin Louie, Blueprint Research & Design Inc, justin@blueprintrd.com
Ms. Crystal Foster will discuss how evaluators can provide an evaluation framework and technical support for organizations to build evaluation into their ongoing work. She will present the framework and process used to help the three co-presenting organizations develop and implement their own evaluation plans.
Tracking Leadership Development and Base Building for Community Organizing
Akemi Flynn, People Acting in Community Together, akemiflynn@pactsj.org
Ms. Flynn will discuss how organizations can use online CRM tools to track development of constituents' leadership skills and involvement, as well as expansion of the organization's base.
Documenting Progress in Community Leadership Development and Policy Change
Mary Klein, Peninsula Interfaith Action, marypia@sbcglobal.net
Ms. Klein will discuss how organizations can use surveys, interviews, and archival documents to document progress in developing constituent leaders and in advancing policy change goals.
Developing a Multi-Issue Evaluation Framework for Community Organizers
Adam Kruggel, Contra Costa Interfaith Supporting Community Organization, adam@ccisco.org
Mr. Kruggel will discuss how organizations can build a multi-issue, organization-wide evaluation plan that documents both campaign progress and real-world impact.

Session Title: Evaluating Program Sustainability: A Context for Program Evaluation Over Time
Panel Session 247 to be held in Panzacola Section F3 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Mary Ann Scheirer, Scheirer Consulting, maryann@scheirerconsulting.com
Discussant(s):
Mary Ann Scheirer, Scheirer Consulting, maryann@scheirerconsulting.com
Abstract: One element of context for program evaluation is the point in a program's life cycle at which each specific evaluation study takes place. Evaluation should be used across the full range of a program or project's life cycle, from initial needs assessment and planning to evaluating the sustainability of a program after its initial funding has ended. This panel will focus on methods for evaluating a potential end stage of the program life cycle: whether the program's activities, benefits, or other outcomes are sustained beyond its initial funding. We will present evaluations from several health and human service projects illustrating methods for evaluating sustainability, including the contexts affecting continuity across time. The panel will discuss issues concerning both the methods to evaluate sustainability and what funders might do to foster greater sustainability of their programs.
Where Are They Now? Assessing the Sustainability of Foundation Grants
Karen Horsch, Independent Consultant, khorsch@comcast.net
This presentation will focus on the methodology and lessons learned from conducting evaluations of the sustainability of grant-funded projects of two different health conversion foundations. The presentation will discuss the guiding evaluation questions, an overview of the methodological challenges to assessing sustainability and how they were addressed, and a description of the methodological approach, which included web-based surveys as well as phone interviews.
Methodological Issues in Studying Sustainability of Social Programs
Shimon Spiro, Tel Aviv University, shispi@post.tau.ac.il
Rivka Savaya, Tel Aviv University, savaya@post.tau.ac.il
A number of issues were identified in a survey designed to test a complex predictive model of social program sustainability, administered to key informants in 200 projects. The issues included determining the boundaries of the population of programs to be sampled and defining sustainability in terms of time, size, institutionalization, and the interrelationships among them. Other issues related to the research protocol and instrument. Information collected and stored by foundations was sometimes inadequate, and key informants were not always able to provide information about financial aspects of their project and about developments over time. Furthermore, the attempt to test a complex predictive model with a limited sample of programs proved to be a challenge. We will report on how we dealt with these issues and look at the advantages and disadvantages of a survey of key informants, compared to in-depth case studies (an earlier stage of the same study).
Sustainability Assessment and Targeted Technical Assistance
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
The Missouri Foundation for Health, like other funders, is strongly interested in the sustainability of the programs and organizations it supports. MFH is part of a smaller pool of funders that focus solely on development and expansion of programs rather than providing ongoing support for more mature programming. As such, the ability of an organization to sustain funded programming is a strong interest of the Foundation, given its short cycle of funding. Foundation staff conducted a literature review and then developed and tested an instrument for assessing nonprofits and their proposed programming. This instrument has been integrated into the Foundation's proposal assessment process. The presentation addresses the development of the instrument and the lessons learned from using it to identify issues, which are then addressed with technical assistance for funded programs from grant inception.
Evaluating Program Sustainability in the Public Sector: Context as Structure
Anne Hewitt, Mountainside Associates, hewittan@shu.edu
The number of new social programs introduced locally, regionally, and nationally is staggering, but as many as 40% of all new initiatives are not sustained beyond the first few years. The challenge of creating a sustainable and viable public initiative remains an obstacle for many government agencies. To meet the challenges posed by public-sector settings and environmental parameters, program sustainability evaluation requires integrating political, regulatory, and policy contextual factors within the assessment framework. To highlight this evaluation approach, a case study of a five-year state initiative using three nationally recognized sustainability frameworks as assessment tools is presented. This evaluation study demonstrates the importance of aligning political context factors with standard program monitoring and synthesizes the state actions that enabled continued program sustainability despite major political and economic challenges. Results from this review are shared to serve as a model for similar sustainability assessments in the public arena.

Session Title: Tobacco Control Evaluation Initiatives
Multipaper Session 248 to be held in Panzacola Section F4 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Diana Seybolt, University of Maryland, dseybolt@psych.umaryland.edu
Evaluating a Secondhand Smoke Prevention Toolkit
Presenter(s):
Mary Martinasek, Florida Prevention Research Center, mmartina@health.usf.edu
Moya Alfonso, Florida Prevention Research Center, malfonso@health.usf.edu
Judy Berkowitz, Centers for Disease Control and Prevention, pzbz@cdc.gov
Jason Lind, Florida Prevention Research Center, jlind@health.usf.edu
Abstract: Secondhand smoke is a major risk factor for lung cancer and heart disease, killing over 3,000 and 35,000 nonsmokers per year, respectively. Sabemos: Por respeto - Aquí no se fuma, a community outreach toolkit developed and distributed by the Centers for Disease Control and Prevention's Office on Smoking and Health, was designed to reduce secondhand smoke exposure rates among parenting Latinos in the United States. The program was developed based on focus groups and interviews with the target audience. The purpose of this study was to conduct an implementation evaluation of the Sabemos community outreach toolkit. Individual in-depth interviews (N=7) and observations of implementation were conducted in spring 2009. Results identified which components of the toolkit were implemented, modifications made to the toolkit, suggested modifications, and the perceived impact of the toolkit on members of the target audience. Results will be used to modify the program and improve dissemination of the community outreach toolkit.
A Quasi-experimental Bibliometric Study Comparing the Productivity of the Transdisciplinary Tobacco Use Research Centers (TTURCs) With Tobacco Related R01 Grants
Presenter(s):
Brooke Stipelman, National Institutes of Health, stipelmanba@mail.nih.gov
Lawrence Solomon, National Institutes of Health, solomonl@mail.nih.gov
Annie Feng, National Institutes of Health, fengx3@mail.nih.gov
Kara Hall, National Institutes of Health, hallka@mail.nih.gov
Richard Moser, National Institutes of Health, moserr@mail.nih.gov
Dan Stokols, University of California Irvine, dstokols@uci.edu
David Berrigan, National Institutes of Health, berrigad@mail.nih.gov
James Corrigan, National Institutes of Health, corrigan@mail.nih.gov
Stephen Marcus, National Institutes of Health, marcusst@mail.nih.gov
Glen Morgan, National Institutes of Health, gmorgan@mail.nih.gov
Abstract: The Transdisciplinary Tobacco Use Research Centers (TTURCs) were established as part of a large-scale research initiative funded through the National Cancer Institute (NCI) and are designed to promote transdisciplinary collaborations in tobacco research. As part of an ongoing effort to evaluate the societal and scientific merit of this initiative, a bibliometric study is being conducted to examine the productivity of TTURC researchers. Using a quasi-experimental design, this bibliometric study compares the TTURC publications to a group of traditional tobacco-related R01 grants that were funded for a similar duration of time. A series of quantitative and qualitative bibliometric indicators will be presented, and methodological considerations and lessons learned will also be addressed.
Mental Health and Tobacco Use in the Aftermath of Hurricanes Katrina and Rita
Presenter(s):
Nikki Lawhorn, Louisiana Public Health Institute, nlawhorn@lphi.org
Jenna Klink, Louisiana Public Health Institute, klink@lphi.org
Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
Abstract: In 2005, hurricanes Katrina and Rita devastated Southern Louisiana, creating large-scale relocation and increases in residents' stress levels. One study found that the prevalence of serious mental illness nearly doubled after Hurricane Katrina in a sample of residents living in FEMA-defined disaster areas (Kessler et al., 2007). The largest increase in mental health symptoms was found in the moderately affected parishes. Mental illness is a risk factor associated with tobacco use, and many studies have substantiated that substance abuse, including tobacco use, increases after natural disasters. Data from the 2004 and 2006 Behavioral Risk Factor Surveillance System were analyzed in order to determine whether changes in mental health were associated with tobacco use. The analysis controlled for demographic factors including gender, income, ethnicity, and parish of residence. Increases in mental illness may have contributed to the stagnation of smoking prevalence in Louisiana from 2004 to 2006, despite a statewide tobacco prevention and control program.

Session Title: Taking Control of Your Evaluation Career
Skill-Building Workshop 249 to be held in Panzacola Section G1 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
George F Grob, Center for Public Program Evaluation, georgefgrob@cs.com
Ann Maxwell, United States Department of Health and Human Services, ann.maxwell@oig.hhs.gov
Abstract: This session will engage its participants in a series of exercises designed to help them understand the many possibilities of a rewarding lifelong career in the field of evaluation; to identify both the broad and specific skills, knowledge, and experience conducive to achieving it; to evaluate where they currently stand; and to set goals for their own personal future career development. The exercises aim to open each participating evaluator's vision to his or her roles and potential as an analyst/methodologist, substantive program expert, and manager/administrator/advisor. The session will explain how these skills naturally develop over a lifetime of evaluation practice, and how an evaluator can plan for and enjoy an expanding role of professionalism, influence, and stature over his or her career.

Session Title: Program Theory and Theory-driven Evaluation TIG Business Meeting and Think Tank: Using Context, Evidence, and Program Theory to Build External Validity
Business Meeting Session 250 to be held in Panzacola Section G2 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
TIG Leader(s):
John Gargani, Gargani + Company, john@gcoinc.com
Katrina Bledsoe, Walter R McDonald and Associates Inc, kbledsoe@wrma.com
Presenter(s):
Uda Walker, Gargani + Company, uda@gcoinc.com
Discussant(s):
Michael Scriven, Claremont Graduate University, mjscriv@gmail.com
Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Huey Chen, Centers for Disease Control and Prevention, hbc2@cdc.gov
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
John Gargani, Gargani + Company, john@gcoinc.com
Abstract: External validity - how we justify using evidence of a program's past performance to predict its future performance - is a hot topic in evaluation. Context, evidence, and theory provide three means of supporting the external validity of an evaluation. To what extent, if at all, can these sources of evidence be purposefully integrated to strengthen external validity? How are they currently used for this purpose in practice? What are the benefits, if any, of doing so within an experimental research paradigm? Or within a theory-based evaluation paradigm? Is external validity even a useful concept for evaluators and policymakers? In this session, prominent evaluators will address these questions and propose practical answers in an interactive format.

Session Title: Using Rigorous Qualitative Data Analysis to Enhance Random Assignment Designs: Lessons From an Evaluation of a Teacher Professional Development Program
Multipaper Session 251 to be held in Panzacola Section H1 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Savitha Moorthy, Berkeley Policy Associates, savitha@bpacal.com
Abstract: Qualitative data analysis can provide an invaluable complement to experimental evaluations of educational programs. This presentation responds to the call for the public disclosure of methods in naturalistic inquiry and addresses the need for greater transparency and clarity, especially in the qualitative analysis process. We draw from our experiences as a research team working on a longitudinal random assignment study of teacher professional development to summarize the iterative steps of our analysis approach and to outline the strategies we employ to articulate the relationship between qualitative data collection, analysis, and research findings. Specific strategies to be discussed include team-based exploratory coding and the use of computer-assisted qualitative data analysis.
Policy Relevance of Increased Rigor in Qualitative Methods: The Case of Experimental Evaluations in Education
Hannah Betesh, Berkeley Policy Associates, hannah@bpacal.com
In an era of educational research that emphasizes empirical knowledge and evidence, it has become important to incorporate qualitative analyses into studies of what works in education. Within the policy framework of No Child Left Behind, this paper will identify important facets of rigorous qualitative analysis and explore the potential of such work, specifically naturalistic inquiry, for addressing policy-relevant issues that cannot be fully explored through quantitative work alone, such as program and policy implementation; characterizing change in organizational cultures; and understanding values, preferences, and perspectives, particularly of vulnerable or marginalized populations.
Team-Based Qualitative Analysis: Lessons From School-Based Data Collection
Jacklyn Altuna, Berkeley Policy Associates, jacklyn@bpacal.com
Through the lens of a multi-year evaluation of a teacher professional development program geared toward improving the education of English Learners, we examine our team-based analysis approach as one element of our qualitative research process that contributes to increased transparency in qualitative research. As one component of a larger study primarily focused on quantitative impacts, the systematic implementation of rigor in qualitative analyses has become increasingly crucial. Through team-based analysis of field notes from direct observations of after-school lesson design meetings, transcribed interviews with the meeting facilitators, written records provided by program coaches, and transcribed interviews with program coaches, we provide a detailed account of how our process unfolded: co-constructing codebooks, coming to consensus about the salience of various codes, restructuring coding hierarchies, and establishing interrater reliability among multiple coders.
Computer-Assisted Qualitative Data Analysis: Applications to Educational Research
Savitha Moorthy, Berkeley Policy Associates, savitha@bpacal.com
For some, the use of qualitative data analysis software automatically signals increased rigor in the analytic process, while for others it raises concerns that the dominance of queries and auto-retrieve functions mechanizes analysis and discourages researchers from pursuing other ways of scrutinizing the data. In our team-based coding approach, we examine the use of software as an invaluable and time-saving analysis tool that, in combination with careful inquiry, can contribute to sustaining increased transparency in the qualitative analysis process. We provide an in-depth look into our coding processes using data from an implementation study of a teacher professional development program. Using field notes from direct observations, transcribed interviews with program coaches, and written records provided by the program, we explore the various applications of NVivo 8 to our data sets and how it can be utilized to improve rigor in qualitative research.

Session Title: Quantitative Methods TIG Business Meeting and Discussion: Building the Evidence Base for Evaluation Design
Business Meeting Session 252 to be held in Panzacola Section H2 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
TIG Leader(s):
Patrick McKnight, George Mason University, pmcknigh@gmu.edu
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Karen Larwin, University of Akron, drklarwin@yahoo.com
Fred L Newman, Florida International University, newmanf@fiu.edu
Presenter(s):
Leonard Bickman, Vanderbilt University, leonard.bickman@vanderbilt.edu
Discussant(s):
William Shadish, University of California Merced, wshadish@ucmerced.edu
Abstract: Len Bickman will present on how findings on the implementation of different designs give us insight into selecting designs in different contexts. Will Shadish will respond to Bickman's presentation and also present the findings of a new study in which participants were randomly assigned to either a randomized field trial or a regression discontinuity study. The impacts of the different approaches and the implications from this research will be discussed.

Session Title: Painting a Picture Worth 1000 Words: Using Microsoft Excel or R Statistical Software to Create Powerful Graphs and Charts
Skill-Building Workshop 253 to be held in Panzacola Section H3 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Evaluation Use TIG and the Quantitative Methods: Theory and Design TIG
Presenter(s):
Catherine Callow-Heusser, EndVision Research and Evaluation LLC, cheusser@endvision.net
Abstract: Powerful graphs and charts can be created quickly and easily in Microsoft Excel and in the R software environment for statistical computing and graphics. In this session, participants will receive numerous Excel templates and snippets of R code that will help them create a variety of bar charts, boxplots, histograms, and other pictures that can help tell stories "at a glance." Graph types, colors, labels, scales, formatting, and other topics important to easily interpreted graphs will be discussed, and participants with their own laptops will have an opportunity to change these properties of graphs and charts. The presenter's background in Computer Science, Instructional Technology, and Evaluation has helped her "push the limits" of Excel and go beyond the typically used features of R to create powerful graphs and charts to display results of complex data to educators and others who might have otherwise closed their eyes to avoid the numbers.
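By way of illustration only (the workshop's actual templates and snippets are not reproduced here), the following is a minimal base R sketch of the kind of snippet the abstract describes, using hypothetical assessment data to produce a labeled bar chart and a boxplot with custom colors, scales, and axis labels.

# Illustrative base R sketch (hypothetical data; not the presenter's materials)
set.seed(42)
scores <- data.frame(
  school = rep(c("A", "B", "C"), each = 30),   # three hypothetical schools
  score  = c(rnorm(30, 72, 8), rnorm(30, 78, 6), rnorm(30, 69, 10))
)

# Bar chart of mean scores, readable "at a glance"
means <- tapply(scores$score, scores$school, mean)
barplot(means,
        col  = c("steelblue", "darkorange", "seagreen"),
        ylim = c(0, 100),
        xlab = "School", ylab = "Mean score",
        main = "Mean Assessment Score by School")

# Boxplot showing the full distribution behind each mean
boxplot(score ~ school, data = scores,
        col  = "lightgray",
        xlab = "School", ylab = "Score",
        main = "Score Distributions by School")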

Session Title: Nonprofits and Collaborative Evaluation
Multipaper Session 254 to be held in Panzacola Section H4 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Lorna Escoffery, Escoffery Consulting Collaborative Inc, lorna@escofferyconsulting.com
Transforming an Early Intervention Pilot Project to a Fully Sustained Program by Embracing Change and the Inclusion of All Partners
Presenter(s):
Lorna Escoffery, Escoffery Consulting Collaborative Inc, lorna@escofferyconsulting.com
Marta Pizarro, Health Foundation of South Florida, mpizar4682@aol.com
Abstract: Developing a new intervention program to provide services is challenging, and sustaining it can be daunting. However, by forging a dynamic partnership among foundation and program staff and evaluation consultants, the Blind Babies Program grew from a pilot to a fully sustained program serving over 90 blind or visually impaired children. The activities in the process changed as the contextual factors changed; these factors included funding and funders, identified disabilities, assessment and monitoring instruments, internal staff, parents/caregivers, and requirements from external agencies (county, state, and federal). The results of the evaluation indicate that it is possible to develop, implement, refine, and sustain a new program when the main stakeholders have the pertinent knowledge; when decision makers know how to use data for the benefit of clients as well as program improvement; and when funders, program staff, and the evaluators respect each other as well as the clients being served.
Learning Inside and Out: The Role of Context and Co-creation in An Urban Evaluation Setting
Presenter(s):
Kate Tilney, Hope Community Inc, kbport@comcast.net
Abstract: This paper describes how a unique context affects the implementation of useful and appropriate evaluation processes, and how such a context dictates the nature of an evaluator's efforts and her role as a consultant working as a participant-observer twenty-five hours a week inside a large, non-profit Community Development Corporation in Minneapolis, MN. The context is examined in three parts: the values and culture of the organization itself; the nature of the 'internal evaluator' role and how it came to be defined co-creatively; and the slow transformation of the organization's evaluative culture. I also identify the challenges of the context, the keys to successes achieved so far, and the possibilities for future mutually beneficial collaboration. Finally, I consider how some of the most relevant theories around collaborative inquiry--utilization-focused, participatory, and empowerment evaluation theory specifically--have influenced the approach to the work and indeed, the entire context.
Context-Focused Evaluation Methods for Use With Ethnic-Specific Collaborations: Challenges and Successes in Establishing a Collaborative Evaluation for A Learning Community With a Strong Outreach Component
Presenter(s):
Lyn Paleo, First 5 Contra Costa, lpaleo@firstfivecc.org
Denece Dodson, First 5 Contra Costa, ddodson@firstfivecc.org
Lisa Morrell, First 5 Contra Costa, lmorrell@firstfivecc.org
Suzanne Gothard, University of California Berkeley, suzanne.gothard@gmail.com
Abstract: Many new mothers welcome the services of a Home Visitor to help them with the early challenges of motherhood; however, some refuse services that could benefit them and their infants, even when the services are provided by a professional from their own community. What leads a distrustful person to accept services, and why? The Hand to Hand Collaborative is testing combinations of outreach strategies and incentives to find effective ways of building trust in these services within their communities. In addition, the collaborative meets monthly as a Learning Community to engage in a continual learning and improvement process. The evaluation is based on a multi-strand impact model developed with the collaborative, and combines traditional assessments, non-print forms using drawings and labels, and questions for dialog at Learning Community meetings. This paper will discuss the challenges and successes of evaluation for a Learning Community collaborative composed of multiple ethnic-specific agencies.

Session Title: Cases for the Program Evaluation Standards, 3rd Edition: Reflective Practice and Problem-solving
Panel Session 255 to be held in Sebastian Section I1 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the AEA Conference Committee
Chair(s):
Hazel L Symonette, University of Wisconsin Madison, hsymonette@odos.wisc.edu
Discussant(s):
Hazel L Symonette, University of Wisconsin Madison, hsymonette@odos.wisc.edu
Karen Kirkhart, Syracuse University, kirkhart@syr.edu
Abstract: The new 3rd edition of the Program Evaluation Standards has been completed. This session presents the new standards and their applications in five integrating cases, one for each of the five attributes of high-quality evaluations: utility, feasibility, propriety, accuracy, and evaluation accountability. Each case scenario provides background information followed by specific applications of each standard in the context of other standards. Each application has five foci: stakeholders' roles, evaluation dilemmas experienced by stakeholders, dilemmas related to the specific standard, strategies to implement the standard, and how other standards and the complications of addressing multiple standards come into play. In addition, an accompanying book of cases applying the standards in a wide variety of contexts is being developed. The session presents the development of the case book with invitations to become involved in developing or reviewing case applications and dimensions of quality in the cases.
Utility Illustrated through a Health Promotion Education Case
Lyn Shulha, Queen's University, lyn.shulha@queensu.ca
The 3rd Edition Utility Standards are U1 Evaluator Credibility, U2 Attention to Stakeholders, U3 Negotiated Purposes, U4 Explicit Values, U5 Relevant Information, U6 Meaningful Processes and Products, U7 Timely and Appropriate Communication and Reporting, and U8 Concern for Consequences. Each standard is illustrated through application to an evaluation of a multi-level program to "support the health and well being of community residents in ways that contribute to safe, active, responsive and ecologically responsible community living." Specific applications of the individual standards address the evaluations of environmental day camps for elementary school children and a leadership development project for high school students. Dilemmas include stakeholder identification issues, board member resignations, communication challenges, and differences in values, including those related to the effectiveness of the evaluation. All the standards are brought to bear on the issue of how to align stakeholders' wishes and needs with evaluators' knowledge, skill, and expertise in evaluation.
Feasibility and Evaluation of School District Resource Use
Flora Caruthers, Florida Legislature, caruthers.flora@oppaga.fl.gov
The four Feasibility Standards are F1 Project Management, F2 Practical Procedures, F3 Contextual Viability, and F4 Resource Use. The case, an evaluation of a school district's resource use, illustrates the application of the standards through reflective practice and problem solving. For example, the application of F1 Project Management is illustrated by a team meeting in which members express concern about their ability to complete the project without a plan that contains sufficient detail for all members of the team to understand when the required work products are due, given the scope of the project and the professional experience of various members of the team. The team's approach to addressing F3 Contextual Viability is illustrated by their efforts to accommodate more than just the informal grapevine operating in and acceptable to in-group members of the school district. Necessary trade-offs and possible conflicts in trying to optimize on all standards simultaneously are addressed.
Propriety in an Evaluation of Civic Engagement and Social Change
Rodney Hopson, Duquesne University, hopson@duq.edu
Propriety refers to what is proper, fair, legal, right, acceptable, and just in evaluation practice. Embedded in the attribute are ethical, legal, and professional issues that relate to rights and responsibilities of evaluators and participants, systems of rules and laws, and roles and duties inherent in the practice of evaluation. The revised and updated Propriety Standards are P1 Responsive and Inclusive Orientation, P2 Formal Agreements, P3 Human Rights and Respect, P4 Clarity and Fairness, P5 Transparency and Disclosure, P6 Conflicts of Interests, and P7 Fiscal Responsibility. To illustrate the reflective practice and use of Propriety, the seven revised Propriety Standards are applied to a case regarding civic engagement and social change in a local community. Individual applications are used to discuss how each standard is implemented in the application. Propriety standards are used to better understand evaluation practice and the dilemmas that require application of multiple standards simultaneously.
Accuracy in an Evaluation of Teacher Professional Development
Don Yarbrough, University of Iowa, d-yarbrough@uiowa.edu
The eight updated Accuracy Standards are A1 Justified Conclusions and Decisions, A2 Valid Information, A3 Reliable Information, A4 Explicit Program and Context Portrayals, A5 Information Management, A6 Sound Designs and Analyses, A7 Explicit Evaluation Reasoning, and A8 Communication and Reporting. Each of these eight standards is applied to the same case, an evaluation of a multi-level teacher professional development program with multiple components, including a focus on teachers' skills and knowledge related to English Language Learners. The individual applications illustrate how to apply each standard and the typical trade-offs that must be considered as different standards (especially with regard to Accuracy but also referencing those from Utility, Feasibility, and Propriety) are applied. The role of different stakeholder characteristics and their potential influences on accuracy provides part of the background requiring reflection and problem solving. Issues related to the way accuracy is intertwined with stakeholders' expectations and backgrounds are discussed.
Evaluation Accountability and Metaevaluation
Don Yarbrough, University of Iowa, d-yarbrough@uiowa.edu
One of the major changes in the Program Evaluation Standards, 3rd edition, is the emphasis on "evaluation accountability through metaevaluation" as a fifth attribute of quality evaluation projects. In the 2nd edition, metaevaluation was addressed in an accuracy standard (A12). However, evaluation quality and accountability require attention to all Utility, Feasibility, Propriety, and Accuracy standards through guided reflection informed by the purposes and contingencies of the metaevaluation. In the 3rd edition, the three Evaluation Accountability standards inform this reflection. They focus on documenting the evaluation in sufficient detail to allow accountability judgments (E1 Evaluation Documentation); supporting internal formative metaevaluation for all evaluation projects (E2 Internal Metaevaluation); and encouraging formal summative metaevaluation when it is feasible and warranted (E3 External Metaevaluation). The standards are illustrated with applications to the four case studies presented in the Utility, Feasibility, Propriety, and Accuracy chapters.

Session Title: Participation and Fidelity
Multipaper Session 256 to be held in Sebastian Section I2 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Gregory Teague, University of South Florida, teague@fmhi.usf.edu
Issues and Opportunities in Consortium Evaluations: Cross-Sites, Levels, and Groups
Presenter(s):
Jeffry White, Ashland University, jwsrc1997@aol.com
James Altschuld, The Ohio State University, altschuld.1@osu.edu
Abstract: More and more evaluators are providing assessments for consortium types of projects. In such situations, numerous challenges arise, and by the same token there are unique opportunities to conduct research. The presenters are providing evaluation services to a two-site project, with each site having three levels of institutions involved (universities, 2-year colleges, and high schools). One obvious problem is that IRB clearance is more difficult in this circumstance. Another is that project activities vary across the two sites, necessitating minor variations in the items used in surveys. An example of an opportunity is the creation of a database that will allow tracking of students from high school through college or other postsecondary activities. The project will be briefly described, followed by an analysis of the issues and opportunities inherent in it and how they have been handled by the evaluators.
Process Elements in Program Fidelity Measurement: Inferential Gains in a Multi-site Experimental Study
Presenter(s):
Gregory Teague, University of South Florida, teague@fmhi.usf.edu
Abstract: Critics of recent practice in program fidelity measurement have faulted emphasis on structural features; although process elements can be more difficult to measure, they may be more essential to a program's intended outcomes. A SAMHSA/CMHS multi-site study illustrates the value of emphasizing program processes. The study investigated outcomes of consumer-operated services offered adjunctively to traditional mental health services for persons with serious mental illnesses. It used an experimental design and a common assessment protocol, following over 1,600 participants for 12 months in eight sites offering different program models. Both consumer-operated and traditional service programs were assessed using a fidelity measure informed by consumer-defined concepts of recovery-oriented service characteristics. Inclusion of fidelity data in analysis supplemented the primary hypothesized experimental finding of increased well-being by showing that specific recovery-oriented characteristics of the service environments accounted for substantial variation in well-being and other outcomes across program types and conditions.
A Balance Between Standardization and Flexibility: Using a Compendium of Indicators to Measure Child Well-Being in Multi-site Evaluations
Presenter(s):
Isabelle Carboni, World Vision International, isabelle_carboni@wvi.org
Nathan Morrow, World Vision International, nathan_morrow@wvi.org
Abstract: A multi-site evaluation of a global NGO's contribution to child well-being is a complex challenge. Individual project sites, National and Regional Offices, Global Headquarters, as well as donors, all require particular information for advocacy, accountability, and learning. How can one set of indicators meet all these needs? Using a compendium of indicators and a 'roll-up' evaluative process, each entity can collect, analyse, and learn from the information that matters most. Child well-being is a multi-dimensional concept, understood differently in different places. Through a global consultation process, a common organizational understanding of child well-being was developed, covering health, education, relationships/spirituality, and rights. Building on this, a compendium of outcome indicators was developed. The compendium allows project staff to choose indicators relevant to their specific context, based on community perceptions of child well-being, adding in new indicators as relevant, whilst still measuring progress towards child well-being outcomes at the global level.

Session Title: Evaluating the Federal Safe Schools/Healthy Students Initiative: Design and Analysis Approaches to Address Context Challenges
Panel Session 257 to be held in Sebastian Section I3 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Steve Murray, RMC Research Corporation, smurray@rmccorp.com
Discussant(s):
Danyelle Mannix, United States Department of Health and Human Services, danyelle.mannix@samhsa.hhs.gov
Abstract: The Safe Schools/Healthy Students Initiative is a collaboration among three federal agencies to support implementation of comprehensive community-wide plans to create safe and drug-free school environments. The program currently includes 175 grantees that target 4,204 schools and nearly 3.8 million students across the U.S. Consequently, numerous design challenges exist, such as accounting for variation in locally determined programs and data collection, in addition to federal context challenges, such as performance reporting requirements. The panel is composed of federal project officers and members of the national evaluation team, who will discuss the large-scale, multi-site evaluation, specifically (1) providing an overview of the federal program environment, (2) describing local grant context, implementation, and the development of a program theory model to guide the evaluation, (3) illustrating the integration of qualitative data in the evaluation's mixed method design, and (4) discussing the innovative use of meta-analysis to analyze outcome data collected by local grant sites.
The Federal Context of the Safe Schools/Healthy Students Initiative
Michael Wells, United States Department of Education, michael.wells@ed.gov
Danyelle Mannix, United States Department of Health and Human Services, danyelle.mannix@samhsa.hhs.gov
Patrick Weld, United States Department of Health and Human Services, patrick.weld@samhsa.hhs.gov
The SS/HS Initiative offers a unique opportunity to examine large-scale program evaluation in the context of a federal environment that places many requirements and constraints on how the grants are conducted and managed. Federal programs stress the requirements for performance-based outcomes (e.g., Government Performance and Results Act, Program Assessment Rating Tool), valid and reliable data, addressing important problems, ensuring efficiency and fiscal responsibility, reducing burden on federal staff and grantees, and developing and disseminating useful solutions and recommendations. This evaluation involves the coordinated efforts of Federal Project Officers, local educational agencies, technical assistance providers, and national and local evaluators across a diverse set of socioeconomic contexts. It also involves coordination and integration of findings among several contractors.
Safe Schools/Healthy Students Local Grant Context and Program Theory
Andrew Rose, MANILA Consulting Group Inc, arose@manilaconsulting.net
G A Hill, MANILA Consulting Group Inc, ghill@manilaconsulting.net
Jennifer Keyser-Smith, MANILA Consulting Group Inc, jkeyser-smith@manilaconsulting.net
Shauna Harps, MANILA Consulting Group Inc, sharps@manilaconsulting.net
Kathleen Kopiec, MANILA Consulting Group Inc, kkopiec@manilaconsulting.net
Julia Rollison, MANILA Consulting Group Inc, jrollison@manilaconsulting.net
The national evaluation is further complicated as the 146 local grant sites adopt different approaches, activities, and programs to address problems specific to their communities. Local approaches are developed by partnerships, which must minimally be composed of representatives from the local education, mental health, juvenile justice, and law enforcement agencies. Further, local grant sites, although required to address common elements (e.g., reduced violence and alcohol and drug use, improved mental health services and early childhood social/emotional development), are not required to use common process or outcome measures. Consequently, the evaluation entails analyzing and synthesizing data from a variety of sources (project directors, state agencies, schools, students, and required grant partners) collected through surveys, site visits, interviews, and focus groups. This presentation will stress the role of program theory and logic models in guiding the evaluation.
Integration of Qualitative and Quantitative Data in the Safe Schools/Healthy Students Evaluation
Alison J Martin, RMC Research Corporation, amartin@rmccorp.com
Marina L Merrill, RMC Research Corporation, mmerrill@rmccorp.com
Ryan D'Ambrosio, RMC Research Corporation, rdambros@rmccorp.com
Nicole Taylor, RMC Research Corporation, ntaylor@rmccorp.com
Lauren A Maxim, RMC Research Corporation, lmaxim@rmccorp.com
Roy M Gabriel, RMC Research Corporation, rgabriel@rmccorp.com
To respond to the federal program context and evaluation purpose, the National Evaluation Team developed a mixed methods approach, a concurrent nested design (Creswell et al., 2003), in which quantitative methods serve as the predominant method, enhanced by qualitative methods that describe partnership process and context. Quantitative data are collected through multiple surveys seeking information on grant activities, perceptions of partner contributions, and partnership functioning, and sites' annual performance reports contain required Government Performance and Results Act indicators that measure grant outcomes. Qualitative data are collected concurrently through annual group interviews on topics such as grant planning, implementation barriers, and partnership history and organization. This presentation will discuss data integration, which occurs at the analysis and interpretation stages. Qualitative data have been used to enhance quantitative data through thematic analysis and the transformation of qualitative data into numerical codes. Challenges inherent in qualitative-quantitative data integration will also be addressed.
Innovative Use of Meta-Analysis to Evaluate Large-Scale Multi-Site Federal Initiatives
Bruce Ellis, Battelle Memorial Institute, ellis@battelle.org
James Derzon, Battelle Memorial Institute, derzonj@battelle.org
Ping Yu, Battelle Memorial Institute, yup@battelle.org
Sharon Xiong, Battelle Memorial Institute, xiongx@battelle.org
A common challenge in conducting large-scale, multi-site evaluation studies of school safety and substance abuse prevention efforts has been the inability to measure and analyze changes in implementation and outcomes over time due to variations in the types of data and the timeframes of data collection across different sites. The proposed paper discusses an innovative approach to addressing this challenge through the use of meta-analysis. Specifically, the proposed paper discusses how outcome data from different sites are collected; how outcome data using different instruments are processed and prepared for meta-analysis; and how the processed data are analyzed and used to assess changes in outcomes relating to school safety and student health over time. The presentation will be beneficial to evaluators in operationalizing data collection efforts, developing meta-analytic databases, and applying specific meta-analytic techniques to analyze diverse data.

Session Title: From Implementation to Sustainability: A Report on How the American Evaluation Association/Duquesne University Graduate Education Diversity Internship Program Remained Responsive to the Evaluation Profession's Needs and Stakeholder Demands During the First Five Years
Panel Session 258 to be held in Sebastian Section I4 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Sarita Davis, Georgia State University, skdavis04@yahoo.com
Discussant(s):
Sarita Davis, Georgia State University, skdavis04@yahoo.com
Abstract: The American Evaluation Association/Duquesne University Graduate Education Diversity Internship Program (AEA/DU GEDIP) was launched in the fall of 2004. Over the first five years of its implementation, the program had to respond to increased demand from various stakeholders while remaining true to its mission and goals. The program grew substantially, seeing an increase in the number of highly qualified applicants (from 13 in the first year to 91 in the fifth year), an increase in the number and diversity of agencies willing to host and/or support interns, and an increase in the diversity of academic focus/discipline of the applicants and of the institutions the interns were affiliated with. Throughout the five years, the program director and coordinators conducted ongoing formative evaluation of the program. These efforts enabled reflexive and recursive analysis of the problems associated with program development and modification. The panel will present how the program, during its implementation, managed to respond to the increasing demands from applicants and other stakeholders while remaining faithful to the mission of the AEA and the original goals of the program, and will describe the road to sustainability. Program implementation will be presented from the perspective of the three program coordinators who were involved during this five-year period.
The First Two Years: Laying the Foundation and Developing Evaluative Thinking Among Graduate Students of Color
Prisca Collins, Governors State University, p-collins@govst.edu
This presentation will discuss the processes involved during the initial implementation of the program, the challenges encountered, and the lessons learned during the first two years. This will include a description of the development of the internship curriculum, the calendar, tools for continuous evaluation, and the establishment of a resource base for advising, mentoring, and socialization of the interns. The discussion will highlight the processes behind the start-up activities of the program, including but not limited to the development of evaluative thinking among the interns, raising funds, and attending to the unique needs of the first and second cohorts. Challenges related to engaging various stakeholders and building a viable learning community between the small number of interns and the evaluation community will also be discussed.
A View From a Former Intern: Examining Curricular Adaptations During the Mid-Years of the Program
Tanya Brown, University of California Los Angeles, jaderunner98@gmail.com
This presentation will discuss the implementation processes during the third year. The third year was notable for the inception of a second track of training specific to researching logic model implementation in STEM evaluation projects. The presenter will highlight how the coordinator was sensitive to the students' development regarding the unifying goals of the program as well as addressing the unique challenges of working on the traditional AEA/DU GEDIP track versus the STEM track. She will outline how, during her tenure, she managed the diverging course of development of the cohort members and the shifting evolution of the program. The presentation will also provide the unique perspective of a former intern who became actively involved in the implementation of the program as program coordinator, and how she was able to address the challenges of that period by drawing upon her experiences as an intern and other contextual circumstances.
The Road to Sustainability: The Fourth and Fifth Years of the American Evaluation Association (AEA)/Duquesne University (DU) Graduate Education Diversity Internship Program
Shane Chaplin, Duquesne University, shanechaplin@gmail.com
This presentation will discuss how the program continued to evolve in the last two years to accommodate the increasing demands from applicants and other stakeholders. The presentation will highlight how the level and type of funding helped shape the focus, structure, and mode of delivery of the internship activities. He will also discuss the reorganization process as the STEM funding ended and the development of virtual communication platforms to engage all the cohorts in dialogue. Finally, he will further delineate the leadership development theories informing the program activities and the shift in focus toward establishing the sustainability of the program.

Session Title: Establishing Effective Relationships: Presentation of a New Checklist to Help Evaluators Understand and Work With Diverse Clients
Skill-Building Workshop 259 to be held in Sebastian Section J on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Gary Miron, Western Michigan University, gary.miron@wmich.edu
Nakia James, Western Michigan University, d_njames05@yahoo.com
Tammy DeRoo, Western Michigan University, tammy.l.deroo@wmich.edu
Abstract: This skill-building session will introduce a new checklist that is designed to help evaluators establish relationships and work effectively with their clients. Broadly speaking, the checklist covers a list of issues and common obstacles that evaluators face when working with diverse clients. Checkpoints will highlight strategies and practices that will help ensure effective relationships are built and maintained. The checklist draws upon three key sources of information: (i) relevant literature, (ii) interviews with experienced evaluators and program officers who oversee evaluation contracts, and (iii) the national and international experience of the presenters. While the checklist is intended to be concise and provide only prompts for evaluators, the presentation and paper will allow a more in-depth description of the do's and don'ts when it comes to working with evaluation clients.

Session Title: Rumbles From the North: How Canadian Evaluators Respond (or not) to the Context for Evaluation
Panel Session 260 to be held in Sebastian Section K on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Presidential Strand
Chair(s):
François Dumaine, Canadian Evaluation Society, dumaine@pra.ca
Abstract: Fundamental and promising shifts are occurring in Canada's evaluation context, and yet they leave a number of enduring issues unaddressed. Significant steps have been taken to entrench the function of evaluation as a meaningful contributor to sound public policy. On the government side, a new federal policy on evaluation is expanding the reach of evaluation and engaging the highest levels of the federal public service. On the community side, the launch of the Canadian Evaluation Society's designation project will provide a clearer path to both evaluation practitioners and users. Our learning institutions are braving uncharted trails by establishing a nation-wide consortium of universities dedicated to improving access to formal evaluation courses. Once the dust has settled, the community will still be faced with serious questions, including those relating to our ability to conduct long-term outcomes-driven evaluations or to tailor our approach to communities such as the Aboriginal peoples.
What Is the Canadian Evaluation Society Up to? The Vision Behind the Designation Project
François Dumaine, Canadian Evaluation Society, dumaine@pra.ca
François Dumaine will draw upon his experience as the current President of the Canadian Evaluation Society to further expand on the vision behind the new designation project.
An Inside View: How the New Canadian Federal Policy on Evaluation Will Affect the Discipline
Paul Wheatley, Department of Justice Canada, paul.wheatley@justice.gc.ca
Paul Wheatley is Director of Evaluation with the Department of Justice Canada. He brings the perspective of federal departments, which are tasked with the implementation of the new federal policy on evaluation.
Feeding the Supply Side: The Consortium of Universities for Evaluation Education (CUEE) in Canada
J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
Brad Cousins is Professor of Educational Administration at the Faculty of Education, University of Ottawa. As an active member of the Consortium of Universities for Evaluation Education (CUEE), he will discuss the ongoing development of CUEE.

Session Title: Evaluating Health Programs in Developing Countries
Panel Session 261 to be held in Sebastian Section L1 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Monika Huppi, World Bank, mhuppi@worldbank.org
Abstract: This session is designed to present the results of evaluations of support for health programs in developing countries. It includes findings from a broad World Bank Group evaluation, from an evaluation of health sector-wide multi-donor programs (SWAps), and from an evaluation of the Global Fund to Fight AIDS, Tuberculosis, and Malaria. The presentations will discuss both the substantive findings and the methodological challenges to conducting the evaluations.
Evaluating a Decade of World Bank Group Health Sector Support
Martha Ainsworth, World Bank, mainsworth@worldbank.org
The mandate of the World Bank Group is to reduce poverty and promote economic growth. Poor health and malnutrition contribute to low productivity of the poor; improving health, nutrition, and population (HNP) outcomes is thus seen as a major way of reducing poverty. However, poverty is also a prime cause of poor health, malnutrition, and high fertility. The Independent Evaluation Group of the World Bank evaluated a decade of World Bank Group support to help improve health outcomes in developing countries. It looked at what the objectives, effectiveness, and main outcomes of this assistance were and what accounted for them. This session will discuss how the evaluation team went about answering these questions, what methodological challenges it encountered, and what the major findings were.
Evaluating Health-Sector Sector-Wide Multi-donor Programs (SWAps)
Denise Vaillancourt, World Bank, dvaillancourt@worldbank.org
A sector-wide approach (SWAp) is an approach to support a locally owned program for a coherent sector in a comprehensive and coordinated manner, moving toward the use of country systems. SWAps represent a paradigm shift in the focus, relationships, and behavior of governments and their development partners. This presentation is based on (a) fieldwork undertaken in five countries where the World Bank, in coordination with multiple other development partners, has supported health SWAps (Bangladesh, Ghana, the Kyrgyz Republic, Malawi, and Nepal); (b) internal completion reports on 11 completed health SWAp operations; and (c) a search of the SWAp literature. It stems from an Independent Evaluation Group study, and addresses three evaluative questions: Have the anticipated benefits of the SWAps been realized? Have health SWAps had any impact on health system performance and outcomes? How have the World Bank's role and performance changed under SWAps?
Five-year Evaluation of the Global Fund to Fight AIDS, Tuberculosis, and Malaria
Edward Addai, Global Fund to Fight AIDS, Tuberculosis, and Malaria, edward.addai@theglobalfund.org
The Global Fund was created in 2002 as a financing institution operating on the principles of country ownership, absence of a country presence, partnerships, lean and fast processes, and performance-based funding. Its Board decided in 2006 to evaluate the Global Fund's performance after its first five years, corresponding to a full grant cycle. The evaluation examined the contribution of the Global Fund and other international partners to scaling up against three diseases, the partnership model, and organizational effectiveness and efficiency. The design used primary data from district comprehensive assessments, secondary data from household surveys and country information systems, analysis of grant performance, review of documentation and the broader literature, and focus group interviews with Board members, Secretariat Staff, implementers, and partners at global and country levels. The presentation will highlight the methodology and findings, to be released in May 2009.

Session Title: Ideas for Partnering With International Evaluators: A Follow-up Brainstorming Session Sponsored by the American Evaluation Association's International Committee and the International Organization for Cooperation in Evaluation (IOCE)
Think Tank Session 262 to be held in Sebastian Section L2 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Jim Rugh, Independent Consultant, jimrugh@mindspring.com
Discussant(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Gwen Willems, Willems and Associates, wille002@umn.edu
Thomaz Chianca, COMEA Evaluation Ltd, thomaz.chianca@gmail.com
Abstract: Part of the AEA vision states that "We support developing relationships and collaborations with evaluation communities around the globe to understand international evaluation issues." What are ways for us to promote such relationships and learnings? How should we choose which groups to partner with and what forms could such partnerships take? The AEA International Committee and the International and Cross-Cultural TIG held two brainstorming sessions on this topic in Denver in 2008. This will be a follow-up session to share what has been done over the past year, and to solicit additional ideas for fostering partnerships with associations of professional evaluators around the world. In break-out groups we will brainstorm criteria for selecting partners as well as ideas for activities that would be mutually beneficial to AEA and partner groups. We will also consider which sub-groups within AEA, such as local affiliates, might be interested in forming partnerships with international groups.

Session Title: Organizational Learning and Evaluation Capacity Building TIG Business Meeting and Presentations: Appreciative Inquiry, Organizational Learning, and Evaluative Excitement
Business Meeting and Multipaper Session 263 to be held in Sebastian Section L3 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
TIG Leader(s):
Rebecca Gajda, University of Massachusetts, rebecca.gajda@educ.umass.edu
Jean A King, University of Minnesota, kingx004@umn.edu
Evangeline Danseco, Children's Hospital of Eastern Ontario, edanseco@cheo.on.ca
Stephen J Ruffini, WestEd, sruffin@wested.org
Chair(s):
Gail V Barrington,  Barrington Research Group Inc, gbarrington@barringtonresearchgrp.com
Discussant(s):
Mary Gutmann,  EnCompass LLC, mgutmann@encompassworld.com
A Synergistic Process for Building Readiness and Excitement for Evaluation Among Program Staff at Youth Farm and Market Project: Combining Appreciative Inquiry and Outcomes Evaluation
Presenter(s):
Nancy Leland, University of Minnesota, nancylee@umn.edu
Andrea Jasken Baker, AJB Consulting, andrea@ajb-consulting.com
Gunnar Liden, Youth Farm and Market Project, gunnar@youthfarm.net
Barb McMorris, University of Minnesota, mcmo0023@umn.edu
Nancy Pellowski, University of Minnesota, pell0097@umn.edu
Abstract: An important challenge in program evaluation is garnering buy-in and enthusiasm among program staff to participate in evaluation. We faced this challenge in our outcomes evaluation work with the Youth Farm and Market Project (YFMP). We discovered that another process, appreciative inquiry, occurring concurrently within the organization, worked synergistically to produce readiness and excitement for evaluation among staff. The appreciative inquiry's goal was to determine YFMP's primary focus for the future and build a framework to ensure that this focus remains clearly evident throughout all aspects of the organization's work (from programming, to determining appropriate evaluation outcomes, to marketing and fundraising). As a result of these two processes, YFMP staff are planning their next outcomes evaluation with more clarity and enthusiasm. YFMP works to facilitate healthy youth development in children ages 8-18 through hands-on urban farming in the Twin Cities, Minnesota.
Using Appreciative Inquiry to Become a Learning Organization
Presenter(s):
Anna Dor, Claremont Graduate University, annador@hotmail.com
Abstract: Appreciative Inquiry (AI) is an approach that looks at what is working right in organizations. It engages people to look at the best of their past experiences in order to imagine the future they want - and to find the capacity to move into that future. In AI, language that describes deficiencies and problems is replaced by positive questions structured around the theme of what works best in the organization. This paper examines the four phases of AI and presents a case example in which AI was utilized in a Department of Homeland Security component. The case follows the AI process and provides specific practical examples and lessons learned from this approach. The paper concludes by connecting AI to evaluation and discussing how it can be used in the field of evaluation.

Session Title: Restoring Context to Evaluation Practice
Panel Session 264 to be held in Sebastian Section L4 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the AEA Conference Committee
Chair(s):
Zoe Barley, Mid-continent Research for Education and Learning, zbarley@mcrel.org
Discussant(s):
Ernest House, University of Colorado at Boulder, ernie.house@colorado.edu
Abstract: Evaluators have traditionally paid close attention to the many contexts relevant to conducting program/policy evaluations. The need to interpret the results of an evaluation in light of contextual barriers and supports has been well understood. Over the last several years, in part as a result of increased attention to randomized designs, the focus on context has been de-emphasized under the assumption that randomization renders contexts equal across groups. Without information about the influences of the policies, persons, agencies, and physical settings that surround the evaluand, findings are impoverished and may not be responsive to the needs of stakeholders; nor are they likely to be an appropriate or adequate basis for decisions or actions. Three panel members will discuss issues surrounding the loss of context for evaluators, for practitioners, and for the broader public, grounding their discussion in experience. Audience discussion of loss and restoration will follow.
Culling the Wheat From the Chaff: No Context, No Meaning
Mary Piontek, Columbia University, mp2800@mail.cumc.columbia.edu
A professional evaluator and an end-user of educational research and evaluation findings for almost 20 years, Dr. Piontek will discuss how users of educational research and evaluation findings must consider the institutional and organizational context in which their policies, programs, and services are implemented, and weigh that context against the "seemingly neutral" context stance presented in much of the recent educational research literature. Without grounding in the nuanced nature of the evaluand, practitioners must "cull" through findings from research and evaluation reports that may ignore not only sources of data, crucial stakeholders, and idiosyncratic implementation, but also the fundamental assumptions of quality and merit embedded in the program contexts. Without careful attention to context, a practitioner might put undue emphasis on questions of efficiency or misinterpret impact when her/his institution adopts (or adapts) programs designed by (and evaluated in) other institutions.
Measuring the Trees but Missing the Forest: When Evaluators Fail to Include Context
Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
An evaluator with experience in randomized controlled trials, Dr. Beesley will discuss the risks to evaluators of overlooking context. When evaluators focus on a narrow set of measurable outcomes, they risk missing important program outcomes, the very outcomes that are of most interest to current and potential program participants. Ignoring context may lead evaluators to ask the wrong questions or to exclude crucial stakeholders. In reducing a program to what is most easily quantified, and by not observing or recording contextual features, evaluators may conclude that a program is less successful, or more successful, than it really is. This can lead to evaluators making claims about the worth of a program that turn out to be incorrect when the program is enacted in a different context, as well as failing to describe the original context with enough care that others can judge whether their own context is comparable to the one studied.
Will I Recognize the Forest When the Trees Are Cut Down? Contextualizing For Community
Sheila Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
At the heart of democracy lies an informed and involved citizenry; evaluation is one means of informing the public and facilitating decision-making processes. However, how evaluative information is collected and reported has implications for democratic engagement. Some may argue that the press toward increased rigor provides a common language and increased understanding of causal mechanisms (ostensibly a consequence of design). Yet it is not hard to imagine a situation devoid of certain kinds of evidence and of "why" and "under what circumstances" questions. Evaluative inquiry practiced as context-free isolates stakeholders; their interest and involvement wane as methodologically sophisticated inquiries dominate and findings lack the data needed to extrapolate to other programs or policies. Evaluators ought to remain cognizant of requirements imposed by legislation and funding agencies, but there also seems to be a responsibility to ensure that information provides sufficient contextual detail for decision-making.

Session Title: Evaluating Community Health: Developing Capacity and Lessons Learned
Multipaper Session 265 to be held in Suwannee 11 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Gregg Gascon,  Ohio Education Association, gascong@ohea.org
Impact of Contextual Factors on the Integration of Chronic Disease Program Interventions: Lessons Learned
Presenter(s):
Shelly-Ann Bowen, University of South Carolina, bowensk2@mailbox.sc.edu
Stoisor-Olsson Lili, University of South Carolina, stoisol@dhec.sc.gov
Abstract: A Health Region in South Carolina was funded to implement proven-effective health risk reduction programs for underserved populations with chronic diseases. Following are the lessons learned about the effects of contextual factors on the evaluation of the program. Because program evaluation was not an objective of the program yet the funder required evaluation reports, evaluation discussions were initiated using program theory. These discussions encouraged program planners to write detailed descriptions of the program; form collaborative interdisciplinary teams; create logic models and timelines; and develop protocols for data collection. Systematic evaluation feedback thus proved to be an effective method for improving the program and building accountability for its public health actions. However, at the mid-point of the 2-year program, funding was withdrawn due to external factors. Consequently, the evaluation team had little or no data with which to report short-term outcomes. In articulating program theory, it is therefore imperative to consider the contextual factors underlying program implementation.
Ladder to Leadership: Developing the Next Generation of Community Health Leaders
Presenter(s):
Kimberly Fredericks, The Sage Colleges, fredek1@sage.edu
Heather Champion, Center for Creative Leadership, championh@ccl.org
Julia Jackson Newsom, University of North Carolina at Greensboro, j_jackso@uncg.edu
Tracy Enright Patterson, Center for Creative Leadership, pattersont@ccl.org
Joanne Carman, University of North Carolina at Charlotte, jgcarman@uncc.edu
Abstract: Ladder to Leadership is a national initiative of the Robert Wood Johnson Foundation, in collaboration with the Center for Creative Leadership. The initiative aims to enhance the leadership capacity of emerging leaders in community-based nonprofit health organizations serving vulnerable populations. For this initiative, nine cohorts from nine communities across the US will participate in a 16-month community-based leadership development program. Social network analysis was used to assess collaboration among program participants prior to the program, at the end of the program, and a year post-program. These data can be used to compare sites and for longitudinal data analysis within sites. Initial findings from baseline data suggest that collaboration occurs most frequently among like service providers and those that are in the same geographic area within multi-county sites. These findings suggest that avenues must be created to form relationships beyond boundaries in order to create efficient networks for health and healthcare.
Evaluating Health Interventions in Communities: Lessons Learned From the Frequent Users of Health Services Initiative
Presenter(s):
Karen W Linkins, The Lewin Group, karen.linkins@lewin.com
Jennifer Brya, JB Management Solutions, jbrya@jb-llc.com
Abstract: Evaluations of community-based health programs, especially those involving multiple provider systems, provide rich opportunities for learning about effective strategies to enhance and improve care access and quality. However, these evaluations encounter many challenges, including barriers to cross-system data sharing, the quality and consistency of secondary data, and the lack of standardization across systems. This paper highlights the experiences and lessons learned from the evaluation of the Frequent Users of Health Services Initiative (FUHSI), an initiative implemented in six counties in California. FUHSI aimed to promote the development and implementation of innovative, integrated approaches to addressing the comprehensive health and social service needs of frequent users of emergency departments. The overall goal of the Initiative was to create a more responsive system of care for frequent users of health services that improves patient outcomes and promotes systems change at the organizational and county levels.
Evaluating the Effects of Information Diffusion Through Key Influencers: Evaluation of The National Institute of Allergy and Infectious Diseases (NIAID) HIV Vaccine Research Education Initiative (NHVREI)
Presenter(s):
Caroline McLeod, NOVA Research Company, cmcleod@novaresearch.com
S Lisbeth Jarama, NOVA Research Company, ljarama@novaresearch.com
Dan Eckstein, NOVA Research Company, deckstein@novaresearch.com
Allison Zambon, NOVA Research Company, azambon@novaresearch.com
Abstract: The National Institute of Allergy and Infectious Diseases (NIAID) HIV Vaccine Research Education Initiative (NHVREI) is working to build support for HIV vaccine research among U.S. populations most affected by HIV/AIDS (i.e., African Americans, Hispanics/Latinos, and men who have sex with men). NHVREI disseminates information regarding HIV vaccine research through partnerships with community organizations; supportive messages are disseminated through outreach activities and the social networks of program staff, who are also key influencers within their communities. We developed a key influencer survey to evaluate the degree to which knowledge of, positive attitudes toward, and behaviors supportive of HIV vaccine research are diffused through staff of NHVREI-funded organizations to other key influencers in the highly impacted populations. Survey items assess precursors (i.e., knowledge, cognitions) to key influencer behaviors supportive of HIV vaccine research (e.g., speaking in support of HIV vaccine research). We will discuss methods and cognitive interview findings.

Session Title: Health-Related Evaluation Approaches Used in Senior Populations With Disability or Limited English Proficiency
Multipaper Session 266 to be held in Suwannee 12 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Glenn Landers,  Georgia State University, glanders@gsu.edu
Using a Participatory Approach to Develop a Health Communication Intervention in Three Languages for People With Limited Literacy, and Seniors and People With Disabilities on Medicaid
Presenter(s):
Carrie Graham, University of California Berkeley, clgraham@berkeley.edu
Beccah Rothschild, University of California Berkeley, beccah_rothschild@berkeley.edu
Linda Neuhauser, University of California Berkeley, lindan@berkeley.edu
Susan Ivey, University of California Berkeley, sivey@berkeley.edu
Susana Konishi, University of California Berkeley, susanamk@berkeley.edu
Abstract: Researchers at UC Berkeley used multi-phased participatory research to develop a guidebook for seniors and people with disabilities on Medi-Cal (California's Medicaid program). The guidebook, called 'What are my Medi-Cal choices?' was developed in three languages, English, Spanish and Chinese. Input on the content and format of the guidebook was collected from both beneficiaries and professionals through five different formative evaluation methodologies. Beneficiaries participated in in-depth interviews, focus groups and usability tests. Professionals who serve Medi-Cal beneficiaries participated in key informant telephone interviews and an advisory group. Developing effective communication materials for vulnerable populations can be challenging. There is no one 'formula'. Each design must be customized to fit the project goals and resources. For this project, participatory methods were extensive in order to develop effective communication for a population of seniors and persons with disabilities who have high levels of limited health literacy and limited English proficiency.
Using a Mixed Methods Evaluation Design to Improve Utility
Presenter(s):
Glenn Landers, Georgia State University, glanders@gsu.edu
Abstract: Aging and Disability Resources Centers (ADRC) are promoted by the Centers for Medicare and Medicaid Services and the Administration on Aging as models that make the long term service system more person-centered and consumer-directed, make it easier for people with disabilities of all ages to access information about home and community-based alternatives to institutional services, and support people of all income levels to live independently in their communities. ADRCs currently exist in 43 states. This study used a mixed methods evaluation design to achieve three main objectives: 1. Evaluate the implementation of three ADRC beta sites against the goals of the national ADRC program. 2. Evaluate the knowledge outcomes of ADRC stakeholders participating in a one-day workshop designed to increase interaction with regional ADRCs. 3. Evaluate the fiscal impact of ADRCs on the provision of institutional and community based long-term care.

Session Title: Weights and Measures: Resolving and Reporting on Differences Between Qualitative and Quantitative Findings
Multipaper Session 267 to be held in Suwannee 13 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Beverly Farr, MPR Associates, bfarr@mprinc.com
Abstract: This presentation reports on a study of 16 demonstration sites implementing a multiple pathways approach that integrates academic and technical skills and knowledge in a range of high schools. The variation in contextual features across the 16 sites provided the first challenge in conducting cross-site analyses, but establishing a balance between reporting the quantitative and the qualitative results, related to a notable high school reform strategy, proved to be the biggest challenge. Interest in the achievement results, which were mixed but indicative of positive effects, seemed to overshadow the strong positive findings from the qualitative study of implementation.
Celebrating Success: Cross-site Analysis of Qualitative Findings
Beverly Farr, MPR Associates, bfarr@mprinc.com
This paper describes the qualitative methods used to study implementation of the multiple pathways approach in 16 high schools in California. Selected as demonstration sites, the schools reflect a range of formats and contexts, from autonomous high schools to career academies to course sequences. Since the designers had developed a rubric reflecting quality indicators, it was possible to use it to develop classroom observation and other instruments to document implementation quality. While some of the qualitative outcomes (effects on student behaviors, attitudes, and perceived learning, as well as on teacher practices) were very strong, the political realities related to school reform and funding for such reforms led to an over-emphasis on the achievement results.
Responding to the Over-emphasis on Quantitative Results
Denise Bradby, MPR Associates, dbradby@mprinc.com
This paper describes the quantitative methods used to study the student achievement outcomes of the multiple pathways approach in 16 high schools in California. The achievement indicators included the state high school exit exam and the California Standards Tests, as well as grade-to-grade promotion, attendance, and graduation rates. The data were first analyzed for the network as a whole, against statewide results. When these results were controlled for race/ethnicity, the picture improved for program participants. The researchers also made appropriate comparisons for each site, either to the rest of the school in which the program was housed or to similar schools in the district. The foregoing analyses yielded a mixed picture. The most significant challenge was relating ratings on the implementation rubric to achievement results.

Session Title: Using Longitudinal Assessment Data: Feedback Tool for Teachers and Schools
Demonstration Session 268 to be held in Suwannee 14 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Nathan Balasubramanian, Centennial Board of Cooperative Educational Services, nbala@cboces.org
Paul Bankes, Thompson School District R-2J, bankesp@thompson.k12.co.us
Abstract: This interactive demonstration introduces a new method of using assessment data to improve teaching. Participants will learn to make maximal use of student assessment data to improve teaching in schools, using a simple graphic and performance summary. Our feedback tool includes an innovative and transparent growth model that employs a test-independent, normalized performance index. Through this feedback tool, teachers, departments, schools, and an entire district can see how high- and low-achieving students in different subgroups progress over time - not only on the state assessments, but also on school and district formative assessments. With constructive and timely feedback, teachers can be empowered to frame issues and challenges around the 4+2 essential questions of professional learning communities (PLCs) as they make needed adjustments to serve all students better. Participants will hear reports on the tool's initial use and how it is leading to improvements in teaching practices in the pilot schools in Colorado.

Session Title: Outcomes Measures in Educational Program Evaluation: Lessons About Validity
Multipaper Session 269 to be held in Suwannee 15 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Antionette Stroter,  University of Iowa, a-stroter@uiowa.edu
MOSART May Play out of Tune: A Mixed Method Look at Validity and Reliability Issues of the Misconception-Oriented Standards-based Assessment Resource for Teachers (MOSART)
Presenter(s):
James Salzman, Ohio University, salzman@ohio.edu
Ken Gardner, Aurora University, kgardner@aurora.edu
Abstract: The Misconception-Oriented Standardized Assessment Resource for Teachers (MOSART) has been touted for use in Math-Science Partnership (MSP) grants as a valid instrument for measuring changes in the content knowledge of teachers and their students. The multiple-choice items are aligned with K-12 physical and earth science content standards, as identified by the NRC's National Science Education Standards, and represent research-documented misconceptions concerning science concepts. This paper provides a mixed method analysis of the validity and reliability of MOSART. The authors first critically analyzed the quality of items under the lens of test development standards (Nitko, 2001; Nunnally & Bernstein, 1994), then empirically tested the items and scales with an MSP sample. These analyses reveal some concerns about the validity and reliability of the instrument. While not discounting the potential value of MOSART, the authors recommend that evaluators employ these analyses when choosing instruments that claim established validity and reliability and when reporting results.
The Analysis of the Classroom Behavior Observation Tool: Triangulating on Disruptive Classroom Behavior in the Evaluation Process
Presenter(s):
Steven Lee, University of Kansas, swlee@ku.edu
Julia Shaftel, University of Kansas, jshaftel@ku.edu
Jeaveen Neaderhiser, University of Kansas, jneaderh@ku.edu
Jessica Oeth, University of Kansas, joeth@ku.edu
Abstract: Many school reform initiatives include school-wide discipline or social-behavioral programs to reduce classroom behavioral problems that negatively impact academic achievement (Pelham et al., 2005). Evaluation of these initiatives requires reliable and valid behavioral measures of students at the classroom, school, district, or state level. Lee and Shaftel (2004, 2006, 2007) developed and validated surveys completed by teachers and students to identify the extent of problem behaviors and assets for students at the classroom level, called the Classroom Behavior and Asset Scale for Teachers (CBAST) and for Students (CBASS). To triangulate on the assessment of classroom behavior problems, the present study developed and analyzed a classroom observation tool called the Classroom Behavior Observation (CBO). A large sample of observations in high school classrooms from a Midwestern state yielded data on the reliability and validity of the CBO. The implications of using the tools in school program evaluation and directions for future research will be discussed.
Student Grades as an Outcome Measure in Program Evaluation: Issues of Meaning and Validity
Presenter(s):
Kelci Price, Chicago Public Schools, kprice1@cps.k12.il.us
Abstract: As the demand for impact evaluation in education continues to grow, it is important for evaluators to have valid measures for assessing program effectiveness. Although student grades are often used to assess the impact of educational interventions, there is considerable ambiguity about the validity of this measure. This research explores the validity of grades with reference to three main issues: 1) the relationship of grades to student knowledge, 2) the sensitivity of grades to changes in student knowledge, and 3) whether there are systematic school factors that affect the relationship between grades and student knowledge. The presentation covers both the high school and elementary levels, in math and reading. Implications of the findings for evaluators' use of grades as a measure of program impact are discussed.
Adapting Instruments to Measure Social Emotional Learning Outcomes in Young Children: Lessons Learned From the Evaluation of the Inner Resilience Program
Presenter(s):
Eden Nagler, Metis Associates, enagler@metisassociates.com
Susanne Harnett, Metis Associates, sharnett@metisassociates.com
Abstract: Having recently completed an evaluation of the impact of the Inner Resilience Program on teachers and their 3rd-5th grade students, we found there is a dearth of instrumentation to measure social and emotional learning (SEL) outcomes in young children. We will present our experiences identifying and adapting existing instruments for children. Decision-making about modifications and design choices will be discussed and final versions of the instruments will be shared during this paper presentation. Our evaluation findings will be presented and potential issues in interpreting results will be explored, including a discussion of the trade-off between richness of results and developmental appropriateness of instrumentation. An overall discussion of the lessons learned and remaining questions also will be facilitated. We hope that this presentation and discussion will help to advance the development of valid instrumentation for use with young children in the fast-growing field of SEL research and evaluation.
A National Study of Core Exemplary Teaching Characteristics: An Exploratory Construct Validation
Presenter(s):
Sheryl A Hodge, Kansas State University, shodge@ksu.edu
B Jan Middendorf, Kansas State University, jmiddend@ksu.edu
Linda P Thurston, Kansas State University, lpt@ksu.edu
Cindi Dunn, Kansas State University, ckdunn@ksu.edu
Abstract: Evaluators at the Office of Educational Innovation and Evaluation (OEIE) sought to 1) identify core exemplary teacher characteristics through an exhaustive review of the literature; 2) develop a construct for assessing the importance of these traits as identified by professionals in the field; and 3) explore whether emerging patterns of responses reveal underlying components within the larger construct. The sample (N = 719) was a nationally solicited, disproportionately stratified random sample of current teachers, principals, and superintendents. Equivalent geographical representation provided strong evidence for the external validity of the study design, while the capture of demographic characteristics enabled group comparisons, diminishing threats to internal validity. Results indicated that even though researchers, accrediting bodies, state and federal legislatures, and professional organizations define the characteristics of mastery teaching differently, current professional practitioners nevertheless report agreement on the most important traits associated with the daily practices of exemplary teachers.

Session Title: Strategies for Conducting Research on Evaluation
Multipaper Session 270 to be held in Suwannee 16 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Marvin Alkin,  University of California Los Angeles, alkin@gseis.ucla.edu
Using Citation Analysis to Study Evaluation Influence: Strengths and Limitations of the Methodology
Presenter(s):
Lija Greenseid, Professional Data Analysts Inc, lija@pdastats.com
Abstract: Citation analysis is an accepted, albeit sometimes controversial, methodology used to assess the impact of scientific research and researchers. Identifying highly cited scientific papers provides clues to influential theories and scientists within a field. This presentation explores the strengths and limitations of using citation analysis to assess the influence of program evaluations and evaluators. The presentation provides an overview of how to conduct citation analyses and shares findings from a citation analysis study of 246 evaluation reports and instruments produced by four National Science Foundation-funded program evaluations. This presentation is intended for scholars of evaluation use and influence, as well as practicing evaluators, who are interested in learning about citation analysis, exploring methods for studying evaluation influence, and hearing about which type of evaluation product was most highly cited by the field (it was neither a weighty cumulative report nor an AEA conference presentation!).
Reviewing Systematic Reviews: Meta-analysis of What Works Clearinghouse Computer-Assisted Interventions
Presenter(s):
Andrei Streke, Mathematica Policy Research Inc, astreke@mathematica-mpr.com
Abstract: The What Works Clearinghouse (WWC) offers reviews of evidence on broad topics in education, identifies interventions shown by rigorous research to be effective, and develops targeted reviews of interventions. This paper systematically reviews research on the achievement outcomes of computer-assisted interventions that have met WWC evidence standards (with or without reservations). Computer-assisted learning programs have become increasingly popular as an alternative to traditional teacher-student interaction for improving student performance across a variety of topics. The paper systematically reviews (1) computer-assisted programs featured in intervention reports across WWC topic areas, and (2) computer-assisted programs within the Beginning Reading and Adolescent Literacy topic areas. This work updates previous work by the author, includes new and updated WWC intervention reports (released since January 2008), and investigates which program and student characteristics are associated with the most positive outcomes.
Integrating Mental Model Approach in Research on Evaluation Practice
Presenter(s):
Jie Zhang, Syracuse University, jzhang08@syr.edu
Abstract: Program evaluation, crossing many disciplines and fields, is one of the most fully developed and important parts of the broader evaluation field, yet there is a scarcity of rigorous and systematic research on evaluation practice. Answering repeated appeals for more empirical studies of evaluation practice, this research attempts to contribute empirical knowledge to evaluation, which is necessary to explain the nature of evaluation. Grounded in mental model research, this study proposes a conceptual framework to examine constructs such as reasoning, problem solving, and mental models, which interact with each other and influence evaluation practice. The hypothesized relationships among these constructs will be investigated using structural equation modeling. A better understanding of these relationships can benefit evaluation theory-building, improve understanding of evaluation practice, and contribute to better design of evaluation training and instruction.

Session Title: Evaluative Directions in Developing Practices in Higher Education Assessment
Multipaper Session 271 to be held in Suwannee 17 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Syd King,  New Zealand Qualifications Authority, syd.king@nzqa.govt.nz
Discussant(s):
Rochelle Michel,  Educational Testing Service, rmichel@ets.org
Assessment of Higher Educational Variables That Lead to Academic Success
Presenter(s):
Steven Middleton, Southern Illinois University at Carbondale, scmidd@siu.edu
MH Clark, Southern Illinois University at Carbondale, mhclark@siu.edu
Abstract: Institutions of higher education have placed an emphasis on student learning for a variety of reasons, including the improvement of education and the future marketability of students in order to enhance the schools' reputation and attract potential students. Grade point average (GPA) is one way to evaluate student progress. This study used 64 first-year students to examine how academic integration mediates the relationship between academic motivation and college GPA. Of the seven types of academic motivation examined, academic integration fully mediated the relationships between two subscales of intrinsic academic motivation (knowledge and accomplishment) and GPA. Thus, higher intrinsic motivation for knowledge and accomplishment leads to better academic integration, which in turn leads to higher GPAs. Therefore, institutions of higher education can augment student achievement by emphasizing social and academic programs that promote academic integration.
A Collaborative Approach to the Development of a University-wide Alumni Outcomes Assessment: Theory to Practice
Presenter(s):
Jennifer Reeves, Nova Southeastern University, jennreev@nova.edu
Candace Lacey, Nova Southeastern University, lacey@nova.edu
Barbara Packer-Muti, Nova Southeastern University, bpacker@nova.edu
Abstract: This session will provide the opportunity to discuss the development of a university-wide alumni outcomes assessment. Internal academic review at Nova Southeastern University (NSU) requires that all programs provide data on student learning outcomes from current students and alumni. With 14 different academic colleges awarding undergraduate, graduate, and professional degrees, collaboration is essential to develop a single, university-wide assessment instrument. A team of university representatives, including faculty members, researchers, administrators, and an alumni development officer worked with an external consultant to design a uniform, annual assessment grounded in the university's core values. The assessment provides a mixed methods approach to evaluating alumni feedback with both closed and open-ended questions.
Promoting School Accountability Through the Formative Use of Summative Test Results
Presenter(s):
Rachael Tan, Schroeder Measurement Technologies Inc, rtan@smttest.com
Abstract: Within the licensure industry, examination results are generally used only to decide whether to grant a license allowing a candidate to practice in their field. However, Multidimensional Item Response Theory (MIRT) analyses of candidate test results allow formative information to be gathered regarding how well schools are training students for practice within their profession. This study uses the results from a 100-item Computerized Adaptive Test (CAT) to provide prescriptive information to 42 schools regarding how well their candidates performed in 14 content areas. Such information allows schools to determine in which content areas their candidates are deficient and in which areas they are successfully preparing candidates for professional practice. By employing MIRT to identify curricular shortcomings, evaluation efforts can be focused on improving instruction in the most critical areas.
Assessing Undergraduate Student Learning Outcomes on Higher Order Skills under Open and Closed-Book Exam Conditions in Online Learning
Presenter(s):
Tary Wallace, University of South Florida, tlwallace@sar.usf.edu
Haiyan Bai, University of Central Florida, hbai@mail.ucf.edu
Abstract: The uniqueness of web-based courses leads faculty to question whether well-established and prudent assessment procedures yield valid scores on higher-order skills tests when undergraduate students take open-book exams over the Internet. This study examines the attitude and achievement scores of students who took open- and closed-book exams in an undergraduate measurement class. Multivariate repeated measures are used to study differences in changes over time in attitudes and achievement scores between the closed-book and open-book conditions, with 167 undergraduate students participating at a large university in the southeastern United States. The results provide useful information to faculty members for informed decision-making with regard to open-book testing of higher-order measurement skills in web-based and web-enhanced courses.

Session Title: Who Knows? Facilitating Meaningful and Manageable Participatory Data Analysis
Demonstration Session 272 to be held in Suwannee 18 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Kristin Bradley-Bull, New Perspectives Consulting Group, kristin@newperspectivesinc.org
Thomas McQuiston, USW Tony Mazzocchi Center for Safety, Health and Environmental Education, tmcquiston@uswtmc.org
M Josie Beach, USW Local Union 12-369, josiebeach@charter.net
Tobi Mae Lippin, New Perspectives Consulting Group, tobi@newperspectivesinc.org
Abstract: How can participatory evaluators simultaneously support high-quality data analysis and meaningful participation of lay people and other evaluation team members in the analysis process? Join this session for a practical look at some of the key strategies we have developed over a decade of facilitating participatory evaluation. Evaluation processes featured here involve teams comprised of labor union staff, rank and file workers, and external consultants. We will discuss some of the broader participatory strategies we use and the types of evaluation projects we conduct (national in scope; designed to leverage change at worksite, industry, and national policy levels). Then we will walk through the specifics of the data analysis processes we have developed. These include: providing targeted data analysis training and support to the team, developing intermediate data reports, using iterative processes of large and small group work including strategically constructing and staging small group analysis activities, and using technology.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Building 4-H Evaluation Capacity Through Learning Communities: Tales From the National 4-H Evaluation Learning Communities Project
Roundtable Presentation 273 to be held in Suwannee 19 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Sarah Baughman, Virginia Tech, baughman@vt.edu
June Mead, Cornell University, jm62@cornell.edu
Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
Mary Elizabeth Arnold, Oregon State University, mary.arnold@oregonstate.edu
Nancy Franz, Virginia Tech, nfranz@vt.edu
Abstract: The National 4-H Evaluation Learning Communities are exploring the use of learning communities as a means to increase evaluation capacity in four state 4-H programs: Ohio, Oregon, Vermont, and Virginia. Each learning community is unique to the culture and context of its members. Presenters will discuss state-level learning communities in practice and the lessons learned from this collaborative process. The focus will be on the process of engaging and sustaining 4-H Youth Development professionals in evaluation learning. Each learning community has used 'Evaluating for Impact: Educational Content for Professional Development' as a guiding framework for the learning process, but at the same time each learning community is autonomous and operates differently. Presenters will explore lessons learned and best practices gleaned from this project for increasing evaluation capacity among 4-H Youth Development professionals.
Roundtable Rotation II: Building an Extension Community of Practice in Evaluation
Roundtable Presentation 273 to be held in Suwannee 19 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
Heather Boyd, Virginia Tech, hboyd@vt.edu
Michael Lambur, E-Extension, lamburmt@vt.edu
Abstract: The national eXtension Evaluation Community of Practice (E-CoP) is a virtual community of university and local educators engaged in teaching and applying best practices in evaluation. The purpose and work of this new organization are described and discussed in three themes: 1) the purpose and work of eXtension communities of practice; 2) interactive webcasts sponsored by the E-CoP; and 3) links between the E-CoP and the AEA E&E Topical Interest Group. Communities of practice facilitate the relevant, ongoing, interactive learning and collaboration critical for building evaluation capacity. Virtual communities maximize synchronous and asynchronous learning with minimal fiscal and time expenditures. Evaluators have key roles in assessing how and why these organizational and technological innovations build organizational or individual capacity. This roundtable provides participants with opportunities to explore how to make the most of a community of practice in our own field of evaluation.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Understanding the Effectiveness of Higher Education Coursework in Preparing Teachers for the Profession
Roundtable Presentation 274 to be held in Suwannee 20 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Leigh D'Amico, Independent Consultant, kale_leigh@yahoo.com
Tammy Pawloski, Francis Marion University, tpawloski@fmarion.edu
Janis McWayne, Francis Marion University, jmcwayne@fmarion.edu
Abstract: Higher education institutions often struggle with evaluating the impact of coursework on career preparation and success. The Francis Marion University Center of Excellence to Prepare Teachers of Children of Poverty (COE) is committed to preparing pre-service teachers to effectively implement curriculum and instruction to maximize the experiences and achievement of children of poverty. The COE developed five modules that are embedded into numerous education courses. Through surveys, focus groups, and interviews with pre-service teachers, faculty members, current teachers, and school administrators, the COE examines the impact of these modules on teacher preparedness. Results demonstrate that pre-service teachers who complete module-based coursework feel confident in their ability to teach children of poverty. In addition, school administrators have confirmed the importance of the module-based coursework in teacher preparation. Assessments to determine knowledge gained through module-based coursework and a data system to track teacher stability and student outcomes are forthcoming.
Roundtable Rotation II: Context and Validity in Evaluating Measures of Teacher Beliefs
Roundtable Presentation 274 to be held in Suwannee 20 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Heeju Jang, University of California Berkeley, heejujang@berkeley.edu
Abstract: Understanding teacher beliefs is an important component of teacher education and professional development because beliefs, knowledge, and practice operate with one another in the process of learning to teach. Currently, teacher beliefs are assessed in several different ways. Journal reflections or interviews to assess teacher beliefs are interpretive in nature, since they take context into account; yet such approaches tend to lack validity, reliability, and generalizability. Surveys, typically with Likert-scale items, are easier to administer and allow data collection at a large scale, but Likert-scale items tend to be written broadly and placed out of context. Thus, respondents need to interpret survey items in whatever way makes sense to them, which is a major threat to validity. In this paper, I illustrate the challenges of assessing teacher beliefs using the TIMSS 2003 teacher questionnaire in order to bring attention to the role of context in discussions of the validity of measures of teacher beliefs. I also present an alternative approach to assessing teacher beliefs that takes context into consideration.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Putting Data and Evaluation Into Context for Non-profits
Roundtable Presentation 275 to be held in Suwannee 21 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Campbell Bullock, San Joaquin Community Data Cooperative, cbullock@co.san-joaquin.ca.us
Olga Goltvyanitsa, San Joaquin Community Data Cooperative, ogoltvyanitsa@co.san-joaquin.ca.us
Abstract: Our evaluation team at the San Joaquin Community Data Cooperative has been awarded a two-year grant from the California Wellness Foundation. This grant, entitled Improving Health through Data and Evaluation, centers on providing data and evaluation training to at least 10 non-profits in San Joaquin County in the Central Valley of California. This work started in January of 2009 and we are proposing to conduct a roundtable at this year's AEA conference in order to share our progress with this work. As part of this conversation we will be sharing our overall training approach which centers on four comprehensive components. We will also be sharing our pre-survey, our organizational data and evaluation needs assessment, our training curriculum, and the lessons learned to date. A key part of the roundtable conversation will center on obtaining feedback from attendees as to how we can add to or improve our trainings.
Roundtable Rotation II: Challenging The Dominant Paradigm: Lessons From Evaluators and Their Foundation and Non-profit Partners
Roundtable Presentation 275 to be held in Suwannee 21 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Susan P Curnan, Brandeis University, curnan@brandeis.edu
Ricardo Millett, Millett & Associates, ricardo@ricardomillett.com
Cheryl Grills, Loyola Marymount University, cgrills@lmu.edu
Abstract: This roundtable session presents the approaches taken to evaluation design and implementation in complex community initiatives with the thorough and intense engagement of diverse non-profit community partners, social activists, funders and evaluation experts. In each case, community partners challenged the notion that 'the dominant evaluation paradigm' (complex experimental design including treatment and control groups) was appropriate for social, educational and human service initiatives. They pressed evaluators and funders to consider alternative methods that not only 'prove' the worth of a program, but also help 'improve' the practice. Further, in these days of scarce resources, the case examples will demonstrate how to ensure that evaluators address factors related to context, implementation and outcomes, even when the historical and recent pressure from government and other funders may trend toward measuring 'outcomes only.' Constructive lessons and courageous mistakes will be shared freely.

Session Title: Asa Hilliard Think Tank: Implications of Culturally Responsive Methodology: Where Do We Go From Here?
Think Tank Session 276 to be held in Wekiwa 3 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Cindy Crusto, Yale University, cindy.crusto@yale.edu
Discussant(s):
Leona Johnson, Hampton University, leona.johnson@hamptonu.edu
Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Julie Nielsen, University of Minnesota, niels048@umn.edu
Abstract: This Diversity Committee-sponsored think tank will discuss the implications of the work of Dr. Hilliard for evaluation in diverse settings. In particular, participants will discuss the role of the evaluator in bridging potential conflicts in expectations and value systems between key stakeholders when evaluations are conducted in non-mainstream contexts.

Session Title: Unique Challenges With International Monitoring and Evaluation Activities: Examples From the Centers for Disease Control and Prevention (CDC)
Multipaper Session 277 to be held in Wekiwa 4 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Thomas Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov
Abstract: Monitoring and evaluation (M & E) activities conducted in international settings often present unique challenges and opportunities due to multiple factors, including the lack of resources (financial and personnel), lack of capacity to monitor and evaluate activities, and limited understanding of the importance and use of evaluation. Further, logistical challenges such as limited personal interactions due to distance and time-zone differences can create additional complexities. This panel will describe how two programs at the Centers for Disease Control and Prevention have implemented M & E activities in international settings. The first presentation will focus on the development and implementation of a M & E framework in six international locations, with discussion on the challenges, successes, and lessons learned during this process. The next two presentations will describe specific strategies and solutions that have been developed, implemented, and in some cases replicated, to address particular M & E issues.
Multi-site Evaluations: Lessons Learned in Implementing a Monitoring and Evaluation Framework for the Centers for Disease Control and Prevention's (CDC's) Global Disease Detection Program
Rachel Nelson, Centers for Disease Control and Prevention, rqk0@cdc.gov
Suzanne Elbon, Ciber Inc, sge4@cdc.gov
Namita Joshi, Centers for Disease Control and Prevention, ngs5@cdc.gov
Alison Kelly, Centers for Disease Control and Prevention, ayk7@cdc.gov
Douan Kirivong, Centers for Disease Control and Prevention, bpq7@cdc.gov
Naheed Lakhani, Centers for Disease Control and Prevention, bqv8@cdc.gov
Dana Pitts, Centers for Disease Control and Prevention, gqo1@cdc.gov
Scott F Dowell, Centers for Disease Control and Prevention, sfd2@cdc.gov
The Centers for Disease Control and Prevention (CDC) supports Global Disease Detection (GDD) Regional Centers in six countries that focus on early detection and rapid containment of emerging infectious diseases. The GDD Program uses a comprehensive monitoring and evaluation (M & E) framework to collect accomplishments and achievements from the GDD Centers. The M & E process began in 2006, with an open-ended questionnaire, and has since developed into a more formalized process that includes standardized indicator definitions, a database collection tool, and a quarterly-based reporting schedule. In this presentation, we will share challenges and lessons learned from the past three years, including a step-by-step description of how we developed, finalized, and implemented the M & E framework across six international Centers.
Monitoring and Evaluation From Afar - How the Centers for Disease Control and Prevention's (CDC's) Sexually Transmitted Diseases (STD) Program Approaches Time and Distance Constraints in Evaluation
Sonal Doshi, Centers for Disease Control and Prevention, sdoshi@cdc.gov
Since 1990, the Centers for Disease Control and Prevention, Division of Sexually Transmitted Disease Prevention (CDC/DSTDP) has provided categorical funding for sexually transmitted disease (STD) prevention activities through grants to 65 cities, states, and territorial public health agencies, including six United States Associated Pacific Island Jurisdictions (USAPIJs). These six jurisdictions are: American Samoa, Commonwealth of the Northern Mariana Islands (CNMI), Federated States of Micronesia (FSM), Guam, Republic of the Marshall Islands, and Republic of Palau. Although CDC funds STD prevention activities in these areas, our partners in the USAPIJs lack sufficient resources, infrastructure, and access to consistent technical assistance to fully implement comprehensive and sustainable STD programs compared with their mainland health department counterparts. This presentation will discuss the process, challenges, and lessons that CDC/DSTDP headquarters learned while implementing an evaluation of an adolescent health center in Saipan, CNMI while based in Atlanta, GA.
The Value of an Electronic Database in Standardizing and Enhancing Evaluation Activities for the Centers for Disease Control and Prevention's (CDC's) Global Disease Detection Program
Suzanne Elbon, Ciber Inc, sge4@cdc.gov
Rachel Nelson, Centers for Disease Control and Prevention, rqk0@cdc.gov
Douan Kirivong, Centers for Disease Control and Prevention, bpq7@cdc.gov
Dana Pitts, Centers for Disease Control and Prevention, gqo1@cdc.gov
Naheed Lakhani, Centers for Disease Control and Prevention, bqv8@cdc.gov
Scott F Dowell, Centers for Disease Control and Prevention, sfd2@cdc.gov
The Centers for Disease Control and Prevention (CDC) requires that its six Global Disease Detection (GDD) Regional Centers submit annual reports as part of its monitoring and evaluation process. During the first two years of data collection, GDD Centers reported accomplishments via paper-based reporting, which resulted in a cumbersome reporting process and difficulties in summarizing data. To standardize and simplify data collection, as well as enhance reporting, a database was developed for the GDD Centers. Presenters will discuss the factors that influenced the design of the database, the pilot-testing process, and implementation across all GDD Centers, and will demonstrate how the database has enhanced the overall GDD monitoring and evaluation process. Limitations of the current design and plans for revisions will also be addressed.

Session Title: Youth Participatory Evaluation: Models, Strategies, and Outcomes
Panel Session 278 to be held in Wekiwa 5 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Jane Powers, Cornell University, jlp5@cornell.edu
Discussant(s):
Shep Zeldin, University of Wisconsin, rszeldin@facstaff.wisc.edu
Abstract: This session will focus on an approach that engages young people in evaluating the programs designed to serve them and in conducting research on issues and experiences which affect their lives. Although participatory approaches have been used for decades, involving adolescents is a relatively recent, but increasingly visible, phenomenon. Three projects will be presented which have used different youth participatory evaluation models and which have engaged youth in a wide range of evaluation roles. All projects used their evaluation data to create change at the program, organizational, and community levels. Presenters will describe the strategies used, the evaluation outcomes, and the benefits of the approach for young people, programs, communities, and the practice of evaluation. Participants will be encouraged to discuss the potential application of the approach to current work and future projects.
Speaking for Themselves: Multicultural Participatory Evaluation With Young People
Katie Richards-Schuster, University of Michigan, kers@umich.edu
What are the lessons learned from a multicultural participatory evaluation with young people? This presentation will draw on findings from a youth participatory evaluation project that involved a multi-racial team of youth developing knowledge about young people's perspective on race, ethnicity, and segregation in a major metropolitan region and evaluating the impact of a program designed to increase intergroup dialogue, challenge segregation, and create change. This represents a model for examining the importance and impact of multicultural participatory evaluation with young people, highlighting the importance of age and race as critical factors in information gathering and analysis, and recognizing the importance of opening opportunities for new voices in creating knowledge about diversity and community. The presentation will focus on the process of the evaluation as a case example for discussion. It will discuss the evaluation process, lessons learned, facilitating and limiting factors, impact and outcomes, and observations for future practice.
Organizing Ourselves: Working With Youth to Assess Their Organizational Structures
Kim Sabo Flores, Kim Sabo Consulting, kimsabo@aol.com
Over the past decade there has been a proliferation of youth-led and youth-run organizations both nationally and internationally. However, these organizations often have hierarchical structures similar to those found in adult organizations, with a limited number of youth in leadership roles. This presentation will share work being done in three separate projects that are engaging young people in the process of critically assessing and reflecting on their organizational structures and how those structures support or prevent inclusive youth participation. The audience will learn more about the successes and challenges of engaging youth in this work and will have the chance to review organizational assessments developed by young people.
Conducting a Community Needs Assessment in Collaboration With Homeless Youth
Jane Powers, Cornell University, jlp5@cornell.edu
A participatory approach was used to conduct a community needs assessment on the scope and nature of youth homelessness. We collaborated with a group of formerly homeless youth to carry out the study. The youth were involved in all aspects of the project from designing the tools and methodology, to recruiting subjects, collecting the data, interpreting the findings, and presenting results to key community stakeholders. Youth participation was integral to the success of this project: it enabled us to gather data on a hard-to-reach population, deepened understanding of the findings, and led to increased community awareness and system level change. The multiple benefits of this approach will be discussed as a strategy to promote positive youth development, advance knowledge, impact policy, and improve services for homeless youth.

Session Title: Improving on Peer Review, Publication, and Patent Analysis: Views From Denmark, Taiwan and Korea
Multipaper Session 279 to be held in Wekiwa 6 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
David Bartle, Ministry of Economic Development, david.bartle@med.govt.nz
The Peer Review and Dialogue: Including Research Organization in an Evaluation Process
Presenter(s):
Finn Hansson, Copenhagen Business School, fh.lpf@cbs.dk
Abstract: The introduction of an audit culture in public universities and research organizations all over Europe has set up new conditions for the role and function of the traditional peer review system. New, primarily quantitative, performance-based evaluation systems built on bibliometrics have taken the lead in research evaluation. A key problem with these new research evaluation systems is their standardized and normalized approach: they aim at measuring in relation to stable and well-defined disciplinary and organizational surroundings. Front-line research, on the other hand, is increasingly based on often temporary network collaborations and cross- and transdisciplinary combinations, all of which are very difficult to measure in a bibliometrics-based system. The paper discusses whether introducing an organized dialogue into the peer review system can enhance the evaluation's ability to capture these new and often temporary organizational dimensions of research.
Analyses and Evaluations of the Academic Activities in Humanities and Social Sciences in Taiwan
Presenter(s):
Chia-Hao Hsu, Science & Technology Policy Research and Information Center, chiahao@mail.stpi.org.tw
Yu-Ling Luo, Science & Technology Policy Research and Information Center, ylluo@mail.stpi.org.tw
Kai-Lin Chi, Science & Technology Policy Research and Information Center, klchi@mail.stpi.org.tw
Abstract: Publication is one way to demonstrate academic research performance. Because judgments in the humanities and social sciences are more subjective, evaluation in these fields carries greater uncertainty than in the natural sciences; a broader reference scope and a verifiable bibliometric method are therefore needed. Output styles should first be clarified, and appropriate indicators then developed from the data gathered. This study uses a fully integrated database of the books and papers of Taiwanese university scholars to establish academic evaluation indicators, and also sketches the contours of collaboration styles in knowledge diffusion among these scholars. The study can not only guide the evaluation of academic performance in the humanities and social sciences, but also serve as a reference for the authorities when they formulate related reward and evaluation systems in the future.
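As a purely illustrative sketch (not the authors' system), the following Python fragment shows how simple output indicators and a co-authorship collaboration pattern could be derived from a publication database of this kind, using the open-source pandas and networkx packages; the column names and toy records are assumptions made for the example only.

    # Illustrative only: hypothetical columns for a scholars' publication database.
    from itertools import combinations
    import pandas as pd
    import networkx as nx

    # One row per publication; "authors" lists the scholars credited on it.
    pubs = pd.DataFrame({
        "title":   ["Book A", "Paper B", "Paper C", "Book D", "Paper E"],
        "type":    ["book", "journal", "journal", "book", "journal"],
        "year":    [2006, 2007, 2007, 2008, 2008],
        "authors": [["Chen"], ["Chen", "Lin"], ["Lin", "Wang"], ["Wang"], ["Chen", "Wang"]],
    })

    # Indicator: output counts per scholar, broken down by publication type.
    counts = (pubs.explode("authors")
                  .groupby(["authors", "type"])
                  .size()
                  .unstack(fill_value=0))
    print(counts)

    # Collaboration contour: co-authorship network; a scholar's degree approximates
    # the breadth of his or her collaborations.
    G = nx.Graph()
    for author_list in pubs["authors"]:
        G.add_nodes_from(author_list)
        G.add_edges_from(combinations(author_list, 2))
    print(dict(G.degree()))

A real system would replace the toy records with the integrated Taiwanese publication database and extend the indicators (for example, weighting by publication type or computing network centrality), but the basic pipeline of counting outputs and building a co-authorship graph would follow this shape.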
The Study on the Patent Performance of National Research and Development (R&D) Programs in Korea Using Survival Analysis
Presenter(s):
Seongjin Kim, Korea Institute of Science and Technology Evaluation and Planning, shaqey@kistep.re.kr
Seung Jun Yoo, Korea Institute of Science and Technology Evaluation and Planning, biojun@kistep.re.kr
Abstract: Survival analysis is a class of statistical methods for studying the occurrence and timing of events; this paper defines the event as the occurrence of a patent. Every year KISTEP surveys the patents related to Korea's National R&D programs, and this paper uses those data. The data include R&D investment and are sorted by research stage, technology type, and socio-economic objective, so they reveal the portfolio of national R&D investment. They also show when each R&D program started and when its first patent occurred, which means we can measure the interval until the event occurs. Using R&D investment and the time to the first patent, we apply survival analysis to identify the factors and fields that have the greatest effect on the occurrence of a patent. From the results, we can determine where the government should invest so that first patents occur more quickly, and suggest which fields merit additional investment to develop more patents.
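As a minimal, hypothetical sketch of the kind of analysis described (not the authors' code or data), the fragment below fits a Kaplan-Meier curve and a Cox proportional hazards model for time to first patent using the open-source lifelines package; the field names, units, and records are invented for illustration.

    # Illustrative only: synthetic program-level records, not KISTEP survey data.
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter

    # One row per R&D program: months from program start to first patent (duration),
    # whether a patent was observed during follow-up (event; 0 = censored, no patent yet),
    # R&D investment, and a flag for basic research.
    programs = pd.DataFrame({
        "months_to_first_patent": [14, 30, 9, 48, 22, 36, 11, 60, 18, 27, 40, 15],
        "patent_observed":        [1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1],
        "investment_billion_krw": [5.2, 1.1, 8.4, 0.9, 3.3, 1.8, 6.7, 0.5, 4.1, 2.6, 1.4, 3.0],
        "basic_research":         [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    })

    # Kaplan-Meier estimate: probability that a program is still without a patent over time.
    kmf = KaplanMeierFitter()
    kmf.fit(programs["months_to_first_patent"], event_observed=programs["patent_observed"])
    print(kmf.survival_function_)

    # Cox proportional hazards model: which factors speed up (hazard ratio > 1)
    # or delay the occurrence of the first patent.
    cph = CoxPHFitter()
    cph.fit(programs, duration_col="months_to_first_patent", event_col="patent_observed")
    cph.print_summary()

In this setup, programs that have not yet produced a patent enter the model as censored observations rather than being dropped, which is the main advantage of survival analysis over simple patent counts when comparing investment fields.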

Session Title: Multi-Level and Multi-Factor Approach to Research Program Evaluation: Designing and Implementing the Evaluation of the European Union's Framework Programs for Research and Technology Development
Panel Session 280 to be held in Wekiwa 7 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Neville Reeve, European Commission, neville.reeve@ec.europa.eu
Abstract: Evaluation of the European Union's multi-billion-Euro research Framework Programs (FPs) is a complex task involving many actors and multiple levels of activity across the EU. The European Commission has the role both of coordinating the overall approach to the evaluation of the FPs and of supporting the development of good practice and the sharing of information on research evaluation in the EU. In recent years the Commission's system for research evaluation has been considerably revamped to take account of changing needs. Changes either made or envisaged include novel types of evaluation study, stronger coordination, better use of systematically collected data for the purposes of monitoring, and more emphasis on the need for evaluation of long-term impacts, including institutional and structural changes. This panel aims to draw together several of these themes from the range of current activities that are of interest to policy makers and practitioners.
Objectives and Constraints With Strategy for Research Program Evaluation: Experience of Developing the European Union's (EU) approach to Evaluation of the Framework Programs
Peter Fisch, European Commission, peter.fisch@ec.europa.eu
European Commission research evaluation activities have evolved considerably since the start of the Framework Programs in the early 1980s. In general there has been a trend towards greater use of independent studies and quantitative data. The evaluation system is composed of actors (the European Commission, the research performers, evaluation consultants), links (networks to share information and best practice) and rules governing implementation (European Commission wide standards for evaluation and guides to support the implementation of evaluation techniques). This presentation will chart the evolution of the evaluation system and explain the major components which make up that system. It will describe the broader context and the policy objectives which have helped shape this evolution. The presentation will define a set of measures which can be used for measuring the performance of an evaluation system. It will reveal the links between evaluation implementation and context at the systemic level.
Evaluating the Rationale, Implementation and Achievements of a €20 Bn Research Program? Meta Evaluation of Evidence for the Evaluation of the Sixth Framework Programs
Erik Arnold, Technopolis, erik.arnold@technopolis-group.com
The evaluation of the 6th Framework Program (2002-2006) was carried out in 2008 by a panel of 13 independent experts. It was based on a huge array of evidence, including around 30 independent studies, self-assessments, and impact assessments from the EU Member States. The report from this evaluation was delivered in February 2009 and is now being widely disseminated. This presentation will examine in detail the work of the panel of experts in carrying out the evaluation. It will focus in particular on a meta-analysis of the evidence base that was used. This will provide both an analytical perspective on the challenges of combining disparate data sources and practical rules of thumb, along with discussion of the pitfalls to be overcome.
Evaluating Research Policy: Understanding and Assessing Policy To Develop The European Research Area
Philippe Laredo, Manchester Business School, philippe.laredo@mbs.ac.uk
This evaluation of policy concerns the 'European Research Area dimension of Community activities'. It provides a conceptual framework for the European Research Area (ERA) in order to consider its aims, roles, and functions. It begins by positioning the ERA within the Lisbon process, which shows that, for the European Council, the ERA is a means towards the central objective of becoming 'the most competitive knowledge based economy and society'. It then considers two approaches: the first deals with the activities the Commission considers central to achieving the ERA; the second examines the objectives against which to benchmark FP6, which was designed to help realise the ERA. Complementary discussion focuses on the rationales for building the ERA. Once account is taken of the fact that the ERA is not a state but the repeated outcome of a long-lasting process of Europeanisation, four dimensions of the rationale are analysed: fragmentation, knowledge production dynamics, the innovation capabilities of existing industries, and societal challenges.
Monitoring and Assessing the Progress of Research Programs: Implementation of Performance Monitoring and Interim Evaluation of the EU's 7th Framework Program
Neville Reeve, European Commission, neville.reeve@ec.europa.eu
Costas Paleologos, European Commission, pjf@hq.cas.ac.cn
Martin Ubelhor, European Commission, 
The 7th Framework Program marked a huge step forward for EU research. Not only was it larger (around 50 billion Euros) and broader in the range of research areas covered, it also contained important novelties such as the European Research Council (ERC) for frontier research and Joint Technology Initiatives (JTIs) to build industry-led consortia of critical research mass in key technology areas. The presentation will include analysis of the separate exercises to monitor performance and report on FP7 progress. These include a new system of implementation monitoring based on core indicators, a progress report in 2009, and an interim evaluation in 2010. Issues to be examined will be the means to strengthen the connection between evaluation and policy and how to assess research in progress. The presentation will also explore coordination issues in the light of the connected exercise being carried out by DG Information Society and Media on ICT research.

Session Title: Global Perspectives on Human Resource Program Evaluation
Multipaper Session 281 to be held in Wekiwa 8 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Business and Industry TIG
Chair(s):
Amy Gullickson, Western Michigan University, amy.m.gullickson@wmich.edu
A Case Study of HR Program Evaluation of Merger and Acquisition
Presenter(s):
Chien Yu, National Taiwan Normal University, yuchienping@yahoo.com.tw
Chin-Cheh Yu, National Taiwan Normal University, jackfile971@gmail.com
Abstract: Mergers and acquisitions have been used frequently over the last two decades. In developed nations, the process is driven by free market forces; this is not the case in Taiwan, where banks, insurance companies, and securities firms are encouraged to merge and public banks are urged to be acquired by private ones. One case is quite unique because it merges a public bank with a foreign insurance company. The organizational cultures of the three organizations involved are very different: the public bank is typically labeled 'conservative', the foreign insurance company 'open', and the merged entity 'median'. Making the merger process smooth is a very tough job for the HR department, and this unique experience deserves further exploration. The main purpose of the study is to examine the effectiveness of the HR programs employed in these merger and acquisition processes.
Organizational Conflict: Assessing Effectiveness of Dispute Resolution Systems
Presenter(s):
Jeannie Trudel, Indiana Wesleyan University, jeannie.trudel@indwes.edu
Ray Haynes, Indiana University, rkhaynes@indiana.edu
Abstract: Workplace conflict permeates organizations at all levels and poses a substantial threat to organizational effectiveness, hurting the bottom line. This presentation discusses the need for assessing and evaluating the effectiveness of organizational dispute resolution systems, given the trend of corporate America moving away from formal litigious processes to manage conflict in the workplace. Research indicates that alternative dispute resolution (ADR) methods are time- and cost-effective for resolving organizational conflict, particularly in the area of employment. ADR includes mediation, arbitration, ombudspersons, conciliation, fact-finding, mini-trials, and peer reviews. As more organizations move toward a systemic perspective in managing workplace conflict, the design and implementation of dispute resolution systems have gained wide acceptance. However, the effectiveness and success of such systems remain to be assessed. A model and framework for assessment are presented.

Session Title: Environmental Education Evaluation: Time to Reflect, Time for Change
Multipaper Session 282 to be held in Wekiwa 9 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Matthew Birnbaum, National Fish and Wildlife Foundation, matthew.birnbaum@nfwf.org
Abstract: This panel is based on a special issue of the journal Evaluation and Program Planning (August 2009 release) that is dedicated to exploring the state of evaluation in environmental education. Situated at the intersection of environmental conservation and youth and adult education, environmental education is a 50-year-old discipline seeking to contribute to environmental sustainability through a diversity of practices, from information dissemination to capacity building. The common denominator is education, typically in non-formal settings, connecting individuals to nature with an objective to change "the learner's cognitive, affective and participatory knowledge, disposition and skills" (Carleton-Hug and Hug, Chapter Two of this Special Issue). Only recently has evaluation been recognized as an important tool for program development and assessment of effectiveness. As such, those evaluating environmental education programs face a host of opportunities and challenges.
Environmental Education Evaluation: Time to Reflect, Time for Change
Kara Crohn, University of California Los Angeles, ksdcrohn@sbcglobal.net
Matthew Birnbaum, National Fish and Wildlife Foundation, matthew.birnbaum@nfwf.org
Evaluation in environmental education is fairly nascent despite decades-long attention to its importance. In setting the context for the chapters appearing in this special issue of Evaluation and Program Planning, particular attention is devoted to the political circumstances associated with retrenchment in the public sector and the increased involvement of citizens in environmental issues in their regions. The discussion is further nested in the context of potential political reforms in a stable market democracy, where education is but one strategy that can be bundled with regulations and taxes/subsidies. Additional attention is devoted to explaining the links between environmental education evaluations and several key evaluation theories: utilization-focused evaluation, evaluative capacity building, and program-theory-driven evaluation. The final section of this chapter situates the subsequent chapters of the volume according to their demographic target (youth, adolescent, or adult) as well as their connection to a particular evaluation theory.
Challenges and Opportunities for Evaluating Environmental Education Programs
Annelise Carleton-Hug, Trillium Associates, annelise@trilliumassociates.com
William Hug, California University of Pennsylvania, hug@cup.edu
Environmental education organizations can do more to either institute evaluation or improve the quality of their evaluations. In an effort to help evaluators bridge the gap between the potential of high-quality evaluation systems to improve environmental education and the low level of evaluation in actual practice, we reviewed recent environmental education literature to identify the challenges and opportunities for evaluating environmental education programs. The review identified strategies for confronting the challenges in environmental education evaluation, as well as notable opportunities for increasing the quality of evaluation in environmental education. A discussion of a recent environmental education evaluation provides practical examples of these challenges and opportunities.
Environmental Education Evaluation: Reinterpreting Education as a Strategy for Meeting Mission
Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
Critically considering the role of environmental education in meeting conservation outcomes is increasingly necessary for environmental agencies and organizations. Evaluation can serve a critical role in moving organizations to alignment between educational goals and organizational mission. Moving theory-driven evaluation into mission-based program theory, this chapter examines the ways in which educational goals can and should be linked to conservation outcomes for an agency or organization.
Building Environmental Educators' Evaluation Capacity Through Distance Education
Lynette Fleming, Research, Evaluation & Development Services, fleming@cox.net
Janice Easton, University of Florida, jeaston@ufl.edu
Evaluation capacity building is seldom mentioned in the environmental education literature, but, as demonstrated by the scarcity and poor quality of EE evaluations, it is much needed. This article focuses on an online course, Applied Environmental Education Program Evaluation (AEEPE), which provides nonformal educators with an understanding of how evaluation can be used to improve their EE programs. The authors describe key aspects of the course and strategies for addressing the challenges they face in teaching AEEPE, such as reducing attrition, developing and maintaining a social learning environment online, and improving students' understanding of attribution and logic models. While the course equips environmental educators with the skills necessary to design and implement basic evaluations, there is less certainty that it generates demand for evaluation within organizations and the profession. The authors therefore call on national organizations and associations to help increase the demand for evaluation capacity building (ECB) in the EE community.
Critically Considering the Role of Environmental Education in Meeting Conservation Outcomes
Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
Critically considering the role of environmental education in meeting conservation outcomes is increasingly necessary for environmental agencies and organizations. Evaluation can serve a critical role in moving organizations to alignment between educational goals and organizational mission. Moving theory-driven evaluation into mission-based program theory, this chapter examines the ways in which educational goals can and should be linked to conservation outcomes for an agency or organization.

Session Title: Using Online Learning Circles as a Strategy for Professional Development Among Practicing Evaluators
Panel Session 283 to be held in Wekiwa 10 on Thursday, Nov 12, 10:55 AM to 12:25 PM
Sponsored by the AEA Conference Committee
Chair(s):
Anna Ah Sam, University of Hawaii at Manoa, annaf@hawaii.edu
Abstract: The use of online learning circles is an emerging strategy for enabling participants from diverse locales, cultures, experiences, and perspectives to work together, using their diversity as a resource to achieve deeper understandings. Recently, the American Evaluation Association piloted a one-year project employing online learning circles as an avenue for professional development among mid-career evaluation professionals. Fourteen evaluation professionals from across the United States and Vietnam participated in the learning circle, representing a dynamic, engaged set of learners. The degree to which participants were satisfied with the strategy varied, although the majority of learning circle members identified benefits to participating. The papers presented as part of this panel illustrate the challenges, benefits, and lessons learned from the perspective of the participating evaluators.
The Learning Circle Model for Collaborative Project Work
Margaret Riel, Pepperdine University, margaret.riel@sri.com
Deborah Loesch-Griffin, Turning Point Inc, trnpt@aol.com
The Learning Circle model is described by (1) a set of defining dimensions (diversity, knowledge-building dialogue, project-based work, distributed responsibility, phase structure, and end product); (2) norms that direct circle interaction (trust, respect, flexible thinking, individual responsibility, group reciprocity); and (3) a phase structure that guides the process from getting ready to closing the circle. The presentation of the learning circle model will set the stage for a discussion of the process of implementing this model in many different contexts. The purpose will be to illustrate how the learning circle model is similar to, and different from, other strategies for group work. This introduction will also show the website where materials, resources, and examples for supporting learning circles have been collected. It will end with a short description of the way in which the model has been used in different settings.
Is the Challenge of Participating in a Learning Circle High Tech or High Touch?
Bob Pawloski, University of Nebraska, rwpawloski@unmc.edu
The AEA Learning Circles Fellows project likely appealed to mid-career AEA members for diverse reasons. In one particular case, a part-time practitioner in program evaluation was suddenly hired to a position which was wholly devoted to program evaluation. The Learning Circle was intended to be a convenient means to quickly come up to speed on current trends in practice, as well as explore the possibilities of creating a regional affiliate of AEA. The high tech aspect of participating in online communities did not appear daunting to this AEA Learning Circle fellow, who had considerable experience in cyber environments. However, it was the learning that came from the high "touch" aspect of learning in the social milieu - both face-to-face and online - that posed a larger unintended challenge to at least this participant. The experience appears to align somewhat with Vygotsky's notion that learning is largely a social activity.
Lessons Learned in Planning, Delivering and Improving Learning Circles
Jennifer Dewey, ICF Macro, jennifer.dewey@macrointernational.com
Learning circles are a facilitation strategy applicable in multiple contexts. Their flexibility, while a benefit to adult educators seeking to engage learners in non-traditional ways, can be challenging when it comes to planning them. Several evaluators in the AEA Mid-Career Learning Circles initiative have experienced challenges with participant recruitment, technology, and finding their own time to design and implement a learning circle (LC). The LC model has distinguishing characteristics and phases, but offers less guidance on specific steps for development, execution, and assessment. Getting to Outcomes™ (GTO) is a ten-step process that LC developers can use to plan, implement, and evaluate an LC by assessing needs, defining goals and objectives, identifying best practices and fitting them to meet needs, assessing capability and capacity, making a plan, evaluating process and outcomes, and promoting learning after the conclusion of the LC. An example of a GTO-based LC, and links to empowerment evaluation, will be provided.
