2011

Session Title: Values and Ethics: Challenges in Evaluation Practice
Think Tank Session 753 to be held in Pacific A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Presenter(s):
Michael Morris, University of New Haven, mmorris@newhaven.edu
Discussant(s):
Linda Schrader, Florida State University, lschrader@fsu.edu
Randall Davies, Brigham Young University, randy.davies@byu.edu
Bonnie Stabile, George Mason University, bstabile@gmu.edu
Abstract: This interactive session will explore the connections between values and ethics and how these issues are manifested in evaluation practice. The session especially welcomes new evaluators who are beginning to explore the values dimensions of their work. The presenters will begin with an overview of the "Guiding Principles" (AEA, 2004), followed by a discussion about how values are represented in ethical dilemmas. Participants will then be divided into groups to discuss various evaluation cases that represent different stages of an evaluation study - defining the scope of an evaluation, research design, data collection, and communication of results. Each case will include an ethical dilemma relevant to the particular stage of the evaluation that involves values and valuing. A set of questions will guide participants in small group discussions.

Session Title: Capturing Cooperative Extension Program Contributions Using the Community Capitals Framework
Demonstration Session 754 to be held in Pacific B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Keith Nathaniel, University of California Cooperative Extension, kcnathaniel@ucdavis.edu
Barbara Baker, University of Maine, barbara.baker@maine.edu
Matt Calvert, University of Wisconsin, Extension, matthew.calvert@ces.uwex.edu
Mary Emery, South Dakota State University, mary.emery@sdstate.edu
Abstract: Participants will (1) gain insight into the efforts of a multi-state USDA integrated research and extension project focused on documenting the positive community contributions made by 4-H youth programs, and (2) be introduced to the Community Capitals Framework (CCF) and put this community mapping exercise into practice as a means to demonstrate program impacts. Participants will be introduced to a community mapping activity based on the CCF developed by Cornelia and Jan Flora at Iowa State University. The mapping evaluation process addresses seven capitals: human, natural, social, political, financial, built, and cultural. This demonstration will provide participants with a qualitative program evaluation tool that can be applied to all Extension programs. The tool will benefit individual Cooperative Extension Agents/Educators and community leaders as they plan, deliver, and evaluate programs.

Session Title: Master Teacher Series: Preparing Data for Their Next Big Thing: Statistical Analysis
Demonstration Session 755 to be held in Pacific C on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Dale Berger, Claremont Graduate University, dale.berger@cgu.edu
Abstract: It is well known that statistical analyses and inferences can be quite inaccurate if there is a mismatch between data and statistical models. Yet we all have been tempted to run statistical analyses before we have completed a critical first step -- data screening. To avoid costly errors and embarrassment, careful attention must be given to identifying and dealing with problematic data before we apply our statistical tools for decision making. This demonstration will include a step-by-step application of a checklist of issues to be addressed as we prepare for data analysis. We will discuss diagnostics and remedies, along with principles that can guide our choices. Participants will be given a checklist for data screening and examples of diagnostic and remedial applications with SPSS syntax and output. Examples include univariate, bivariate, and multivariate applications.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluation Advisory Groups: A Missing Literature and Practice
Roundtable Presentation 756 to be held in Conference Room 1 on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Michael Baizerman, University of Minnesota, mbaizerm@umn.edu
Don Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov
Ross Velure Roholt, University of Minnesota, rossvr@umn.edu
Abstract: Evaluation advisory or consult(ation) groups (EAG/ECG) are a common (?) practice in program evaluation, one often stipulated by funders and/or by statute. Yet this practice is not written about in the evaluation literature. What is this practice, and why is there so little written about it in evaluation or in other fields? These two questions structure the presentation. Advisory and consultation groups have several purposes, are brought together in several ways, and are structured variously. All of this will be detailed, with examples from the public and non-profit sectors. Data from a small, multistate survey done for DNPAO, CDC will also be presented. The question of why there is little literature on advisory/consultative groups in evaluation and in other fields, despite the seemingly common use of these advisory structures, will also be discussed. As with managing evaluation and evaluation capacity building, the subjects of our earlier work, these advice structures now seem to be taken for granted as an ordinary practice, one almost invisible and unworthy of scholarship and critique. In contrast, this is not so in the field of environmental work, as will be shown. This example will be used to suggest other hypotheses to account for the relative absence of an evaluation literature on evaluation advisory/consultative groups, and to present suggestions for developing this practical and theoretical knowledge.
Roundtable Rotation II: If I Knew Then What I Know Now! Finding Meaning in a Disastrous Grant Experience
Roundtable Presentation 756 to be held in Conference Room 1 on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Naomi Jeffery Petersen, Central Washington University, njp@cwu.edu
Abstract: In this roundtable, I'd like to discuss my analysis of the roles played by the funding agency, the R1 university, and this external evaluator during a multi-million-dollar, five-year grant that began optimistically and ended, for me, with great disappointment. Once I admitted how naïve I was, and that an evaluation perspective is a minority and toothless view among most non-human-subjects researchers, I found therapeutic insight by anchoring my observations to models such as Trochim's Systems Evaluation Protocol (2010) and to a fairly exhaustive literature review, leading to a more proactive strategy for any future evaluation jobs. This experience has further informed my teaching of assessment and evaluation courses, which we will discuss depending on the interest of the participants.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Case Studies of Evaluation Practice
Roundtable Presentation 757 to be held in Conference Room 12 on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Presenter(s):
David Williams, Brigham Young University, david_williams@byu.edu
Abstract: Although many have theorized about how to conduct formal evaluations that are truthful and that stakeholders will use, we know very little about how evaluation consumers develop their evaluative attitudes and skills, how they carry out the hundreds of informal evaluations they conduct daily, how they fit formal studies into their complex evaluation lives, or how they think and feel about this major dimension of human experience. This presentation shares the results of several case studies of evaluators (formal and informal), detailing how people translate their values into evaluations in their work and personal lives. Patterns that theorists and professional evaluators might consider as they seek to fit their questions and studies into stakeholders' existing evaluation worlds are proposed and will be discussed. Implications will be explored for further research and for how to integrate findings into theories and practices of valuing and evaluation.
Roundtable Rotation II: Valuing Our Strengths: Using Findings From Positive Psychology in Evaluation Plans and Processes
Roundtable Presentation 757 to be held in Conference Room 12 on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Presenter(s):
Kim Perkins, Claremont Graduate University, kim.perkins@cgu.edu
Abstract: Most programs being evaluated have explicitly positive goals. The burgeoning field of positive psychology offers a new evidence base regarding the types of positive constructs and outcomes that organizations and programs usually wish to promote. This roundtable will encapsulate and transmit new information regarding current findings about a variety of constructs of importance to programs and evaluators, including the enhancement of individual and collective strengths, the creation and measurement of engaging and transformative experiences, and the creation of organizational structures that foster creativity and minimize burnout. We will discuss participants' concerns regarding both the measurement of program outcomes and solutions for situations occurring within the organizations we work with.

Session Title: Ethics and Values in International Evaluations
Multipaper Session 758 to be held in Conference Room 13 on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Laura Luo,  China Agricultural University, luopan@cau.edu.cn
The Inter-Dependent Roles of Evaluators and Stakeholders in the Design, Implementation and Utilization Of Evaluation: The Case of Education Programs Management in Tanzania and Ethiopia
Presenter(s):
Fanny Kazadi Nyembwe, Tanzania Monitoring and Evaluation Management Services, nifa5@yahoo.com
Abstract: The paper illustrates how the co-dependent roles of evaluators and stakeholders (program implementers, funders, and beneficiaries) during the design, implementation, and utilization of an evaluation are influenced by values in specific contexts (in two cases, Tanzania and Ethiopia) to bring about intended change. Drawing on experience from education programs in Ethiopia and Tanzania, the paper analyses the significance of the level of collaboration and of the participatory mapping approach during each phase of evaluation, so as to acquire lessons learned and a 'best fit' process in order to influence decision making. The paper goes further to throw light on how the cultural context of the programs helped the two countries identify various potential impacts which might affect the whole outcome of the evaluation.
What are the Values and Assumptions behind International Evaluation?
Presenter(s):
Ross Conner, University of California Irvine, rfconner@uci.edu
Alexey Kuzmin, Process Consulting Company, alexey@processconsulting.ru
Abstract: Around the world, evaluators talk of 'international evaluation' and sometimes subgroups within evaluation organizations have developed to foster it, for example, AEA's International and Cross Cultural Evaluation Topical Interest Group. This raises the question, 'What exactly is 'international evaluation' and how does it differ from 'evaluation' generally?' This presentation will present answers to this question generated at a think-tank session at last year's AEA meeting. The answers highlight core values and assumptions behind international evaluation. One core value of 'international evaluation' can be described as a 'state of mind' about how evaluation is approached. The state is distinguished by a respect for and active search for diverse, cross-national perspectives. An assumption follows from this value: that 'international evaluation' may involve evaluators from different countries working together on a one-nation-focused project, or evaluators from the same country working on a cross-nations project. This presentation will explore and expand on these ideas.
Ethical Dilemma in Upholding Evaluation Values: Lessons and Reflections From the Field
Presenter(s):
Hannah Kamau, Pact Inc, hkamau@pactworld.org
Alex Rotich, Pact Inc, arotich@pactworld.org
Abstract: Evaluators have a responsibility to uphold values to a wide range of stakeholders, such as project beneficiaries, funding agencies, implementers, government, and the evaluation discipline. These values concern the rights and dignity of subjects, accountability and transparency to the donor and the government, contribution to the evaluation discipline, and contribution to society as a whole. However, upholding these values calls for a delicate balance between maintaining the value system, conducting the evaluation in a complex environment, and meeting the client's needs. This paper draws on the lessons, challenges, and opportunities encountered in implementing evaluations in a number of African and Euro-Asian countries implementing development programs.

Session Title: Involving Stakeholders in Evaluation: Alternative Views
Multipaper Session 759 to be held in Conference Room 14 on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
SeriaShia Chatters,  University of South Florida, schatter@mail.usf.edu
Staff Members as Stakeholders in Participatory Evaluation: Three Case Studies
Presenter(s):
Nicole Ardoin, Stanford University, nmardoin@stanford.edu
Kathayoon Khalil, Stanford University, kakhalil@stanford.edu
Abstract: Scholars have suggested an important role for engaging staff members in evaluation processes, from inception and implementation to data analysis and presentation (Powell et al. 2008). We will present research on three case studies with a range of staff engagement in evaluation, opportunities and challenges with staff engagement in evaluation, and potential directions for capacity building in this area. The first case is the Oregon Zoo's ZooCamp program, whose more than 50 staff members serve over 4,000 Pre-K to middle-school-aged children. Staff participate in evaluation by developing learning objectives and establishing numeric and qualitative measurements of student learning. The second and third case studies are residential environmental education centers—Great Smoky Mountains Institute at Tremont and NorthBay Adventure Center—where staff have participated in developing, implementing, and sustaining evaluation over nearly a decade. We'll present findings from research into the effectiveness of these staff engagement processes in evaluation.
The Experience Sampling Method (ESM): A Tool for Assessing Stakeholder and Program Values
Presenter(s):
Cristina Tangonan, Claremont Graduate University, cristina.tangonan@gmail.com
Nicole Porter, Claremont Graduate University, nicole.porter@cgu.edu
Abstract: This paper will consider the utility of employing the Experience Sampling Method (ESM) in program evaluations, especially those that are participatory or collaborative in nature. First developed by Mihaly Csikszentmihalyi, the ESM is a technique used to gather real-time data pertaining to the subjective experiences of individuals and the thoughts, feelings, and emotions linked to those experiences. Selected past research has utilized the ESM to further understand adolescents' experiences in educational settings, employee work satisfaction in organizations, and the phenomenon of flow. Few evaluations, however, utilize this cost-effective method. This paper will explore the ESM's benefits at multiple stages of an evaluation and discuss the advantages of using the ESM to promote stakeholder involvement in evaluation activities. Finally, this paper will explore how the ESM can provide insight into the underlying values and complexities of an evaluand that may not otherwise be addressed.
Valuing the Role of Adults as Allies: A Pilot Project to Understand the Roles of Adults in Youth Participatory Research and Evaluation Efforts
Presenter(s):
Mariah Kornbluh, Michigan State University, mkornblu@gmail.com
Katie Richards-Schuster, University of Michigan, kers@umich.edu
Jennifer Juras, Youth Leadership Institute, jenjuras@gmail.com
Abstract: This paper presentation focuses on the initial findings from a pilot study designed to explore the role of adult allies in youth participatory research and evaluation efforts. The study surveys self-identified adult allies about their past experiences, their perspectives on the work, and their understanding of what facilitates and supports their roles as allies to young people. The study employs two different methods, an online survey and qualitative interviews. The survey and interview protocols were designed in collaboration with a community-based partner. This presentation will discuss the conceptual framework for the project, the process of collaboratively developing the survey and interview protocols, and the pilot findings of the project to date.

Session Title: Starting Cost-Inclusive Evaluation
Demonstration Session 760 to be held in Avila A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Presenter(s):
Brian Yates, American University, brian.yates@me.com
Abstract: Participants learn the basics of four common, alternative strategies for modeling, evaluating, managing, and systematically improving key relationships between resources consumed and outcomes produced in health and human services: cost analysis, cost-effectiveness analysis, cost-benefit analysis, and cost-utility analysis. Quantitative and qualitative understanding of what occurs between the "costs in" and "outcomes out" is further enhanced by a fifth model that distinguishes between performance of and participation in program activities, and between desired and actual change in the biopsychosocial processes responsible for program outcomes. Examples of each step in understanding and improving relationships between resources used, procedures implemented, biopsychosocial processes altered or instilled, and outcomes achieved are drawn from evaluation research in health, mental health, and substance abuse treatment. In addition, a clinical trial is reanalyzed to illustrate how cost-inclusive evaluation can enhance the ability of applied research to help us systematically understand and manage human service systems.

Session Title: Systems in Evaluation TIG Business Meeting and Think Tank: International Perspectives on Systems Evaluation
Business Meeting Session 761 to be held in Avila B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG
TIG Leader(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com
Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
Mary McEathron, University of Minnesota, mceat001@umn.edu
Presenter(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com
Abstract: As the Systems in Evaluation TIG continues to grow and bring in new members, the TIG is becoming more international, welcoming members from across the globe, including Asia (Japan, South Korea, Indonesia); Africa (South Africa, Tanzania, and Uganda); Central Asia (Azerbaijan, India, and Pakistan); Europe (Austria, France, Germany, Greece, Italy, the Netherlands, Spain, Sweden, Switzerland, and the UK); the Middle East (Egypt and Saudi Arabia); North America (Canada and the US); South America (Brazil, Peru, and Venezuela); and the South Pacific (Australia and New Zealand). This diversity has enriched the TIG through lively workshop discussions, enlightening panel presentations, and culturally enriched evaluation music! We have assembled a panel of evaluators from New Zealand, South Africa, Brazil, the Netherlands, North America, and the UK to talk about their systems evaluation approaches and perspectives. We invite you to join us for what will be a fascinating discussion.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Impact of Continuous Assessment Practices on Secondary School Students: The Nigerian Perspective
Roundtable Presentation 762 to be held in Balboa A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Anthonia Nnenna Ofong-Utoh, National Examinations Council, Nigeria, toniautoh@yahoo.com
Okpala Promise Nwachukwu, National Examinations Council, Nigeria, promiseokpala@yahoo.com
Abstract: Continuous assessment practice does not exist in a vacuum. The concept of continuous assessment is perhaps one of the most important conceptual issues of the present-day 6-3-3-4 educational system in Nigeria. It is in light of this, perhaps, that policymakers in the Nigerian educational sector acknowledge the need for effective integration of valid and reliable assessment procedures into the nation's formal school system. This can be seen from the emphasis given to continuous assessment (CASS) in the National Policy on Education, Federal Republic of Nigeria (2004), as a procedure that should permeate the country's educational system to ensure that the Nigerian child really learns in school instead of mainly using the school as an exam-preparation and exam-writing venue.
Roundtable Rotation II: An Evaluation of a Local Assessment Program Using an Online Survey Tool
Roundtable Presentation 762 to be held in Balboa A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Jinhai Zhang, Dallas Independent School District, jinhaizhang@hotmail.com
Abstract: A benchmark assessment program began in 2002 in a local school district in Texas. The program, which includes Reading, Mathematics, Science, Social Studies, and English (ESL), is designed to monitor and improve teachers' instruction and students' academic progress throughout the school year. An online survey was conducted in a large urban school district in Texas. The purpose of this evaluation was to examine how the benchmark assessment influenced teachers' classroom instruction and students' learning, and how satisfied teachers were with the assessment program. The results indicated that a majority of teachers believed the benchmark assessment was helpful in analyzing students' strengths and weaknesses, improving teachers' classroom instruction, and providing effective feedback on how they taught the curricula. Qualitative data analysis was included in the evaluation.

Session Title: Evaluating a 16-month Emerging Community Health Leaders Program, Ladder to Leadership: Strategies and Challenges for Measuring Long-term Impact
Panel Session 763 to be held in Balboa C on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Heather Champion, Center for Creative Leadership, championh@ccl.org
Abstract: Ladder to Leadership is a 16-month leadership development program aimed at increasing leadership skills and professional networks for emerging, non-profit, community health leaders from eight communities across the US. Each cohort of up to 30 Fellows participates in three multi-day, face-to-face leadership development sessions; action learning projects; professional coaching; and developmental goal setting. A comprehensive longitudinal evaluation was designed to measure the impact of the program on the Fellows, their organizations, and their communities. This session addresses the unique challenges and lessons learned from the methods used to evaluate this program, including: 1) the use of social network analysis to measure changes over time in networking and collaboration among Fellows and their organizations, 2) a comparison of methods used to measure impact of the program (i.e., a 360° rater assessment versus self- and boss-reported impact surveys), and 3) measuring the impact of the action learning component of this program.
Measuring Long-term Impact of an Emerging Community Health Leaders Program: A Comparison of Evaluation Methods
Heather Champion, Center for Creative Leadership, championh@ccl.org
Tracy Patterson, Center for Creative Leadership, pattersont@ccl.org
Kimberly Fredericks, The Sage Colleges, fredek1@sage.edu
Julia Jackson-Newsom, University of North Carolina, Greensboro, j_jackso@uncg.edu
Determining the best method or methods for measuring the long-term impact of an emerging community health leaders program on individuals, their organizations, and their communities presents a number of challenges. The Ladder to Leadership program evaluation employed multiple measures of impact, including a 360° multi-rater, now-then assessment (Reflections), customized impact surveys for both participants and their bosses, and success case method interviews of participants. These methods were employed to gain input from multiple perspectives, to maximize our ability to measure program outcomes both quantitatively and qualitatively, and to measure the sustainability of impact over time. A comparison of the benefits and limitations of each of these methods, and of how they are used together to comprehensively evaluate the LTL program, is discussed.
Examining Changes in Social Networks Among Emerging Leaders in the Ladder to Leadership Program
Kimberly Fredericks, The Sage Colleges, fredek1@sage.edu
Tracy Patterson, Center for Creative Leadership, pattersont@ccl.org
Heather Champion, Center for Creative Leadership, championh@ccl.org
Julia Jackson-Newsom, University of North Carolina, Greensboro, j_jackso@uncg.edu
Ladder to Leadership is a national program of the Robert Wood Johnson Foundation, in collaboration with the Center for Creative Leadership. For this project, we are investigating changes in networking and collaboration among cohorts of Fellows from eight different communities across the US who were followed for three years. Social network analysis was used to assess relationships among program participants before and after the program. These data have been used to study changes over time in the networks in each of these communities, utilizing actor-oriented stochastic models. Findings suggest that there are proximity, gender, balance, transitivity, and popularity effects within the networks. Longitudinal social network analysis allows for understanding the determinants of tie formation and social support, which can in turn inform programmatic changes to enhance and maintain relationships. Although the challenge of maintaining a high response rate over time is problematic, the potential applications for evaluation are widespread.
Evaluating the Value and Impact of Action Learning as Part of a Leadership Development Initiative for Emerging Community Health Leaders
Tracy Patterson, Center for Creative Leadership, pattersont@ccl.org
Heather Champion, Center for Creative Leadership, championh@ccl.org
Kimberly Fredericks, The Sage Colleges, fredek1@sage.edu
Julia Jackson-Newsom, University of North Carolina, Greensboro, j_jackso@uncg.edu
Over the course of the 16-month, RWJF-funded Ladder to Leadership program, fellows work in Action Learning teams to design and implement projects that address community-based leadership challenges. This critical program component is designed to help fellows apply new and enhanced leadership skills presented in the various components of the program's curriculum to a health or health system issue of community significance. Each of the teams of 5-6 fellows receives ongoing support from a community sponsor and a learning coach. The Ladder to Leadership evaluation captures data on the value and impact of the action learning component through surveys and interviews of fellows during and after the initiative, surveys of action learning coaches and sponsors, and analyses of the team deliverables. This paper draws on this experience to identify and discuss strategies, challenges, and lessons learned for evaluating process-based learning activities, learning transfer, and individual components of multi-method development initiatives.

Session Title: Implementing Evaluations: Strategies for Success
Skill-Building Workshop 764 to be held in Capistrano A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Joanne Abed, Battelle Centers for Public Health Research and Evaluation, abedj@battelle.org
Sheri Disler, Centers for Disease Control and Prevention, sdisler@cdc.gov
Carlyn Orians, Battelle Centers for Public Health Research and Evaluation, orians@battelle.org
Robin Shrestha-Kuwahara, Centers for Disease Control and Prevention, rbk5@cdc.gov
Linda Winges, Battelle Centers for Public Health Research and Evaluation, winges@battelle.org
Shyanika Rose, Battelle Centers for Public Health Research and Evaluation, rosesw@battelle.org
Abstract: There's no such thing as a "perfect" evaluation. Most evaluations are fraught with challenges relating to such aspects of evaluation as context, logistics, data collection, data analysis, and dissemination of findings. Yet a thoughtful evaluator can navigate these challenges successfully, preventing some and resolving others as they arise. Furthermore, a relatively small number of sound evaluation practices (or "super-strategies") can help address multiple challenges simultaneously. Workshop participants will work together to identify common challenges that can hinder the smooth conduct of an evaluation. They will then develop strategies to address those challenges, during both planning and implementation of an evaluation. Finally, participants will surface broader "super-strategies" that can be incorporated into evaluation practice. In conclusion, participants will receive materials on strategies for success in implementing evaluations developed by the Centers for Disease Control and Prevention's Air Pollution and Respiratory Health Branch, its grantees, and contractor staff from Battelle.

Session Title: Values and Ethical Issues in Internal Evaluation
Multipaper Session 765 to be held in Capistrano B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Internal Evaluation TIG
Chair(s):
Stephanie Dopson,  Centers for Disease Control and Prevention, sld9@cdc.gov
Discussant(s):
Eric Barela,  Partners in School Innovation, ebarela@partnersinschools.org
Predicament and Promise: The Internal Evaluator as Ethical Leader
Presenter(s):
Francis J Schweigert, Metropolitan State University, francis.schweigert@metrostate.edu
Abstract: Internal evaluators can make a significant contribution to ethics within their organizations: there are risks but also great potential. I offer here an analysis of the predicament and promise of the internal evaluator, highlighting similarities between ethics and evaluation in order to clarify and strengthen the role of internal evaluators in the ethics of evaluation practice and the ethics of the organizations they serve. In this paper, I first distinguish ethics from other values-related functions, namely, morals, markets, culture, and law. I then show how evaluation parallels ethics in examining questions of value under standards of public scrutiny. As the person specially commissioned within an organization to systematically question and listen on matters of value, the internal evaluator represents both external standards of inquiry and internal loyalties to organizational mission and membership. It can be a dangerous yet powerful position for advancing standards of value—in both evaluation and ethics.
Dealing with Ethically Challenging Situations: The Value of Incorporating Democratic Process within an Evaluation
Presenter(s):
Biljana Zuvela, Canadian National Institute for the Blind, biljana.zuvela@cnib.ca
Abstract: This session will provide a retrospective analysis of an internal evaluation of a partnership program between a vision rehabilitation organization and an independent ophthalmologist, a program marked by highly controversial issues and stakeholders' conflicting values and views. The focus of the presentation will be on what we did, or failed to do, in trying to incorporate democratic process (inclusion, dialogue, deliberation) within the evaluation when we encountered an ethically and politically challenging situation. We hope that by making our example open for discussion, our story will contribute to the impressive work in evaluation that supports inclusion, dialogue, and deliberation, and will encourage evaluators to make ethical decisions when they encounter difficult situations that require a strong grounding in ethics and evaluation professionalism.

Session Title: Evaluation Challenges in a Multi-State Program for Jail Diversion and Trauma Recovery for Veterans
Multipaper Session 766 to be held in Carmel on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Amy Salomon, Advocates for Human Potential Inc, asalomon@ahpnet.com
Discussant(s):
Henry Steadman, Policy Research Associates Inc, hsteadman@prainc.com
Abstract: This session examines evaluation challenges in mental health and substance abuse treatment evaluation embedded in the world of criminal justice. State-level evaluators discuss process evaluation challenges and solutions stemming from participation in a national multi-state project aimed at diverting people entering the criminal justice system and offering trauma-informed care, with arrested veterans suffering from trauma symptoms as the primary target population. Funded by the Substance Abuse and Mental Health Services Administration, Center for Mental Health Services, the Jail Diversion and Trauma Recovery Program-Priority to Veterans grant program calls for each state to begin with one or more pilot sites to be expanded statewide and sustained past the five-year period of the grant. This context for mental health service evaluation presents issues and opportunities for evaluators, and the four papers address the challenges that have emerged as most pressing in the first three years of the project.
Hitting a Moving Target: Strategies for Overcoming Program Referral Challenges
Susan Pickett, University of Illinois, Chicago, pickett@psych.uic.edu
Debra Ferguson, Illinois Department of Human Services, debra.ferguson@illinois.gov
As one of the SAMHSA-funded Jail Diversion & Trauma Recovery projects, the Illinois Veterans Reintegration Initiative (VRI) provides integrated mental health, housing and substance use treatment services to veterans with trauma histories who are involved in the criminal justice system. VRI takes place in two sites: Cook County and Rock Island County. Identification of program referrals-VRI participants-involves various criminal justice, mental health, and veterans services at each site. Program referral challenges vary per site as well: for example, at one site, sources we originally counted on to help identify potential participants have experienced budget constraints and administrative changes that have limited their referrals to the VRI program. This session discusses issues related to identifying eligible participants across sites and systems; developing strategies to deal with unanticipated referral challenges; and working with program partners to maximize referral sources.
The Heterogeneity of Intercept 2: Implications for Diversion
Annette Christy, University of South Florida, achristy@fmhi.usf.edu
Colleen Clark, University of South Florida, cclark8@usf.edu
Sarah Rynearson-Moody, University of South Florida, srynearson@usf.edu
Autumn Frei, University of South Florida, afrei@usf.edu
The Sequential Intercept Model is widely used in studies related to criminal justice, including the 13 states with funding via the Justice Diversion Trauma Recovery (JDTR) initiative. Florida has chosen to focus on intercept 2 (initial detention/court hearing) to identify veterans for their JDTR pilot. The lack of county sheriff's office willingness to identify veterans at booking has meant that veterans must be identified at multiple points within intercept 2, including daily magistrate court, public defender's office, violation of probation court, and VA veteran justice outreach referrals. There is heterogeneity of client needs, recruitment issues, and evaluator demands across these intercept 2 points. The nature of the diversion varies dependent on the point of recruitment in intercept 2. There is also variability in how trauma should be assessed at different points in intercept 2. These evaluation design and practice issues WITHIN an intercept are the focus of the presentation.
Seeking Safety and Veteran Outcomes: Issues of Program Fidelity and Impact on Outcomes Within a Broader Service Array
Stacey Manser, University of Texas at Austin, stacey.manser@mail.utexas.edu
Sam Shore, Texas Mental Health Transformation and Behavioral Health Operations, sam.shore@dshs.state.tx.us
Aaron Diaz, Center for Health Care Services, adiaz@chcsbc.org
The federally funded Veterans Jail Diversion and Trauma Recovery Project in Texas utilizes the Seeking Safety curriculum which has been proven effective to address the trauma/PTSD and substance abuse issues of veterans (Najavits, 1992). It was designed for flexible use, consisting of 25 topics that can be conducted in any order and provides client handouts and guidance for clinicians. Trained staff provide the program topics as part of a participant's individual treatment plan which also includes other individual/group therapy for substance abuse or mental health issues, case management, sober living, and linkage to other service needs. Measuring Seeking Safety fidelity as well as topic dosage for each participant in light of other federal cross-site evaluation requirements will be discussed. Local evaluation design and analysis must also consider not only the curriculum effects but the wide array of services received as predictors of outcomes.
Fidelity Issues for the Trauma, Addictions, Mental Health, and Recovery Model in Rhode Island
John Stevenson, University of Rhode Island, jsteve@uri.edu
Karen Friend, Pacific Institute for Research and Evaluation, kfriend@pire.org
Brenda Amodei, Pacific Institute for Research and Evaluation, bamodei@pire.org
Jordan Braciszewski, Pacific Institute for Research and Evaluation, jbrasciszewski@pire.org
Paul Florin, University of Rhode Island, pflorin@mail.uri.edu
Promoting recovery from trauma is a central focus for the Jail Diversion and Trauma Recovery Program in RI. The TAMAR model was selected by clinical staff leaders with the aspiration that it could become an evidence-based psycho-educational support intervention for veterans following arrest for criminal activities. An earlier TAMAR "manual" was developed for a different population, and this session addresses questions regarding the use of fidelity assessment by evaluators in a formative context, with an evolving intervention targeting varied client groups. What approaches to defining and measuring fidelity can help to formalize the intervention and support clinical staff trying to develop a training model for extending the application of the model across the state? Can session activities in individual modules be linked to clinician and client feedback methods for formative application? Distinct from a controlled trials context, this service-oriented project calls for its own solutions.

Session Title: Issues in Generating Evaluative Data
Multipaper Session 767 to be held in Coronado on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Baseline Data and Research Design
Presenter(s):
Ximena Burgin, Northern Illinois University, xrecald1@niu.edu
Marcella Zipp, Northern Illinois University, mreca@niu.edu
Abstract: The planning stage of a grant proposal includes understanding the problem to be studied through current data. Current data will indicate the problem(s) to be addressed, the intervention(s), the methodological approaches, and the stakeholders of the issue(s). Thus, it is important to identify meaningful baseline data to demonstrate achievement of outcomes through the project. There are a variety of ways to obtain baseline data, such as restricted databases, public databases, commercial databases, and one's own data collection. The researcher should know the validity and reliability of the instruments from which the databases gathered their information. The baseline information will be meaningful if the instrument measures the desired domains and its content is appropriate, correct, meaningful, and useful for the specific inferences made from the data (validity). Moreover, consistency of scores and repeatable results from one administration of the instrument to another (reliability) should be considered as part of the evaluation design.
Beyond 'Agree' and 'Somewhat Disagree': Using Q Methodology to Reveal Values and Opinions of Evaluation Participants
Presenter(s):
Ricardo Gomez, National Collegiate Inventors and Innovators Alliance, rgomez@nciia.org
Angela Shartrand, National Collegiate Inventors and Innovators Alliance, ashartrand@nciia.org
Abstract: In this paper we introduce Q methodology as an alternative to survey-based research methods. Whereas the typical outcome of a survey-based study is a descriptive statistical analysis of pre-specified independent categories deemed relevant by the researcher(s), the outcome of a Q study is a more authentic set of factors that capture people's attitudes and perspectives about an issue. Q also has the capacity to reveal underlying or unrecognized social discourses that can represent other agendas connected to an issue. Q methodology statistically identifies different points of view on a given topic based on how individuals sort a set of statements about that topic. Because people are required to rank through a sorting procedure, they must make choices which reflect their underlying values. The sorted statements are then statistically analyzed and the resulting factors are qualitatively interpreted, thus bridging the gap between qualitative and quantitative inquiry.
An Evaluation Framework for a Smart Parking System
Presenter(s):
Tayo Fabusuyi, Numeritics, tayo.fabusuyi@numeritics.com
Victoria Hill, Numeritics, tori.hill@numeritics.com
Robert Hampshire, Carnegie Mellon University, hamp@cmu.edu
Abstract: We present an evaluation of ParkPGH, a smart parking system that provides real-time information on the availability of parking spaces within the Pittsburgh Cultural District. The initiative is in response to increased demand for parking spaces and the desire to improve parking experiences through the provision of real-time information on parking availability. Primary data, obtained through both in-person and online surveys of patrons of the Pittsburgh Cultural District events, was utilized for the baseline data analysis, process evaluation and outcome evaluation phases. Secondary data that utilized count data obtained from website use logs was employed for the output evaluation phase. The contributions of the evaluation framework are the insights it provides on how the key challenges created by the unique environment within which the system was deployed were addressed and how the framework was used to track respondents longitudinally using a binary system that identifies distinct cohorts of respondents.

Session Title: Using Evaluation to Inform Public Health Policy: A Sodium Reduction Perspective
Panel Session 768 to be held in El Capitan A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Rashon Lane, Centers for Disease Control and Prevention, rlane@cdc.gov
Abstract: This session will show the value of using evaluation to inform and support public health policy. Throughout this session presenters will highlight how they use various evaluation methods to expand what is known about public health policy interventions in the area of sodium reduction. This includes a national perspective on using evaluation to build practice based evidence, employing case study methodology at a community level, and a unique county level perspective on the use of evaluation data to inform decision makers on policies that impact county residents. The critical role evaluation plays in supporting sodium reduction strategies in the United States will be highlighted by all of the presenters.
Informing Public Health Policy: A National Perspective on the Role of Evaluation
Rashon Lane, Centers for Disease Control and Prevention, rlane@cdc.gov
Jan Losby, Centers for Disease Control and Prevention, kfy9@cdc.gov
Kristy Mugavero, Centers for Disease Control and Prevention, klynchmugavero@cdc.gov
The Division for Heart Disease and Stroke Prevention (DHDSP) at the Centers for Disease Control and Prevention utilizes evaluation to build practice based evidence when expanding into new areas of program planning and policy development. The presenters will describe how a portfolio of evaluation approaches can be used to build practice based evidence. DHDSP uses evaluation tools (e.g., logic models) and methods (e.g., case study, benchmarking, surveys) to inform future public health policies in the area of sodium reduction. DHDSP presenters will share how evaluators and policy staff work together with internal and external stakeholders to enhance evaluation planning. This session will demonstrate the critical role evaluation plays to inform public health policy at the national level.
Using Case Study Methods to Evaluate Policy, Systems and Environmental Changes and Build the Evidence Base for Community Sodium Reduction Efforts
Heather Kane, RTI International, hkane@rti.org
LaShawn Curtis, RTI International, lcurtis@rti.org
Jim Hersey, RTI International, hersey@rti.org
Barri Burrus, RTI International, mcf@rti.org
Marjorie Margolis, RTI International, mmargolis@rti.org
Implementing policy, systems, and environmental (PSE) changes has become an important strategy for improving public health, but little guidance for evaluation in this area exists. Evaluation is needed to show the impact and value of these approaches. This presentation will describe how the evaluation team will employ in-depth case studies and cross-site analyses to build the evidence base for sodium reduction PSEs. The in-depth, individual case studies will examine PSE development, adoption, and implementation among Sodium Reduction in Communities Program awardees and will identify successes, best practices, and lessons learned. The team will also conduct a cross-site analysis to examine common processes or patterns. Assessing these meaningful commonalities across sites can increase confidence in the results by contributing to "user-generalizability," a construct similar to external validity. The presentation will conclude with a discussion of the promises and challenges of employing case study methods to assess PSE interventions.
Translating Evidence to Practice: The Use of Data to Drive Local Policy - A Los Angeles County Perspective
Tony Kuo, Los Angeles County Department of Public Health, tkuo@ph.lacounty.gov
Patricia Cummings, Los Angeles County Department of Public Health, pcummings@ph.lacounty.gov
Gloria Kim, Los Angeles County Department of Public Health, glkim@ph.lacounty.gov
Brenda Robles, Los Angeles County Department of Public Health, brrobles@ph.lacounty.gov
Margaret Shih, Los Angeles County Department of Public Health, mshih@ph.lacounty.gov
The Los Angeles County Department of Public Health routinely conducts research and uses program evaluation as a transformative means for translating evidence into practice, especially for public policy development. Strategic use of data through innovative approaches such as the health impact assessment (HIA) can be highly effective in offering clarity on complex policy issues and in convincing decision-makers to adopt policies that have salutary health impacts. The presenters will share the Los Angeles County experience on how scientific data have been used in the past and are presently being used to drive local policy. Case examples from the Department's portfolio, including the use of the HIA to inform menu labeling legislation, the application of earned media strategies (e.g., public release of surveillance data) to raise public awareness of key policy issues, and the recent effort to reduce sodium content through food procurement policy research, will be described in detail.

Session Title: Evaluation Systems for Complex International Programs: Fostering Learning and Innovation While Providing Accountability
Multipaper Session 769 to be held in El Capitan B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Douglas Horton, Independent Consultant, d.horton@mac.com
Discussant(s):
Jane Maland Cady, McKnight Foundation, jmalandcady@mcknight.org
Abstract: International research and development programs are increasingly complex, with multiple "northern" and "southern" partners working in partnership in highly dynamic settings to achieve such broad goals as poverty reduction, food security, and environmental sustainability. Those who fund and manage such programs look to evaluation to provide evidence of results and impact as well as lessons and insights for improving program design and implementation. The presentations in this session show how three international programs are developing evaluation systems to address these challenges. These are the Collaborative Crop Research Program of the McKnight Foundation, the Andean Change Alliance, and the Initiative for Conservation in the Andean Amazon.
Adaptive Action: Simple Evaluation for a Complex Program
Glenda Eoyang, Human Systems Dynamics Institute, geoyang@hsdinstitute.org
The McKnight Foundation's Collaborative Crop Research Program supports place-based research and development to improve nutrition and livelihoods for people in highly vulnerable locales. The program is complex: 65 diverse projects; four regions; a community of practice in each region; partnerships between Northern and Southern scientists; a focus on change in agricultural and institutional systems; a commitment to capacity development; a partnership with the Bill and Melinda Gates Foundation; three languages; and multiple social and biophysical science disciplines. While the evaluation challenge is complex, the design had to be simple in concept and implementation. The evaluation design involves an iterative, three-step process that supports shared learning and aligned action; integrated monitoring, evaluation, and planning; and local, regional, and program-wide capacity development. This paper summarizes the evaluation design and outlines implementation challenges and approaches.
Using Participatory Impact Pathway Analysis to Evaluate Participatory Methods for Rural Innovation and Social Inclusion
Emma Rotondo, PREVAL, rotondoemma@yahoo.com.ar
Rodrigo Paz, Institute for Social and Economic Studies, rodrigopaz@supernet.com
Graham Thiele, International Potato Center, g.thiele@cgiar.org
The Andean Change Alliance is a collaborative regional program operating in Bolivia, Colombia, Ecuador, and Peru that seeks to improve the capacity of national agricultural research systems to promote pro-poor innovation and inclusion in development markets and services, promote collective learning and knowledge sharing with participatory methods, and influence policy formulation with evidence accumulated in an "Arguments Bank." This paper discusses the main challenges that evaluators face in designing and implementing an evaluation system in a complex program like this one. It describes the main features of the evaluation methodology developed, which is based on "participatory impact pathway analysis". It assesses the strengths and weaknesses of the evaluation methodology and formulates lessons for improving the use of participatory impact pathways analysis in different kinds of development programs.
How a "Light" M&E System Worked for a Complex Environmental Program: The Initiative for Conservation in the Andean Amazon Experience
Brenda Bucheli, Initiative for the Conservation, Andean Amazon, brenda_bucheli@yahoo.es
The Initiative for Conservation in the Andean Amazon (ICAA) aims to improve stewardship of the Amazon Basin's globally and nationally important biological diversity and environmental services. This five-year program is supported by US $35 million from USAID and $10 million in counterpart funding. The initiative is implemented by 21 implementing partners organized under four field-based consortia and an ICAA Support Unit (ISU). Work of the consortia is guided by a strategic framework and six shared indicators related to capacity building, policy dialogue and implementation, and leveraging of new resources. A mid-term assessment called for more evidence on the impacts of ICAA. This paper discusses challenges to providing such evidence and summarizes how the shared indicator evaluation system was complemented to address these challenges and meet ICAA's needs for both learning and accountability.

Session Title: Comparing Feminist, Human Rights and Gender Perspectives for Evaluating the Impacts of Programs and Policies on Women
Think Tank Session 771 to be held in Huntington A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Feminist Issues in Evaluation TIG
Presenter(s):
Denice Cassaro, Cornell University, dac11@cornell.edu
Discussant(s):
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
Denise Seigart, Stevenson University, dseigart@stevenson.edu
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Kathryn Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
Abstract: The Think Tank is designed as a follow-up to the proposed Professional Development Workshop on "The tools and techniques of feminist and gender responsive evaluation," providing more time to explore issues raised in the workshop. Gender analysis uses different approaches for collecting information on differential access to services, political participation, control of resources, and decision making based on gender. Feminist evaluation incorporates a concern for social justice and personal values, and explores the social, cultural, and political factors that underpin inequality based on gender. Meanwhile, policies about incorporating gender awareness are becoming increasingly prominent in the United Nations and other organizations. Participants will divide into groups to discuss the changing gender policy landscape and participant experiences and personal values. It is anticipated that international evaluators receiving scholarships through the AEA-UN Women cooperative program will attend and share their perspectives.

Session Title: Valuing Evaluation in Government: A View From Three Neighboring Countries
Panel Session 772 to be held in Huntington B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Robert Lahey, REL Solutions Inc, relahey@rogers.com
Abstract: This session will provide an overview of the practice of Evaluation in the federal public sector in each of the USA, Canada, and Mexico - how it is perceived, how it gets carried out, and how it supports the public good. The development and evolution of performance monitoring and Evaluation has not been static in any of the three countries, typically changing as a result of a number of possible factors - a change in government, a public sector reform, new emphasis on possible uses, etc. The session, in highlighting the key features of the model used in each country, will show differences and similarities in the way each government uses Evaluation in carrying out the business of government. In this context, how each government values Evaluation will start to become apparent.
The Monitoring and Evaluation System of Mexico
Agustin Caso Raphael, Secretaria de Hacienda y Credito Publico, agustin_caso@hacienda.gob.mx
In elaborating the Mexican model for performance monitoring and evaluation (M&E), the presentation will make some comparisons with M&E as practiced in the USA and Canada. A key focus will be put on how M&E information is being used in government - in planning, programming and budgeting - and how this has evolved as the M&E system has matured. Emphasis will also be given to how evaluation plays out in the interaction of the three levels of government in Mexico. Critical structural aspects aimed at ensuring transparency in information flows and the delivery of timely and useful M&E information will also be discussed. With a Presidential election set for 2012 in Mexico, there will be some looking ahead to the future of the M&E system.
The Canadian Monitoring and Evaluation System: Lessons Learned From Thirty Years of Development
Robert Lahey, REL Solutions Inc., relahey@rogers.com
This presentation will highlight the key elements of the Canadian Monitoring and Evaluation (M&E) system, with a focus on both the structure of the system and the way that it is being used in government. An important element of the Canadian experience is the way that the model has evolved over the past thirty-plus years. Some key elements of the model will be described: the 'drivers' that have been put in place and that serve to generate demand for M&E information; various checks and balances to maintain the independence/neutrality of the Evaluator, without impeding the Evaluator's role in knowledge generation and dissemination; the emphasis placed on 'transparency' as a key element of the enabling environment; and the various capacity building efforts aimed at 'professionalizing' Evaluators. Efforts to bring Evaluation information closer to expenditure management in government, and some of the challenges for the future, will also be addressed.
Can Program Evaluation be More Valued in the United States?
John Pfeiffer, Office of Management and Budget, john_r._pfeiffer@omb.eop.gov
The art and science of program evaluation has a long and distinguished history in the United States. Thus, its relatively limited use as a guide to program management and funding decisions by policy makers seems somewhat surprising. This presentation by a long-time career program examiner at the US Office of Management and Budget (OMB) will explore the reasons why program evaluation has been less influential than might have been expected as a decision-making resource, with particular attention to the differing needs and values of policy officials and evaluators and the institutions within which they work. The presentation also will explain recent steps being undertaken by OMB under the Obama Administration to improve the quality and usefulness of program evaluations and will outline ways to make the results of program evaluation more widely felt.

Session Title: Federal Government Evaluations: Case Studies
Multipaper Session 773 to be held in Huntington C on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Stanley Capela, HeartShare Human Services, stan.capela@heartshare.org
Methodological Issues Involved in Evaluating Large Scale Regional Economic Development Initiatives: Workforce Innovation in Regional Economic Development (WIRED) as a Case Study
Presenter(s):
Kevin Hollenbeck, Upjohn Institute, hollenbeck@upjohn.org
Linda Toms Barker, Berkeley Policy Associates, linda@bpacal.com
Jeff Kaplow, Public Policy Associates, jkaplow@publicpolicy.com
Abstract: This paper recounts our organizations' experiences in addressing key methodological issues in evaluating the Workforce Innovation in Regional Economic Development (WIRED) initiative, a large scale economic and workforce development initiative. The first issue is how to attribute outcomes to project activities when a rigorous randomized controlled trial (RCT) is not feasible. What are the pros and cons of a matched comparison area approach? A second methodological issue concerns length of time for outcome measurement. Changes in economic growth and worker preparation and training are likely to take many years. Third, the paper discusses how the evaluations of WIRED attempted to map and analyze social networks. In short, the purpose of this paper is to contribute to the field of workforce development and economic development program evaluation by describing how thorny issues of attribution, outcome dynamics, and social network mapping can be addressed in evaluating large scale multi-county initiatives.
Case Study: An Approach to Examining Team Science in the Age of Translational Research
Presenter(s):
Kathryn Nearing, University of Colorado, Denver, kathryn.nearing@ucdenver.edu
Samantha Farro, University of Colorado, Denver, samantha.farro@ucdenver.edu
Marc Brodersen, University of Colorado, Denver, marc.brodersen@ucdenver.edu
Abstract: The Colorado Clinical and Translational Sciences Institute (CCTSI) funds 'team science' awards through its pilot grant program. These seed grants are awarded to teams of investigators based at multiple research/healthcare institutions and whose expertise collectively spans the translational spectrum. The programmatic theory of change is that novel collaborations will facilitate innovation and the emergence of new discoveries, as well as the translation (application) of results/advancements to improve clinical and community-based practice. This paper will present key characteristics of team science and a case study analysis of the first two cohorts of pilot awards. The analysis will examine the nature of the research question(s) being explored, how the experience impacted the development of translational research core competencies, and the research productivity of these teams compared to other pilot awardees. Emerging insights (re: opportunities and challenges of team science as an approach to fostering cutting-edge research) will also be presented.
Developmental Evaluation to Inform Programs Implemented Based on Meta-Analysis Recommendations
Presenter(s):
Juna Snow, Innovated Consulting, jsnow@innovatedconsulting.com
Michael Coplen, Federal Railroad Administration, michael.coplen@dot.gov
Joyce Ranney, Volpe Transportation Center, joyce.ranney@dot.gov
Abstract: This paper presentation will share lessons learned from a developmental evaluation (Patton, 1994; 2009) of a multistakeholder group that formulates safety recommendations for the railroad industry. The analysis group operates within the high-stakes context of railroad safety, in tension with the productivity pressures placed on workers. The group, composed of key representatives of government (regulatory), labor (unions), and management (carriers), is charged with conducting independent meta-analysis in order to create change in the practice of switching operations, with the ultimate goal of zero fatalities. The evaluator has conducted a retrospective study on the implementation process and its effects since the group's last report, in preparation for the release of its new report in 2011. The evaluator continues to serve the group as an embedded member who provides ongoing direction and reflection through an evaluative lens, balancing at-times contentious values, to support and foster innovation in safety operations within the railroad industry.

Session Title: Evaluating Out-of-School Time Program Quality
Skill-Building Workshop 774 to be held in La Jolla on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Jenell Holstead, University of Wisconsin, Green Bay, holsteaj@uwgb.edu
Mindy King, Indiana University, minking@indiana.edu
Abstract: This presentation describes a free assessment instrument that evaluators can use to assess out-of-school time program quality. Participants will learn how to use the instrument to assess programming, how to score it, how to provide feedback to program staff, and how to guide discussion to make enhancements in programming.

Session Title: New Directions in Multisite Evaluation
Panel Session 775 to be held in Laguna A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Robert Blagg, EMT Associates Inc, rblagg@emt.org
Abstract: Naturally occurring variation (e.g., setting, implementation, outputs, effects) within and across multisite evaluations can be measured to provide a basis for analyses that produce internally and externally valid knowledge supporting evidence-based practice. However, capturing this variation requires careful measurement of site environments, implementation (e.g., intervention or service), and outcomes across sites. An inductive (e.g., micro to macro) mixed-method measurement approach, which is flexible enough to adapt to program context, can accurately and efficiently capture this "real world" variation and support analysis that reveals the relation of this variance (constraints and contributions) to outcomes. This session will include four presentations which detail the evaluation theory and technique behind the natural variation approach (i.e., identify necessary steps in modeling, measurement, analysis and interpretation) and highlight several examples of current successful application of this approach. Implications and value of this approach for evaluation theory and practice will be discussed throughout.
Natural Variation Designs: Maximizing the Information Potential of Multi-site Evaluations
J Fred Springer, EMT Associates Inc, fred@emt.org
This presentation provides an overview of the logic and purpose of natural variation approaches as alternatives to experimental design when multiple contexts (e.g., programs, communities, classrooms) are units of analysis. The presentation a) explicates the logic of the approach and its design requirements; b) identifies measurement approaches (most often mixed method measurement) appropriate to capturing the important elements of variation in setting, design and implementation; c) identifies design adaptations (e.g., in-site comparisons, over time, effect size, clustering, meta-analytic technique, exploratory-confirmatory iterations) that provide a robust balance of internal and external validity in different resource and data environments; and d) identifies alternative analysis techniques that can be used in these studies (e.g., hierarchical analysis, cluster comparisons, meta-analytic regression). Specific examples from past and current evaluations are used throughout, and the benefits to development of useful evidence-based practice are emphasized.
Natural Variation Designs: Diverse Multisite Applications
Robert Blagg, EMT Associates, Inc., rblagg@emt.org
This presentation explicates the breadth of application of natural variation designs by presenting similarities and differences in four major multi-site studies. Studies to be highlighted include: the five-year, 48-site, CSAP-funded National Cross-Site Evaluation of High Risk Youth Programs, through which many aspects of natural variation design were explicated and refined; the current ONDCP-funded National Evaluation of the Drug Free Communities program, which is implementing a rigorous natural variation design including several hundred communities; the SAMHSA CSAT-funded, 20-site Adult Treatment Drug Court multi-site evaluation; and the US Department of Education analysis of bullying laws and policies enacted in four states and twenty-four sites. These studies represent unique multi-site environments that support variations in multi-site design. Benefits of the natural variation approach in producing actionable information useful to decision makers will be highlighted.
Evaluation of Substance Abuse and Mental Health Services Administration (SAMHSA) Center for Substance Abuse Treatment (CSAT)-funded Adult Treatment Drug Courts
Carrie Petrucci, EMT Associates Inc, cpetrucci@emt.org
This presentation provides a comprehensive measurement strategy to support the natural variation design and analysis being used in the multisite evaluation of 20 SAMHSA CSAT-funded Adult Treatment Drug Courts. The measurement model is based on a realist evaluation framework, which answers the question: what works best for whom under what circumstances or contexts? The presentation includes the logic model underlying the measurement tools; the comprehensive data sets that are drawn on to document setting, design, implementation and outcomes; and the site visit protocol that provides a practical solution to integrating observations, interviews, and program data gathered in site visits. Both quantitative and qualitative data are collected as needed to best understand processes and outcomes, including concept mapping interviews, focus groups and program records. The comprehensive measurement design was developed and applied through our highly skilled collaborative team that includes Westat, EMT, and Carnevale Associates.
Application of the Natural Variation Design to California's Statewide Evaluation of the Mental Health Services Act
Elizabeth Harris, EMT Associates Inc, eharris@emt.org
UCLA's Center for Healthier Children, Families and Communities and subcontractor EMT Associates are conducting the Statewide Evaluation of California's Mental Health Services Act. The study uses a natural variation design that emphasizes a) identifying county environment characteristics that distinguish between distribution, quality and efficiency of mental health service implementation; b) identifying service configurations that maximize quality and efficiency criteria; and c) identifying county policy and administrative practices that produce accountability and coordinated service delivery among multiple providers. Natural variation approaches to measurement and analysis provided a sound perspective for a) modeling the analysis, b) designing data collection, and c) conducting exploratory analyses to identify latent structures in setting and process data relevant to analysis questions. The presentation will provide detail on a) selection of optimal program and fiscal data, b) extraction of relevant data from program and fiscal records, c) creation of comparable data across counties, and d) preliminary results.

Session Title: Evaluation Follow-Up: Challenges and Lessons
Think Tank Session 776 to be held in Laguna B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Scott Chaplowe, International Federation of Red Cross and Red Crescent Societies, scott.chaplowe@ifrc.org
Discussant(s):
Osvaldo Feinstein, Madrid Complutense University, ofeinstein@yahoo.com
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Scott Chaplowe, International Federation of Red Cross and Red Crescent Societies, scott.chaplowe@ifrc.org
Abstract: This Think Tank will examine key challenges and lessons for international humanitarian and development organizations in evaluation follow-up. The utility of evaluations is widely recognized as a fundamental standard of quality evaluations, and evaluations are premised on their potential contribution to organizational effectiveness and learning. But in the real world, this is not always easy to achieve; many of us know of evaluation reports that collect dust on the shelf or are forgotten in the computer archive. How can evaluations be better planned, conducted, and reported upon to improve their follow-up, and ultimately increase their utility? How can organizations better design and employ processes and protocols for evaluation follow-up, such as management response and action planning? These will be the guiding questions of this Think Tank. It will draw upon reflections from a study commissioned by the United Nations Evaluation Group, as well as input from other participating members working with international organizations.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: The UTeach Institute's Approach to Program Replication and Multi-site Evaluation: Moving Forward to Measure Fidelity of Implementation and Sustain the Innovation
Roundtable Presentation 777 to be held in Lido A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Alicia Beth, University of Texas, Austin, abeth@austin.utexas.edu
Pamela Romero, University of Texas, Austin, promero@austin.utexas.edu
Kimberly Hughes, University of Texas, Austin, khughes@austin.utexas.edu
Mary Lummus-Robinson, University of Texas, Austin, mlummus@austin.utexas.edu
Mary Walker, University of Texas, Austin, mwalker@austin.utexas.edu
Abstract: The UTeach Institute was established in 2006 in response to national concerns about the quality of K-12 education in the areas of science, technology, engineering, and mathematics (STEM) and growing interest in the innovative and successful secondary STEM teacher preparation program, UTeach, started in 1997 at The University of Texas at Austin (UT Austin). The Institute currently supports and evaluates UTeach replication at 21 universities across the U.S. Excluding UT Austin, 4,190 students were enrolled nationwide in Spring 2011. Given UT Austin's retention and graduation rates, and the rates at which their graduates enter and are retained in the field, we project that graduates of these 21 programs will teach more than 3.5 million K-12 students by 2019. In this session, we will describe our approach to replication and multi-site evaluation, and seek ideas on ensuring the sustainability of these programs, evaluating fidelity of implementation, and future steps for research.
Roundtable Rotation II: The Art of Evaluating Common Constructs That are Commonly Misunderstood
Roundtable Presentation 777 to be held in Lido A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Susan Shebby, Mid-continent Research for Education and Learning, sshebby@mcrel.org
Sheila A Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
Jane Barker, Mid-continent Research for Education and Learning, jbarker@mcrel.org
Xin Wang, Mid-continent Research for Education and Learning, xwang@mcrel.org
Jesse Rainey, Mid-continent Research for Education and Learning, jrainey@mcrel.org
Abstract: In this presentation, evaluators discuss different methods used to collect data on what initially appeared to be a straightforward construct. Presenters will briefly describe a cluster randomized controlled trial examining the impact of English language learner (ELL)-specific curricular materials and teacher professional development on student English language proficiency. As part of this study, presenters collected data from participants regarding the educational programs (instructional models) used to instruct ELLs. However, there was simply no shared understanding of instructional model constructs at school sites. This is problematic when one considers that both primary research and secondary data analyses of ELL interventions often rely on self-report data founded on an assumption of a common understanding of constructs. Evaluators will discuss the challenges and benefits associated with the different data collection methods employed during the study. Although not originally planned, diverse methods were necessary and allowed for triangulation of data.

Session Title: Hear Me! Integrating Feedback From Distinct Perspectives for an Evaluation of a Statewide Childcare Provider Training on Nutrition and Physical Activity in Delaware
Multipaper Session 778 to be held in Lido C on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Gregory Benjamin, Nemours Health & Prevention Services, gbenjami@nemours.org
Abstract: Nemours Health and Prevention Services, with support from a USDA Team Nutrition grant, developed an innovative training for Delaware childcare providers on the changing obesity-related regulations for childcare settings. A multi-component, mixed-methods evaluation was used to first create and subsequently improve the training. Input from stakeholders was solicited in multiple ways and resulting changes were made to the training and companion resources along the way. Evaluation methods included a) focus groups with providers to assess needs and gain feedback on training materials and design; b) surveys that assessed satisfaction with the training, provider knowledge on regulations and whether practice changes occurred; and c) additional focus groups with parents to understand their needs related to nutrition and physical activity. This session will address the ways in which this project integrated feedback from different perspectives in real time, maximizing the potential for the training to have an impact on Delaware children.
Taking Pilot Evaluation Results to Heart, and Fast!
Gregory Benjamin, Nemours Health & Prevention Services, gbenjami@nemours.org
Nemours Health and Prevention Services, with support from a USDA Team Nutrition grant, developed an innovative training for Delaware childcare providers on the changing obesity-related regulations for childcare settings. A pilot training was first conducted with a cohort of childcare providers (n=73). Data collected via four focus groups (n=28) conducted with the pilot childcare providers supplied the evaluation team with rich, meaningful data, including a) ways to improve the statewide training; b) barriers and facilitators to implementing practice changes at their centers or homes; and c) their level of engagement (or lack thereof) with parents. Pre- and post-training surveys were also administered, and significant positive changes in providers' knowledge of the nutrition and physical activity childcare regulations were documented. Methodology and results will be shared, including the ways in which the feedback was used to improve the content and structure of the training and companion materials in real time.
An Outcome Evaluation of a Large-Scale Training for Childcare Providers in Delaware: Methodology and Results
Tiho Enev, Nemours Health & Prevention Services, tenev@nemours.org
Laura Lessard, Nemours Health & Prevention Services, llessard@nemours.org
Informed by data from the pilot training, Nemours Health and Prevention Services executed eight large-scale trainings for providers, reaching over 1,000 family home and center childcare providers in Delaware. The training included didactic and interactive sessions designed to increase participant knowledge of the regulations and ultimately increase compliance. A printed Toolkit was also distributed to participants and included materials for childcare directors/owners, teachers, food service staff and parents. An outcome evaluation was conducted to examine the impact of the training on participant knowledge, self-efficacy, and self-reported barriers and facilitators to implementation. Pre-, post-, and sixty-day follow-up surveys were administered to all participants either in person (pre- and post-surveys) or via mail (follow-up survey). Results from these surveys were used to further validate the appropriateness and acceptability of the training and companion materials to the target audience and demonstrate the overall impact of the training.
From Both Sides: Valuing Feedback From Parents Whose Children Attend Delaware Childcare Centers and Family Homes
Stefanie VanStan, Nemours Health & Prevention Services, svanstan@nemours.org
In concert with the pilot and large-scale evaluations of Delaware childcare providers who attended a nutrition and physical activity training, Nemours Health and Prevention Services also conducted focus groups with parents whose children attended the centers and homes that received the training. The purposes of these focus groups were: (1) to gain clarity on how parents want to receive nutrition information from their providers; (2) to learn where parents get child nutrition information; and (3) to gain a better understanding on parents' knowledge around Delaware nutrition and physical activity regulations. These focus groups provided a forum for the researchers to not only listen to this perspective, but also, to identify areas of opportunity where providers and parents can work together to promote healthy nutrition. Results from the focus groups will be presented, including points of triangulation from the childcare providers' feedback and the parents'.

Session Title: Methods and Tools for Evaluating Clinical and Translational Science
Panel Session 779 to be held in Malibu on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Arthur Blank, Albert Einstein College of Medicine, arthur.blank@einstein.yu.edu
Discussant(s):
Paul Mazmanian, Virginia Commonwealth University, pemazman@vcu.edu
Abstract: Clinical and translational science is at the forefront of biomedical research and practice in the 21st century. The NIH-funded Clinical and Translational Science Awards (CTSAs) are the largest initiative at NIH, and the 55 center grant evaluation teams constitute a unique national field laboratory in the evaluation of biomedical research and practice. The five presentations in this panel address the use of: Microsoft Project 2010 for linking strategic goals to evaluation; social network analysis of annual online surveys of scientists; methods of tracking and measuring the impact of research training programs; Tylerian matrix analysis to augment traditional logic modeling with information about strategic goals and objectives and link them to methods; and return on investment approaches for assessing pilot and educational programs. The panel will present five different evaluation studies and discuss their implications both for the specific context of the CTSAs and for the field of evaluation generally.
Linking Strategic Goals to Evaluation Using Microsoft Project 2010
Lisle Hites, University of Alabama, Birmingham, lhites@uab.edu
Susan Lyons, University of Alabama, Birmingham, lyons@uab.edu
Molly Wasko, University of Alabama, Birmingham, mwasko@uab.edu
Strategic planning and programmatic evaluation are two essential components of a successful program. At the University of Alabama at Birmingham's (UAB) CTSA, the Center for Clinical and Translational Science (CCTS), programmatic evaluation efforts have centered on the strategic planning process to ensure that performance measures are grounded in strategic goals and assessed accordingly. However, given the enormous scope of work inherent in each CTSA (e.g., the integration of many different cores or programmatic components, and the vast range of research targets stretching from pre-clinical research through the T1, T2, T3 and T4 spectrum), comprehensive evaluation and continuous improvement of so many activities becomes onerous. The UAB CCTS has elected to utilize Microsoft (MS) Project 2010 as an organizational management tool to facilitate this massive evaluation project, and the value added of using a comprehensive project planning tool, along with successes and challenges, will be discussed.
Using Survey-based Social Network Analysis to Establish an Evaluation Baseline and Detect Short-term Outcomes of a Clinical and Translational Science Center
Megan Haller, University of Illinois, Chicago, mhalle1@uic.edu
Eric Welch, University of Illinois, Chicago, ewwelch@uic.edu
Using data from an annual online survey of scientists at the University of Illinois at Chicago's Center for Clinical and Translational Science (CCTS) and a control group of comparable scientists, this paper will examine how collaborative network structure and resource exchange patterns vary between CCTS participants and non-participants and whether CCTS related institutions are associated with pattern variation. The survey captures ego-centric collaborative network structure both within and outside academe, duration and origin of relationship, resource and knowledge exchange, attitudes toward clinical and translational research, and a range of activities including grants, conferences, workshops, new manuscripts, clinical research initiatives, interaction with the public, and education and policy activity. Survey based ego-centric network analysis enables the establishment of a multidimensional baseline for analysis that captures early outcomes, enables attribution to program activities, and provides feedback to program managers.
Tracking for Translational: Novel Tools for Evaluating Translational Research Education Programs
Julie Rainwater, University of California, Davis, julie.rainwater@ucdmc.ucdavis.edu
Erin Griffin, University of California, Davis, erin.griffin@ucdmc.ucdavis.edu
Stuart Henderson, University of California, Davis, stuart.henderson@ucdmc.ucdavis.edu
The Clinical and Translational Science Awards (CTSA) incorporate innovative translational research training programs aimed at producing a diverse cadre of scientists who work collaboratively to rapidly translate biomedical research into clinical applications. Evaluation of these programs that emphasize team science, interdisciplinary research, and acceptance of a range of career trajectories challenge evaluators to develop outcome measures that go beyond simply counting traditional academic products, such as individual publications and grants. This presentation describes methods used at the UC Davis Clinical and Translational Science Center to track, analyze and visualize the value and quality of translational research training. Using informatics and evaluation expertise, we developed tools that track products in a way that captures the essential qualities of translational research, such as multidisciplinary collaboration and teamwork rather than individual success. A method for visualizing the collaboration networks of successful multidisciplinary teams with Collexis Research Profiles will be described.
Integrating the Logic Model and Tyler Matrix Approaches in Evaluating Translational Science
Babbi Winegarden, University of California, San Diego, bwinegarden@ucsd.edu
Angela Alexander, University of California, San Diego, a1alexander@ucsd.edu
Effective evaluation of components is a critical aspect of our 360-degree evaluation for our CTSA grant. In order to evaluate the components effectively, the UCSD CTRI uses a mixture of the Logic Model and the Tyler Model. When completing the Logic Model, we emphasize our inputs (resources) and define our outputs (outcomes) as primary (count data), secondary (improved knowledge, skills and/or abilities), and tertiary (change in patient outcomes or overall impact). The Tyler Model adds information about goals, objectives and methods that helps tie all of the pieces together. We have found that the Tyler Matrix method is a great complement to the Logic Model; together they achieve what neither does alone. So far, our component directors have found this evaluation process to be effective. We will share with the audience our evaluation process, logic models, the Excel spreadsheet that combines the two approaches, feedback from directors, and our metrics process.
Reframing Analysis: Return on Investment Protocols for Clinical and Translational Science Programs
Kyle Grazier, University of Michigan, kgrazier@umich.edu
William Trochim, Cornell University, wmt1@cornell.edu
Limited ability and experience assessing the value of CTSA research funding on accelerating the translation of scientific knowledge is a generic issue faced by both individual CTSAs and by NIH. To address this issue, investigators from U of M, Weill Cornell, and OHSU examine the return on investment of two key CTSA programs: pilot grants and education & training. By carefully studying the economic and social inputs and outputs of these programs, this work produces investigator, program and institutional estimates of return on investment. We create detailed protocols for assessing the value of these two CTSA functions. These protocols have specific objectives, methods, descriptions of the data to be collected, and how data are to be filtered and analyzed. We will provide a model and specific protocols that all CTSAs could potentially use to assess the economic and social returns on NIH and institutional investments in critical activities.

Session Title: Evaluation for Encouragement and Evolution to Innovation: Toward the New Progress Phase of RT&D
Multipaper Session 780 to be held in Manhattan on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Naoto Kobayashi, Waseda University, naoto.kobayashi@waseda.jp
Discussant(s):
Osamu Nakamura, National Institute of Advanced Industrial Science and Technology (AIST) Chugoku, osamu.nakamura@aist.go.jp
Abstract: Japan has recently suffered an unprecedented national tragedy: a powerful earthquake, an enormous tsunami, and serious nuclear accidents. In order to recover from the disaster and revive its industries (through advancement, transformation, creation, etc.), a rapid and timely chain of innovation is essential. While appropriate research, technology and development (RT&D) must of course be performed, effective and efficient evaluation of that RT&D, and reflection on its results, are especially important. We therefore put emphasis on evaluation that encourages the RT&D group, from the viewpoints of (1) strategy for innovative technology development, (2) efficient RT&D and academic-industrial alliances, (3) communication between academia and industry to realize innovation, and (4) new analysis methodologies for evaluating outcomes and economic effects. This session is intended to discuss these issues, including reports from various progress phases of RT&D at several organizations: Waseda University, AIST and NEDO.
Strategy and Evaluation of Research Initiatives in Waseda University
Naoto Kobayashi, Waseda University, naoto.kobayashi@waseda.jp
Along with its new research organization system, Waseda University started a new research program named 'Research Initiative' in 2009. The Research Initiative aims to increase international competitiveness and to form autonomous, sustainable research organizations. We have selected eight research initiative fields derived from global and social issues and from the university's strategic research priorities. Each research initiative has a research period of five years, and ex-ante, interim, final, and follow-up evaluations are performed before, during and after the period. Indices of (1) advancement, (2) originality, (3) autonomy, (4) academic and social influence, and (5) diverse human resources are taken into account in the evaluation. It is crucial that the evaluation encourage the research groups and their activity. Evaluation that encourages effective academic and social influence is especially important, although research in universities should be based on freedom and curiosity.
An Improved Approach of Research Unit Evaluation at the Beginning of the Third Research Program Term of AIST
Takashi Yoshimura, National Institute of Advanced Industrial Science and Technology (AIST), yoshimur@ni.aist.go.jp
Yoshiaki Tamanoue, National Institute of Advanced Industrial Science and Technology (AIST), y.tamanoue@aist.go.jp
Masashi Suzuki, National Institute of Advanced Industrial Science and Technology (AIST), suzuki-m@aist.go.jp
Shigeko Togashi, National Institute of Advanced Industrial Science and Technology (AIST), s-togashi@aist.go.jp
Hidenori Endo, National Institute of Advanced Industrial Science and Technology (AIST), h.endo@aist.go.jp
Kanji Ueda, National Institute of Advanced Industrial Science and Technology (AIST), k-ueda@aist.go.jp
We attempted to improve the efficiency of research unit evaluation while emphasizing the perspective of social outcomes at the beginning of the third research program term of AIST. The main improvements are to increase the number of external reviewers on the evaluation committee, reduce the burden of the evaluation process, and strengthen the recommendation function of the evaluation committee. We will summarize the results of the research unit evaluation in the first year of the third research program term and also present some analysis of the results with the improved evaluation system.
Strategic Collaboration Network to Develop the Low Carbon Society by the Innovative Renewable Energy
Osamu Nakamura, National Institute of Advanced Industrial Science and Technology (AIST) Chugoku, osamu.nakamura@aist.go.jp
Shinichi Matsui, National Institute of Advanced Industrial Science and Technology (AIST) Chugoku, matsui-shinichi@aist.go.jp
Yoshiyuki Sasaki, National Institute of Advanced Industrial Science and Technology (AIST) Chugoku, y.sasaki@aist.go.jp
The Japanese government has adopted a new growth strategy, consisting of life innovation and green innovation, in order to revitalize Japan. AIST has set these two innovations as the mission of its third research term, to support the economy and the environment and raise the nation's quality of life (QOL). In AIST Chugoku, the Biomass Research Center has been developing manufacturing technologies for renewable energy that utilize the woody biomass resources abundant in the Chugoku district. Moreover, based on these technologies, we act as a local innovation hub, collaborating with universities, public research institutes, and SMEs in the Chugoku area to strengthen local industry and the economy. Scenarios and roadmaps, the innovation hub network, and dissemination of research outputs are especially important for strategy and evaluation. In this study, strategy formation and a useful evaluation system will be discussed in order to enhance the dialogue between the actors mentioned above.
Research on the Derivative Effect Created by NEDO Projects
Sayaka Shishido, New Energy and Industrial Technology Development Organization (NEDO), shishidosyk@nedo.go.jp
Kazuo Fukui, New Energy and Industrial Technology Development Organization (NEDO), fukuikzo@nedo.go.jp
Masaru Yamashita, New Energy and Industrial Technology Development Organization (NEDO), yamashitamsr@nedo.go.jp
Mituru Takeshita, New Energy and Industrial Technology Development Organization (NEDO), takeshitamtr@nedo.go.jp
Starting in FY2004, NEDO began to conduct follow-up surveys to better understand the progress achieved after completion of its national projects. Entrustment contractors participating in NEDO's projects are surveyed utilizing questionnaires, and hearings are also held. In this study, we analyzed various cases of both successful and unsuccessful commercialization from such viewpoints as development phase organization, market factors, prospects for developing new business opportunities, and systems of research and development. As a result, we learned that NEDO's projects have created a derivative effect in peripheral areas (networks, personnel training, etc.) by developing related technologies in addition to the effect intended at the time of project initiation. In fact, it was clearly demonstrated that projects whose primary R&D themes achieved significant progress during the project period had a high rate of commercializing products within five years after project completion. The factors that produced this result are discussed.
Study to Evaluate the Cost-effectiveness of NEDO Projects: Analysis of 'NEDO inside Products' Survey
Masaru Yamashita, New Energy and Industrial Technology Development Organization (NEDO), yamashitamsr@nedo.go.jp
Kazuo Fukui, New Energy and Industrial Technology Development Organization (NEDO), fukuikzo@nedo.go.jp
Sayaka Shishido, New Energy and Industrial Technology Development Organization (NEDO), shishidosyk@nedo.go.jp
Mituru Takeshita, New Energy and Industrial Technology Development Organization (NEDO), takeshitamtr@nedo.go.jp
This study aimed to analyze and evaluate the cost-effectiveness of NEDO projects on a macro scale, including both direct and indirect effects, based on achievements attained through development projects over a 30-year period in the fields of new energy, energy efficiency, and environmental and industrial technologies. In the study, products which took more than five years to be commercialized after completion of a NEDO project and which were created using NEDO development results as their core technology were defined as 'NEDO inside products.' Approximately 30 NEDO inside products having relatively high sales were selected for the study. The sales history of the products was reviewed, and future sales, job creation effects and CO₂ emission reductions were estimated and analyzed using data collected from relevant companies and industry groups through questionnaires, interviews and scientific literature. The cost-effectiveness of NEDO's projects as well as social benefits were then evaluated from both a medium- and long-term perspective.

Session Title: Conceptualizing Culturally Responsive and Culturally Competent Evaluation
Multipaper Session 781 to be held in Monterey on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Stafford Hood, University of Illinois at Urbana-Champaign, slhood@illinois.edu
To Be or Not to Be: Culturally Competent versus Culturally Responsive Evaluator
Presenter(s):
Gabriela Juarez, University of Illinois at Urbana-Champaign, gjuarez02@gmail.com
Jennifer C Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Abstract: A common expectation across evaluation theories today is that evaluators should demonstrate cultural competence. AEA's Guiding Principles stipulate that a culturally competent evaluator should be aware of 'their own culturally-based assumptions, their understanding of the worldviews of culturally-different participants and stakeholders in the evaluation, and the use of appropriate evaluation strategies and skills in working with culturally different groups.' So, cultural competence signals understanding and skills that enable evaluators to appropriately read and interpret cultural meanings. However, cultural competence is often used interchangeably with cultural responsiveness. Cultural responsiveness signals explicit evaluator attention to and valuing of the cultural context of the program. These two concepts of cultural competence and cultural responsiveness thus overlap and also offer unique purchase on issues of culture in evaluation. This paper discusses key differences between these two concepts and then proposes to combine them into one practical role expectation for evaluators' cultural attentiveness.
Educational Evaluation, Social Justice, and Diversity: An Analysis of the Interplay Between Border Theory and Contextually Culturally Responsive Evaluations
Presenter(s):
Melba Castro, University of California, Riverside, melbac@ucr.edu
Abstract: Contextually culturally responsive evaluations (CCRE) provide evaluators with a methodological theory to promote educational equity for diverse student populations and communities who have historically been underserved. With a clear infusion of culture and context, evaluation can be used as a tool for promoting social justice. Importantly, border analysis opens a new dimension of critical inquiry over methodological and epistemological practices in evaluation, such as research design choices, data collection, interpretation, and reporting. This paper incorporates border theory with CCRE to examine the interplay of values, methodology, and power in which evaluation can be used as a tool to promote educational equity and social justice for underrepresented and marginalized students. It critiques the notion that unbiased evaluations serve all students and programs equally and provides a discussion of what it means to be a culturally competent evaluator.
A Theoretical Framework and a Conceptual Model for Measuring Culture: A Potential Tool for Conducting Culturally Responsive Evaluation?
Presenter(s):
Khawla Obeidat, University of Colorado, khawla.obeidat@ucdenver.edu
Stafford Hood, University of Illinois at Urbana-Champaign, slhood@illinois.edu
Abstract: For some of us, it is strongly contended that evaluators should not only engage in evaluation research and practice but that this work should be embedded in and serve local communities. At the same time, there is a danger that an evaluator's uninformed, insensitive, and even culturally biased attitudes will negatively impact each stage of an evaluation, from the evaluation questions to how the evaluation findings are presented to the stakeholders. For more than a decade, there has been an increasing call for the evaluation community to fulfill its role in our culturally diverse society, with cultural competence being a fundamental requirement for evaluators. This paper begins the exploration of those scales, questionnaires, tools, and surveys, standardized and unstandardized, that define and measure culture, as a preliminary step toward building a standardized measure of culture that can possibly be used in the research and practice of culturally responsive evaluation.

Session Title: New Perspectives in Qualitative Evaluation
Multipaper Session 782 to be held in Oceanside on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Jennifer Jewiss, University of Vermont, jennifer.jewiss@uvm.edu
Discussant(s):
Jennifer Jewiss, University of Vermont, jennifer.jewiss@uvm.edu
Adding Value with Arts-Informed Evaluations
Presenter(s):
Lynda Weaver, Bruyere Continuing Care, lweaver@bruyere.org
Michelle Searle, Queen's University, michellesearle@yahoo.com
Pamela Grassau, Elisabeth Bruyere Research Institute, pgrassau@bruyere.org
Lara Varpio, University of Ottawa, lara.varpio@uottawa.ca
Pippa Hall, University of Ottawa, phall@bruyere.org
Abstract: 'To go beyond surface knowledge and reach deeper thoughts, feelings and meanings…, we need to use the language of the mind: a language which is metaphorical, non-verbal, multi-sensorial and teeming with images' (Bento & Nilsson, 2009). Evaluations of programs concerned with human experiences are poised to accept a new mode of inquiry for planning, data collection and reporting. Arts-informed evaluations can allow program participants and evaluators to reflect on their experiences and express themselves creatively. This creativity is fostered by the holistic and personal involvement of the sensory, intuitive and intellectual dimensions of our experiences. We will discuss the origins and affiliations of arts-informed evaluation with qualitative research, the value of adding art to different aspects of evaluation, and noted limitations. Examples of art successfully incorporated into evaluation or research are also presented to show the potential of this burgeoning mode of inquiry.
Valuing Children's Visual Perspectives in Formative Evaluation
Presenter(s):
Tracie Costantino, University of Georgia, tcost@uga.edu
Melissa Freeman, University of Georgia, freeman9@uga.edu
Abstract: Students are an essential, although often overlooked, stakeholder group, especially when the overarching evaluative question is aimed at exploring the intersection of a multiplicity of new instructional practices. As a new professional development school, Synergy Elementary was implementing both state and district mandated practices and innovative educative and enrichment activities that altered the experiences of students in significant ways. Wanting to understand how students were experiencing these practices, we recruited 20 third graders and 20 fifth graders whom we met with twice in groups of 10 during spring 2010, for a total of eight focus groups. In addition to discussing their experiences, we asked students to draw themselves expressing a feeling they had in relation to a school experience. This paper will focus on an analysis of these drawings and what can be learned about students' experiences of school through alternative modes of representation.
How Stories Become Evaluating Tools
Presenter(s):
Rahel R Wasserfall, Brandeis University, rahelwasserfall@hotmail.com
Abstract: 'Story' is a central theme of qualitative research and evaluation. Each year, as the internal evaluator on staff of the International Summer School on Religion and Public Life, I discover the 'story' that both represents and illustrates the experience for the participants and epitomizes the program that particular summer. Although these stories can be analyzed on many different levels, as they are very rich, in this presentation I will focus on their evaluative qualities. This presentation defines what an 'evaluative', a 'discovery', and a 'reporting' story are and describes how to identify them among all the data collected. I will present such stories and show their utility as evaluating tools for the organizers. AEA participants will come away from this presentation with an understanding of the qualities of an 'evaluative story', what differentiates it from 'discovery' and 'reporting' stories, and its importance in evaluation.

Session Title: Beyond Fidelity II: Assessing the Context of Implementation
Think Tank Session 783 to be held in Palisades on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Miles McNall, Michigan State University, mcnall@msu.edu
Discussant(s):
Madeleine Kimmich, Human Services Research Institute, mkimmich@hsri.org
Abstract: The think tank will focus on re-conceptualizing the current approach to evaluating implementation of "evidence-based practices" (EBPs). In most cases, implementation evaluations focus on the extent to which EBPs are implemented with fidelity to their original models. However, because EBPs are implemented in a wide variety of organizational, political and cultural contexts, evaluations of EBPs that maintain an exclusive focus on fidelity miss important contextual factors that may impact both the fidelity and effectiveness of interventions. As such, broader frameworks are needed that capture factors that influence the success or failure of implementation. Preliminary work from a similar AEA 2010 think tank will be the starting point for the session. Participants will work in teams to refine the 2010 maps of contextual domains and identify ways to assess the impact of these domains on implementation. Particular attention will be given to real-world constraints on conducting these expansive studies of effectiveness.

Session Title: The Theory and Practice of Evaluation in The Research to Sustainability Continuum
Multipaper Session 784 to be held in Palos Verdes A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
Discussant(s):
Gabriel M Della-Piana, Independent Consultant, dellapiana@aol.com
Abstract: Moving from basic educational research to the adoption and sustainability of evidence-based practices is a significant issue for educators, researchers, and funders. Evaluation is an integral part of this process. In this session, the research to sustainability continuum will be presented and discussed from multiple perspectives. The panelists' viewpoints merge around the values inherent in the development and adoption of evidence-based practices in both formal and informal educational settings. The "stages" of (1) basic educational research; (2) development of educational practices; (3) replication and field testing; (4) scaling; and (5) adoption/institutionalization/sustainability provide a common foundation for discussing the theory and practice of evaluation in the research to sustainability continuum.
The Role of Evaluation in the Basic Research to Practice Continuum and an Example
Linda Thurston, National Science Foundation, lthursto@nsf.gov
Evaluation is a critical tool in the continuum that reaches from basic educational research to the adoption and sustainability of evidence-based practices in educational settings. At each juncture of the continuum, evaluation provides the evidence upon which to make decisions to continue forward toward the next stage or to return to the previous stage with an aim of improvement. This presentation will describe the basic stages of the continuum that are common to many evaluations and will use an example of the evaluation of a program for low-income women, Survival Skills for Women, from basic research to state-wide adoption during the years of the welfare reform movement. The program, developed by researchers 30 years ago, is still being implemented. The basic evaluation questions for making decisions at each stage of the continuum for this educational program will be described.
Examining the Value Assumptions Underlying the Research to Practice Continuum: Implications for Critical Tensions in Evaluation Practice
Connie Kubo Della-Piana, National Science Foundation, cdellapi@nsf.gov
Evaluation has roots in the confluence, competing demands, and tensions among expert practical knowledge, constraints of the social sciences, and the value underpinnings of the humanities. This paper examines the value assumptions underlying the interdependence of research, evaluation, and practice in developing program theory and setting standards of evidence. The paper argues that the inherent "pushes and pulls" of this interdependent relationship can lead to clarity, ambiguity, and paradoxes in evaluation practice. Implications are drawn for responding to several key critical tensions relevant to value assumptions in evaluation practice: fidelity to treatment and expert practitioner knowledge in adapting to context; choice of critical competitor or rival treatment comparison groups and the more common practice of "no treatment" or "extant groups", without observation of implementation in each case; and representation of programs as simple, complicated, or complex in relation to the organizational capacity for program development, implementation, and evaluation.
Utilizing the Research to Sustainability Continuum for Evaluation Capacity Building
Jan Middendorf, Kansas State University, jmiddend@ksu.edu
Providing quality evaluations for educational programs and projects requires evaluators to utilize basic paradigms and models to communicate the role of evaluation in decision-making at various stages in the journey from research to sustainability. This presentation will demonstrate using the continuum to facilitate important evaluation discussions in educational evaluation conducted by the Office of Educational Innovation and Evaluation at Kansas State University. Researchers must provide basic findings that can be translated into successful practices. Educators must utilize research-based practices, programs and products to assure success with learners. Administrators must make decisions about adopting and sustaining programs and practices. The research to sustainability model provides an important tool for evaluators to use to describe the relevance and value of evaluation at each step in the continuum to clients and stakeholders. This educative role of the evaluator promotes the capacity of evaluation users to collaborate in decisions about the evaluation process.
Evaluation of Discrete Steps in the Process of Institutionalizing Educational Practices: Are We Asking the Right Questions?
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
Underlying this continuum are assumptions that educational programs are progressive, that research informs that progress toward scale up and sustainability, and that implementation fidelity is required in order to move through the continuum and evaluate the effects of the program over these stages. This presentation will address the question: What happens when these assumptions do not mirror reality? It will focus on a program evaluation that is confronted with a program that has little if any implementation fidelity yet is scaling rapidly and could reach sustainability in many sites soon. The presenter will discuss: What questions should be asked in such cases? What challenges to the research to sustainability continuum are surfaced by such exceptions to the rule?

Session Title: Connecting Evaluation and Strategy
Panel Session 785 to be held in Palos Verdes B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Helen Davis Picher, William Penn Foundation, hdpicher@williampennfoundation.org
Discussant(s):
Patricia Patrizi, Public/Private Ventures, patti@patriziassociates.com
Abstract: Why do some evaluations lead to significant changes in strategy and process and others not? This session will explore how an evaluation's design and the way it is conducted affects its utility for organizational strategy and process redesign. Foundation case experiences in which evaluation helped shape changes in process and strategy will be presented and examined, exploring such crucial dimensions as: - When the evaluation occurs, - What types of issues/questions the evaluation poses, - Who is involved in the design and management of the evaluation, and how these activities are conducted, - What factors increase or decrease the evaluation "client's" willingness to participate in and listen to the results of an evaluation, and - How we make use of insights from successful evaluations to make the results of future evaluations more likely to be internalized, and, therefore, more effective.
Upping the Game: Increasing the Effectiveness of a Land Conservation Grantmaking Strategy Through Evaluation
Helen Davis Picher, William Penn Foundation, hdpicher@williampennfoundation.org
Peter Szabo, Bloomingdale Management Advisors, pszabo@bloomadv.com
In 2005, the Philadelphia-based William Penn Foundation engaged Peter Szabo of Bloomingdale Management Advisors to evaluate its landscape conservation grantmaking strategy. He analyzed 20 Foundation grants and reviewed other regionally based foundations with land conservation programs. Key recommendations included tightening the geographic focus of the Foundation's capital grantmaking, increasing the emphasis on complementary policy and program work to leverage it, and simplifying the grantmaking process. Through two additional research and evaluation projects, Szabo helped the Foundation further refine the strategy and explore alternatives, leading to a $5.5 million grant to an intermediary for technical and financial assistance to protect significant landscapes in two regional priority areas. Helen Davis Picher, Director of Evaluation and Planning at the Foundation and Peter Szabo will discuss how the engagement was structured and facilitated to ensure that findings and recommendations were used to clarify the Foundation's goals and subsequently inform changes to the strategy.
Robert Wood Johnson Foundation's Two Decades of Tobacco Control
Laura Leviton, The Robert Wood Johnson Foundation, llevito@rwjf.org
George Grob, Center for Public Program Evaluation, georgefgrob@cs.com
In 1991, the Robert Wood Johnson Foundation (RWJF) began to tackle one of the most intractable problems in the field of public health: tobacco addiction. Over the next two decades, it invested significant funds and talent, focusing on policy and systems changes, such as higher tobacco excise taxes, smoke-free indoor air laws, access to cessation treatment, and the federal regulation of tobacco. In January 2009, the Foundation contracted with the Center for Public Program Evaluation to provide an independent assessment of its tobacco work and to make appropriate recommendations based on that assessment. It was particularly interested in lessons from the tobacco work that could be applied to other large public health social transformation initiatives. This presentation will describe what that study found and how the Foundation used the results.
UC Davis Evaluation Informs Sierra Health Foundation Youth Grantmaking Strategy
Matt Cervantes, Sierra Health Foundation, mcervantes@sierrahealth.org
David Campbell, University of California, Davis, dave.c.campbell@ucdavis.edu
Nancy Erbstein, University of California, Davis, nerbstein@ucdavis.edu
University of California, Davis researchers evaluated the Sierra Health Foundation's REACH Youth Development Program from 2007-2010. As the centerpiece of an $8 million youth development strategy, REACH engaged youth with adults in seven coalitions to plan and implement community change strategies. Interim and final evaluation reports helped inform ongoing foundation discussion about its youth grantmaking strategy. Evaluation team and foundation representatives will discuss key factors that enabled a strong connection between the evaluation and strategy deliberations, including: 1) an evaluation design that included questions about foundation strategy; 2) extensive fieldwork that uncovered data relevant to strategy; 3) intentional efforts by the evaluators to pose strategy questions, rather than simply reporting outcomes; 4) evaluator knowledge of the regional context; and 5) a close working relationship between the evaluation team and foundation staff, facilitating adaptation both during and after the evaluation.

Session Title: Evaluation to Improve Distance Learning: Distance Learning to Improve Evaluation: Lessons From Three Continents
Multipaper Session 786 to be held in Redondo on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Clare Strawn,  International Society for Technology in Education, cstrawn@iste.org
Evaluation of the Teacher Training Programme at Indira Gandhi National Open University, India
Presenter(s):
Sandhya Sangai, National Council of Educational Research and Training, sandhya.sangai@gmail.com
Suresh Garg, Indira Gandhi National Open University, scgarg@ignou.ac.in
Abstract: The B.Ed programme, launched by Indira Gandhi National Open University, India in January 2000 to cater to a felt need, has expanded several-fold since then (from 2,000 learners to 25,000 learners). This paper is the outcome of an evaluation of the B.Ed programme adapting major tenets of the CIPP model. The findings are based on responses from a randomly chosen sample of 858 learners and 62 teacher educators from across the country. The methods employed included documentary analysis and a questionnaire-based survey. The analysis showed that the programme was well designed and the printed study materials were of high quality; however, the workload was perceived to be heavy, use of technology was rare, and student-teachers lacked aptitude for problem solving and independent thinking. In spite of this, the success rate was above 90%. To improve the programme, it would be desirable to include IT-related courses and considerably improve learner support services.
Contributions of the Evaluation Process to the Hi Tourist! Program
Presenter(s):
Monica Pinto, Roberto Marinho Foundation, monicap@futura.org.br
Abstract: The purpose of this paper is to present and discuss the contributions of the evaluations carried out during the initial phase of implementation of the Olá Turista! program. The program is developed by the Roberto Marinho Foundation in association with the Brazilian Ministry of Tourism and aims to improve reception services for foreign tourists for the 2014 World Cup. The program offers 80,000 free seats in online English and Spanish courses, which include eighty hours of training. The evaluation processes addressed here, in terms of their contributions to the enhancement of the program, are: a comprehensive diagnosis of the tourism sector (the program's first action) in some of the cities where the project will be carried out; and the pilot phase, in which the course activities, the use of the materials, and the target students' profile were observed and tested.

Session Title: Methods II: Methodological Issues in Assessment in Higher Education
Multipaper Session 787 to be held in Salinas on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
John Yun,  University of California, Santa Barbara, jyun@education.ucsb.edu
Partial Respondents in Online Course Evaluations: Quantitative and Qualitative Analysis of Response Rate Patterns
Presenter(s):
David Nelson, Purdue University, davenelsond@gmail.com
Abstract: The advent of online student evaluations of course instruction has re-ignited debates about evaluation procedures and validity. Chief among these concerns is the nationwide decline in response rates for course evaluations conducted via an online medium. This paper examines patterns of student response rates in online evaluations from a large public research institution in the Midwest. It identifies several factors that may hinder student participation in voluntary course evaluations, and introduces a student demographic group that was heretofore absent from administrative analyses of student evaluation response rates. Data analysis demonstrates marked self-selection among students who are now presented with multiple evaluations to complete at once, in contrast to the staggered structure of paper-and-pencil course evaluations. An anonymous survey of these 'partial respondents' provides some insight into the motivations of students and their choices in which surveys to complete.
What are Course Evaluations Evaluating?: Establishing the Validity of University Course Evaluations
Presenter(s):
Nancy Rogers, University of Cincinnati, nancy.rogers@uc.edu
Jerry Jordan, University of Cincinnati, jerry.jordan@uc.edu
Abstract: While data from course evaluation forms are often used to make decisions about faculty and curriculum development, we seldom perform thorough validations of the course evaluation instruments themselves. The utility of these data can be obscured or diminished when questionnaire items are interpreted by students in ways not intended by evaluators constructing the survey. This research is centered on validating course evaluation instruments of undergraduate courses. First, two forms of data were collected to discern student perceptions of individual evaluation items. Students were asked in both questionnaire and interview formats what individual items meant to them and what factors drove their responses to those items. These student responses were compared with the intent of these items as articulated by the administrators/developers of the instruments. Since data collected through course evaluation instruments is often the foundation of curriculum reform, validation of instrument items is paramount to effective data-driven decision making.

Session Title: Values That Evaluation Brings to the Policy Process
Multipaper Session 788 to be held in San Clemente on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Mary Achatz,  Westat, maryachatz@westat.com
Deliberative Democratic Evaluation and Expanding the Role of Evaluation in the Policy Process
Presenter(s):
Cindy Roper, Clemson University, cgroper@clemson.edu
Abstract: This paper explores deliberative democratic analysis as a way to expand the role of evaluation in the policy process. It examines the relationship between evaluation and policy change, explores mechanisms for effective democratic evaluation, and addresses how these mechanisms can be integrated into evaluation design. Using criteria put forth by Ernest House and Kenneth Howe in their paper, 'Deliberative Democratic Evaluation' (2000), it discusses how inclusion, dialogue, and deliberation can allow citizen/stakeholders to contribute to the legitimization of public policy and to participate in decision making in a meaningful way. Reference: House, E. R., & Howe, K. R. (2000). Deliberative democratic evaluation. New Directions for Evaluation, 2000(85), 3-12.
Gender Sensitive Monitoring and Evaluation System, Science, and Applications
Presenter(s):
Rasha Qudisat, Ministry of Social Development of Jordan, rashaqudisat@gmail.com
Abstract: Until the end of 2008, the Ministry of Social Development (MoSD) had no official unit charged with monitoring and evaluation (M&E). Over the last two years, however, MoSD has done substantial work to strengthen its technical and institutional capacity and improve M&E efficiency, with particular attention to gender in its policies, programs, and decision support systems. Institutionalizing a gender-based M&E system in a government institution with as wide an outreach as MoSD is a substantial enterprise that requires organizational, managerial, and cultural changes within the structure of the ministry. Clear communication of the vision and clarity about the purpose of the M&E system are important elements at the outset, and the work must follow a participatory approach so that ownership of the M&E system leads to its adaptation and application and, ultimately, to maximum benefit from the system.
I am an Activist: The Obligation of the Evaluator to Take Sides in a Contest of Valuing
Presenter(s):
Terence Beney, Feedback Research & Analytics, tbeney@feedbackra.co.za
Abstract: This paper argues that the evaluator is ethically compelled, by virtue of adopting the discourse of scientific objectivity, and within a context defined as 'development', to represent the interests of intended beneficiaries. This position is substantiated by an illustrative analysis of an end-of-programme evaluation of a child labor intervention in Southern Africa. Appropriately critical findings precipitated a contest over the content of the text by competing interests personified in the donor, the programme implementer, the Botswana government, and the evaluator. Ever-present was the marginalized voice of the intended beneficiaries of the programme, namely the Naro-speaking San of the Gantsi District. The paper describes the competing discourses that emerged and how they influenced the final version of the text. The lessons learned for preserving the objective case in the interests of the intended beneficiaries are documented.

Session Title: Using Webinars to Build Evaluation Capacity: The Collaborative, Participatory, and Empowerment Evaluation Experience
Panel Session 789 to be held in San Simeon A on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
Discussant(s):
Stephanie Evergreen, Evergreen Evaluation, stephanie@evergreenevaluation.com
Abstract: During March 2011, the American Evaluation Association's Collaborative, Participatory and Empowerment Evaluation TIG sponsored three webinars to build evaluation capacity. The first webinar was about empowerment evaluation, facilitated by Fetterman and Wandersman. The second highlighted the use of participatory evaluation, hosted by Ann Zukoski and Mia Luluquisen. The third webinar, about collaborative evaluation, was presented by Rita O'Sullivan and Liliana Rodriguez-Campos. The panelists found webinars to be a useful, efficient, and effective way of introducing these approaches to the membership. In the spirit of reflective practice, and learning from our experience, we have organized a panel of webinar hosts to highlight recommended steps and lessons learned. The discussion is organized around the following aspects: planning, implementation, and follow-up. Challenges include: - summarizing an entire approach into a 10-minute presentation, - managing technical issues (from optimal picture and font size to sound quality), and - presenting as a team (from remote sites).
Overview of the AEA Webinar Platform
Susan Kistler, American Evaluation Association, susan@eval.org
Susan will begin by providing context for the discussion, including an overview of the webinar platform used by AEA. She will build on this presentation by providing lessons that the association has learned in conducting webinars, including: technology considerations, specifically as they relate to sound quality, bandwidth, graphical user interface (user-friendliness), and computer platform compatibility; presenter considerations, related to successful presentation and facilitation; and moderator considerations, specifically as they relate to increasing access for attendees around the world (giving voice within audiences of varying sizes). In addition, Susan will highlight the limitations and realities encountered as a webinar provider. Finally, Susan will conclude with a very short description of webinar evaluation considerations, focusing on approaches designed to increase response rates and provide reliable and valid data.
Hosting a Webinar: Planning, Implementation, and Follow-up
David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
David will begin with a brief description of what it is like to host or facilitate a webinar, including planning, implementation, and follow-up. He will highlight the "smoothness" factor, a result of careful planning, practice, trial runs, and knowing the topic and co-facilitator in some depth (a previous working relationship). He will also discuss the importance of relying on AEA staff and webinar colleagues to critique initial drafts. Feedback topics include: mechanical and technical issues, such as making the picture on a slide large and printing the text over it to heighten the visual impact in a visual medium; and substantive content issues, such as including references to both process and outcomes, as well as references and resources. David will also speak to the issue of when things "get hairy" and go wrong in the middle of the presentation. These problems can be anticipated by sticking with the overall plan.
Webinars: Best Practices, Impact and Limitations
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Abraham Wandersman will discuss some best practices issues and expected outcomes in relation to short and long webinars. He will discuss the potential impact of the medium and some of its limitations.
Webinars: What to Include and Exclude, Tips, and Lessons Learned
Ann Zukoski, Rainbow Research Inc, azukoski@rainbowresearch.org
Mia Luluquisen, Alameda County Public Health Department, mia.luluquisen@acgov.org
Ann Zukoski and Mia Luluquisen will follow Abe's presentation and discuss three main topics: 1) how to decide what to include in a 10-minute webinar presentation, since the short amount of time requires careful decisions about what to include, what to exclude, and how to make the presentation fit with a series of webinars on the same topic; 2) a list of helpful tips for what to have on hand to prepare for any technical difficulties; and 3) lessons learned and ideas for how future AEA webinars can be improved.
Webinars: Tools to Heighten Awareness in Evaluation
Rita O'Sullivan, University of North Carolina, Chapel Hill, rjosull@mindspring.com
Rita O'Sullivan will discuss how webinars can be used to heighten awareness of recent thinking and new outlets for work in collaborative evaluation. Her webinar extracted information slated for publication that previewed a special issue of Evaluation and Program Planning focused on collaborative evaluation. The limited webinar time was nevertheless sufficient to introduce the audience to the essence of the work and, hopefully, to encourage them to consult the more detailed article and special issue. Further, she will reflect on how partnering in these types of efforts can enhance the process.
Using Solid Presentation Skills to Make it a Collaborative, Participatory, and Empowering Experience
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
A webinar has many benefits for the presenter and audience. For example, it helps reach a larger audience and lets participants learn about new topics while being cost-effective, and it can be collaborative by including question-and-answer opportunities for dialogue. With the growing expectation of delivering presentations via webinar, Liliana will discuss how to capture the audience's attention through solid presentation skills. No matter the type of webinar, presentation and communication skills are what ensure that the audience will remain engaged throughout the experience. Liliana will share information about how to refine your presentation style and engage the audience's interest. She will also share visual layout guidelines and how to apply them for maximum effect. Furthermore, Liliana will identify some distracting situations (e.g., a difficult audience) that can arise in your sessions and how to address those distractions.

Session Title: Examining Educational Policies and Practices Affecting Students "At Promise"
Multipaper Session 790 to be held in San Simeon B on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Linda Meyer,  Claremont Graduate University, linda.meyer@cgu.edu
Discussant(s):
Kate LaVelle,  The Boy's and Girl's Aid Society of Los Angeles, klavelle@5acres.org
A Quantitative Study of the Characteristics of Transient and Non-Transient Students in Nevada Elementary Schools
Presenter(s):
Andrew Parr, Nevada Department of Education, aparr@doe.nv.gov
Bill Thornton, University of Nevada Reno, thorbill@unr.edu
Abstract: This evaluation provides a summary of the assessment of math and reading, student characteristics, and school factors for approximately 14,500 students from nearly 300 elementary schools across Nevada, with specific emphasis on student transience (student mobility). Transient students are not currently recognized as an at-risk subpopulation under the No Child Left Behind (NCLB) Act, yet they are more likely than non-transient students to fall into at least one of the at-risk subpopulations that are recognized in NCLB legislation. Attention to curriculum and school processes may prove to be important in serving the educational needs of transient students. The paper presents findings and recommendations, and discusses related policy issues.
Values, Equity, and Accountability: Exploring State Alternative Education Policy
Presenter(s):
Lynn Hemmer, Texas A&M International University, lynn.hemmer@tamiu.edu
Tara Shepperson, Eastern Kentucky University, tara.shepperson@eku.edu
Abstract: Performance-based standards remain a powerful force in state and federal accountability systems, but it is less clear how states define and document expectations for alternative education (AE) students. Because AE schools are encouraged to design programs to prevent or recover student dropouts, accountability rules governing traditional schools may not be appropriate in AE settings. There are indications that states often fail to collect outcomes data for alternative students following prevailing accountability requirements, and there is little direct information about accountability, policies, and outcomes assessment for AE schools nationwide. This paper presents a cross-case descriptive study of California, Kentucky, and Texas policies, with the goal of analyzing issues of equity and values related to AE programs. Findings indicating lesser accountability requirements suggest reduced expectations for AE programs and the at-risk students they serve, raising questions about educational equity.
Defining Actions and Values: Participatory Logic Modeling by Alternative School Teachers
Presenter(s):
Tara Shepperson, Eastern Kentucky University, tara.shepperson@eku.edu
Lynn Hemmer, Texas A&M International University, lynn.hemmer@tamiu.edu
Abstract: A group of teachers at an alternative school serving grades 7-12 worked together, backwards-mapping student outcomes to classroom activities to clarify their beliefs and priorities for educating highly at-risk students. The exercise revealed the tacit belief that, by building relationships with otherwise disenfranchised students, teachers can engage them in learning. Teachers were able to show a connection between classroom activities, improved student behavior, and increased learning. Teachers also discussed goals to develop the whole child for adult life. Less clear was how the school's project-based learning would, in the short term, improve student scores on standardized tests. Participatory pathways analyses helped teachers reflect on their values and priorities, build capacity to move towards academic rigor, merge innovation and accountability, and sustain the program.
Wrapping Services Around Children: An Evaluation of Wraparound
Presenter(s):
Jason Daniels, University of Alberta, jason.daniels@ualberta.ca
Brad Arkison, University of Alberta, brad.arkison@ualberta.ca
Stanley Varnhagen, University of Alberta, stanley.varnhagen@ualberta.ca
Abstract: Wraparound is described as an intervention planning process that can be applied to situations in which individuals have compound needs across many life domains that require many service agencies and/or government ministries (VanDenBerg & Grealish, 1996). Wraparound addresses the complex needs of children and youth through a plan for services and supports that draws on the strengths, resources, and collaboration of multiple sectors and/or agencies. In this session we will describe our experiences with a large-scale evaluation of wraparound approaches within a Canadian province. Using research that we have conducted, we will present and then critique two different approaches for determining and demonstrating the value of wraparound: fidelity to theory and Social Return on Investment (SROI), two approaches that can provide different types of information about the effectiveness of educational interventions. This session will provide attendees with the opportunity to examine alternative methods of determining value.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Funder Goals Versus Evaluator Goals: When World Views Collide
Roundtable Presentation 791 to be held in Santa Barbara on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG
Presenter(s):
Michelle Simpson, University of Nebraska, michelle.simpson@unmc.edu
Kate Golden, University of Nebraska, kgolden@unmc.edu
Abstract: The relationship between the funders of various programs and the evaluators brought in to measure the effectiveness of those programs can be a challenging one to navigate. At times, the funder's focus on outcomes above all else can conflict with the evaluator's interest in analysis of process, program fidelity and adherence to fundamental standards of evaluation. Our work with both private foundations and government agencies has raised the following questions: 1. How do evaluators respond to funder pressure to highlight only positive outcomes? 2. How should evaluators document failures of fidelity to the intervention model being evaluated? 3. What happens when funders and evaluators discover they have very different understandings of appropriate evaluation techniques? We hope our stories from the trenches can spark a constructive dialogue on how to turn these challenges into opportunities. We have employed some helpful strategies that may be of use to others in the field.
Roundtable Rotation II: Negating the Power of Accountability
Roundtable Presentation 791 to be held in Santa Barbara on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG
Presenter(s):
Kathleen Toms, Research Works Inc, katytoms@researchworks.org
Susan Pujdak, Inovo21, spujdak@inovo21.com
Abstract: The primary purpose of the 'Negating the Power of Accountability' round table discussion will be to explore the responsibility of evaluators 'to take into account the diversity of general and public interests and values' in the present climate of results based and accountability purposed evaluation. The presenters will trigger discussion through a series of questions based on the Guiding Principles for Evaluators: E, 'Responsibilities for General and Public Welfare.' The session will specifically address critical issues facing evaluators regarding control of the dissemination of evaluation findings by clients. The round table participants will strategize on ways to address this important issue while adhering to the guiding principles.

Session Title: Getting Down to Cases: Using Data to Inform Evidence-based Decision-making
Panel Session 792 to be held in Santa Monica on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Sally Olderbak, University of Arizona, sallyo@u.arizona.edu
Abstract: Ultimately, evidence has to be meaningful and relevant at the level of the individual decision maker. Data are only the raw material for the formulation of evidentiary conclusions. How such conclusions are formulated can be critical to their use. This session describes some settings and empirical data that need to be reflected in decisions made in specific instances. A meta-analytic study of Cognitive Behavioral Therapy provides conclusions that permit a range of decisions at levels represented by clinics, clinicians, and patients. Another example shows how calibration of an index of addiction severity can facilitate decisions at the clinical level. A third illustration, having to do with measures of impairment in clinical settings, shows how decisions may be improved when additional variables are considered. Finally, a large scale analysis of data on value-added assessment of teacher performance will be used to show the problems associated with decisions about individual teachers.
Meta-analytic Assurance and the Individual Case
Sacha Brown, University of Arizona, sdbrown@u.arizona.edu
Most treatment evaluations, including evaluations of Cognitive Behavioral Therapy (CBT), focus on aggregate-level outcomes while ignoring the cost-benefit perspectives of individuals receiving treatment, therapists/agencies providing treatment, and those paying for treatment. Initial analyses comparing contributions from cognitive and behavioral manipulations demonstrate that cognitive therapy (CT) contributes little to CBT therapeutic outcomes (e.g., Longmore & Worrell, 2007). The limited additive effect of CT and the lack of analyses from stakeholder perspectives necessitate a more comprehensive examination of CBT efficacy. We evaluated a sample of randomized CBT component-controlled trials treating anxiety disorders. Our statistical analyses occurred in two major phases: one evaluated the additive effects of each unique component against wait-list and placebo controls; the second compared the effects of the components against one another. Stakeholder cost-benefit variables were also incorporated (e.g., treatment duration, treatment mode, specific disorder). Findings suggest that analyses that include stakeholder perspectives provide new insights into treatment effectiveness.
Beyond Evaluation: Individual Differences do Matter
Mende Davis, University of Arizona, mfd@u.arizona.edu
Latent correlations between impairment, functioning, and disability in several disease conditions are sometimes very high, e.g., for cerebral palsy and muscular dystrophy. For example, if patients cannot move their legs, they cannot walk up stairs. Despite the magnitude of the correlations, some of the residual variance can be explained by other variables. A notable instance is the additional explanatory power of intelligence: smarter people are able to find ways to get around specific disabilities and impairments in order to permit better functioning. The relationships among these variables are quite different from one condition to another. Personal/family characteristics and available resources contribute to use of services and outcomes. States with ample funding for services, families 'in the know' regarding useful services, and individual characteristics (motivation, goals, habilitation vs. rehabilitation) make a big difference in outcomes. Evaluation results in general may not provide a good guide for what should be done in particular.
Calibrating an Addiction Severity Index for Better Decision Making
Ryan Seltzer, University of Arizona, rseltzer@email.arizona.edu
Calibration is a method for enhancing the interpretability of changes in arbitrary survey scores often attributed to treatment intervention. The Fagerstrom Test for Nicotine Dependence (FTND) is a six-item survey used by tobacco cessation programs to characterize patients into low, medium, and high levels of nicotine dependence. Calibration of the FTND is essential for quantifying how patients with varying levels of dependence progress through and use cessation programs. The FTND is administered at intake to all patients enrolling in the Arizona Smokers' Helpline. Statistical techniques (e.g., GLM and SEM) were used to link program data such as relapse frequency, duration in program, number of counseling sessions, and quit rate to differences in FTND scores. Calibration procedures revealed, for instance, that for every one-point increase in FTND score, the probability of abstinence at seven months decreased, on average, by 13%. Calibration can identify programmatic areas that require flexibility based on tobacco dependence.
Value-added Analyses of Teacher Performance: Maybe Good in General, Maybe Troublesome in Practice
Mei-kuang Chen, University of Arizona, kuang@email.arizona.edu
Value-added performance measures for teachers are rapidly proliferating. Although they have not yet been fully evaluated, it seems likely that, in general, they will prove to have some utility in guiding policies about teacher evaluation. It is not so clear that such measures can be easily and uniformly applied in individual schools and to individual teachers. Analyses of a very large data set representing data on about 3000 teachers and more than 150,000 students suggest that the problems at school and teacher levels will be formidably challenging. Getting from "significant" findings about value-added teacher scores to decisions about training requirements, salary increments, and retention may result in resistance that will defeat the entire enterprise. Evaluators will need to be proactive in helping decision makers (and other stakeholders) to understand value-added measures and how they need to be used as part of the process of improving education.

Session Title: Identifying and Engaging Needs Assessment Stakeholders
Multipaper Session 793 to be held in Sunset on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Needs Assessment TIG
Chair(s):
Maurya West Meiers,  World Bank, mwestmeiers@worldbank.org
Discussant(s):
Hsin-Ling Hung,  University of North Dakota, sonya.hung@und.edu
Tried and True Strategies for Identifying and Valuing all Stakeholder Perspectives in a Large-scale Needs Assessment
Presenter(s):
Kara Smith, Kids Included Together, kara@kitonline.org
Abstract: Kids Included Together (KIT) is a non-profit organization that specializes in providing training for out-of-school time organizations committed to including children with and without disabilities in their programs. In 2011 KIT was contracted by the Department of Defense to provide its services to child, youth, and teen programs on all United States military bases in the world. The initial goal of the contract was to conduct a needs assessment that was comprehensive of all branches and unique to each as well. KIT's evaluator ensured that perspectives from all levels of the field were included and valued. This paper provides a model for identifying and valuing all stakeholders in a large-scale, multi-level needs assessment. It discusses the challenges faced when identifying all stakeholders, managing the challenges of soliciting opinions, and reflecting all perspectives in the subsequent evaluation plan. The resulting needs assessment was useful and respected by all stakeholders.
Measuring Collaborative Integration to Inform Needs Assessment: The Massachusetts Medication Safety Alliance Promotes Responsive Regulation in Nursing Homes
Presenter(s):
Teresa Anderson, University of Massachusetts, terri.anderson@umassmed.edu
Michael Hutton, Woodland Associates, michaelhuttonwoodland@gmail.com
Abstract: The Massachusetts Medication Safety Alliance (Alliance) is a fifteen-member inter-organizational collaborative of state regulators, the Centers for Medicare and Medicaid Services, and health professional organizations, purposefully developed to design a systems approach to safe medication administration in nursing homes. The Alliance is engaged in a strategic planning effort to promote a 'responsive regulation' (Stone et al., 2009) approach to addressing medication events in nursing homes. Responsive regulatory mechanisms rely on leadership, shared decision making, and open communication. With National Council of State Boards of Nursing funding, University of Massachusetts Medical School evaluators used Woodland's Collaboration Evaluation Improvement Framework (CEIF) (Gajda, 2004) to measure both the Alliance's current level of integration and the level needed to sustain its Nurse Employer Safety Partnership Model (Anderson et al., 2011). State regulatory members have demonstrated full partnership. Further collaboration development across the Alliance is needed to introduce the responsive regulatory model as planned.
Listening to Ordinary People
Presenter(s):
Zoe Barley, zbarley Consulting LLC, zbarley@earthlink.net
Abstract: This presentation discusses a revised approach to needs assessment necessitated by changes in the USDOE's Regional Educational Laboratory contracts. The portfolio of work was to strongly emphasize research and to be based on specific regional needs. For us (the central US REL), this rethinking was coupled with a new awareness of the issues Kellerman framed for needs assessment in 1987: How can ordinary people make themselves heard and participate in decision making? And for needs assessors: Who are we as assessors? How can we get respondents to articulate their own agendas? And do we have a responsibility for follow-up? We selected a three-pronged approach: framing a series of applied research projects based on the broad needs; holding discussions with key groups to probe more deeply; and collaborating with constituents to use data they had on hand, or would collect. Ordinary people made themselves heard and expected us to respond.

Session Title: Formative and Summative Assessment Across Educational Contexts
Multipaper Session 794 to be held in Ventura on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Mya Martin-Glenn,  Aurora Public Schools, mlmartin-glenn@aps.k12.co.us
The Problems of Test and Item Bias in the Context of Sociopolitical and Educational Values in Russian National Public Examinations
Presenter(s):
Victor Zvonnikov, State University of Management, zvonnikov@mail.ru
Marina Chelyshkova, State University of Management, mchelyshkova@mail.ru
Abstract: Our national test-based examinations began in 2001, but debates concerning the justice and validity of their results have not died down since. The purpose of our research concerns the problem of justice in the context of sociopolitical and educational values for such examinations. Our approach is based on the analysis of selection bias and item bias. We conducted the first type of analysis at the left end of the scale, where the results of the weakest graduates are located; there, we tried to reduce the risk of wrong decisions and to develop a selection model for awarding the school certificate. We conducted the second type of analysis at the right end of the scale, analyzing item bias because Russia has many national schools in which teachers do not use the Russian language in instruction until the 10th grade. We therefore tried to exclude items that discriminate against graduates of these national schools.
Raising the Bar for Career and Technical Education Standards and Assessment: The Case of Tennessee's Technical Skills Attainment Rubric
Presenter(s):
Shira Solomon, CNA Education, solomons@cna.org
Gay Burden, Tennessee Department of Education, gay.burden@tn.gov
Abstract: The face of secondary Career and Technical Education (CTE) is changing. Once considered to be the path for students not going to college, today's CTE programs are charged with preparing students for the high-wage, high-skill careers that generally require post-secondary education and training. Tennessee has taken a unique approach to meeting the federal accountability requirements for technical skills attainment by creating proficiency definitions for CTE competencies that mirror NCLB proficiency categories. Unlike many states that purchased third-party exams or developed their own assessments for CTE program areas, Tennessee developed a Competency Attainment Rubric to be used by all CTE teachers in the state. Will this Rubric help CTE teachers lead the state's effort to be First to the Top? In this presentation, we share lessons learned from Tennessee in the context of challenges all states face to increase the rigor of CTE teaching and improve the assessment of CTE learning.
Everything Matters: Understanding the Impact of Context on Formative Assessment
Presenter(s):
Leigh Tolley, Syracuse University, lmtolley@syr.edu
Abstract: In the evaluation of educational programs, the context in which an intervention is being implemented greatly impacts its outcomes. Changes within an educational context that are relatively common, such as teacher turnover, a transient student population, or varying attitudes toward novel teaching strategies, have the potential to influence the determination of whether or not a program should continue. How does context, whether it is static or emerging, affect formative assessment, which involves teachers using evaluative skills and strategies to improve student learning? This paper will explore studies of the implementation of formative assessment in PreK-12 schools, and how the context in each study affected its perceived efficacy. Factors will be examined such as student demographics and ability levels, teachers' instructional strategies, and administrative support of formative assessment and its use in the classroom. Implications for the application of this research to formative evaluation and program evaluation will also be considered.
The Formative-Assessment Process for Teachers at Schools Under Review
Presenter(s):
S Marshall Perry, Dowling College, perrysm@dowling.edu
Abstract: This paper concerns the use of a formative-assessment process for teachers at low-performing schools, as indicated by student mastery levels on standardized assessments. Two researchers worked with teachers at one high school and one middle school over the course of a school year. Teachers in the Mathematics, English Language Arts, Social Studies, and Science departments were asked to create assessments using several elements suggested by W. James Popham and others. These assessments were intended to be only five questions long but consistent within departments, so that cross-case analysis was possible. Teachers received professional development in formative versus summative assessment, higher-order thinking strategies, item analysis, and instructional strategies for students not at mastery. While the findings are tentative, the results support the promise of formative assessment as a tool but demonstrate the difficulty of moving towards a formative-assessment process within the context of high-stakes accountability.
