Session Title: Cultural Competency in Evaluation: Discussion of the American Evaluation Association's Public Statement on the Importance of Cultural Competence in Evaluation
Think Tank Session 242 to be held in Lone Star A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Presidential Strand
Presenter(s):
Cindy Crusto, Yale University, cindy.crusto@yale.edu
Discussant(s):
Katrina Bledsoe, Walter R McDonald and Associates Inc, katrina.bledsoe@gmail.com
Karen E Kirkhart, Syracuse University, kirkhart@syr.edu
Elizabeth Whitmore, Carleton University, ewhitmore@connect.carleton.ca
Jenny Jones, Virginia Commonwealth University, jljones@vcu.edu
Katherine A Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Abstract: This highly interactive think tank is the capstone event in the Association’s member review of the AEA Public Statement on the Importance of Cultural Competence in Evaluation. The Statement is the result of five years of work by an AEA task force in consultation with a range of experts and members. The think tank discussion will help shape the final version of this statement and create a vision for moving toward wider attention to culture in evaluation. After a brief introduction to the statement and its history, participants will be asked to provide feedback on the statement and create a vision of how best to support evaluators in translating it into action. A technique known as graphic facilitation will be used along with traditional recording of the discussions to capture the main concepts and ideas generated. (Graphic facilitation is a tool for capturing and organizing a group’s ideas with pictures.)

Session Title: Improving the Practice of Theory-driven Evaluation: Understanding the Role of Stakeholders and Context in Evaluation Settings
Multipaper Session 243 to be held in Lone Star B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Cindy Gilbert,  United States Government Accountability Office, gilbertc@gao.gov
Uda Walker,  Gargani + Company, uda@gcoinc.com
Learning for and From Change: Application of Theory of Change based Monitoring and Evaluating (M&E) to Fast-Tracking Capacities of African Women Scientists in Agricultural Research and Development
Presenter(s):
James Kakooza, African Women in Agricultural Research and Development, j.kakooza@cgiar.org
Margaret Kroma, African Women in Agricultural Research and Development, m.kroma@cgiar.org
Zenda Ofir, African Evaluation Association, zenda@evalnet.co.za
Abstract: Learning and capacity building for transformative change in the African Women in Agricultural Research and Development (AWARD) project is the focus of this paper. First, it describes the project’s overall aims, objectives and outcomes as creating a reliable pool of scientifically proficient, visible women scientists with strong leadership capacities who can play influential roles in African agricultural research and development. The paper then presents a critical interpretive analysis of measuring learning in the context of diverse stakeholdership. Through an innovatively developed Theory of Change for the AWARD project, the paper answers the question, “How are the intended outcomes, in terms of targeted behavioural changes, realised while learning from the changes themselves?” It does so by articulating AWARD’s innovative M&E hybrid model, which takes account of the multi-faceted and complex character of the project’s intervention and puts behaviour change outcomes at the centre of its efforts. It concludes that ideal situations do not hold and that, for desired learning to occur, there must be in-built flexibility to allow for mediation between the real and the ideal situations.
Theory-based Stakeholder Evaluation
Presenter(s):
Morten Balle Hansen, University of Southern Denmark, mbh@sdu.sam.dk
Evert Vedung, Uppsala University, evert.vedung@ibf.uu.se
Abstract: This paper introduces a new approach to program theory evaluation called Theory-based Stakeholder Evaluation (TSE model). Most theory-based approaches are program-theory driven and some are stakeholder-oriented as well. Practically all of the latter fuse the program perceptions of the various stakeholder groups into one unitary program theory. Our TSE model keeps the program theories of the stakeholder groups apart from each other and from the program theory embedded in the institutionalized intervention itself. This represents, we argue, an important clarification and extension of the standard theory-based evaluation. The TSE model is elaborated in order to enhance theory-based evaluation of interventions characterized by conflicts and competing program theories.
Issues on Social Science Theory Versus Stakeholder Theory-based Interventions: Lessons Learned From Evaluating an Environmental Tobacco Smoke Intervention Program
Presenter(s):
Huey Chen, Centers for Disease Control and Prevention, hbc2@cdc.gov
Nannette Turner, Mercer University, turner_nc@mercer.edu
Abstract: Social science theory and stakeholder theory are two major sources of program theory. Researchers prefer social science theory, arguing that interventions based upon social science theories are rigorous and more likely to be effective. In contrast, stakeholders prefer stakeholder theory, because interventions based upon stakeholder theory reflect the real world, fit local needs, and are more likely to work in local situations. However, few empirical studies are available to substantiate these arguments. The proposed paper illustrates how an evaluation of a community-based, environmental tobacco smoke prevention program provides empirical information to address these issues, since the program contains interventions based on both social science and stakeholder theories. By systematically comparing the implementation and outcomes of these two types of interventions, the evaluation provides an empirical understanding of their pros and cons and contributes to further advancing program theory.
Evidence-based Interventions in the Context of Program Evaluation: A Critique and Alternative Perspective
Presenter(s):
Huey Chen, Centers for Disease Control and Prevention, hbc2@cdc.gov
Paul Garbe, Centers for Disease Control and Prevention, plg2@cdc.gov
Abstract: Advocates of evidence-based interventions forcefully urge practitioners to apply, in their practice, interventions whose impact has been demonstrated through RCTs (efficacy evaluations). The pros and cons of RCTs and evidence-based interventions have been intensively debated among evaluators. However, the logic steps that link evidence-based interventions to real-world applications have not been systematically identified and empirically examined. This paper uses the conceptual framework of program theory to identify a set of logic steps underlying the evaluation of evidence-based interventions. The truthfulness of these logic steps is empirically assessed using evidence from evaluations of asthma and other health promotion programs. Lessons learned from these evaluations indicate that current evidence-based interventions focus mainly on maximizing internal validity and neglect real-world implementation issues such as viable validity and external validity. The paper proposes a comprehensive perspective on evidence with three components (viability, effectiveness, and generalizability) and a bottom-up approach to systematically address issues in these three components.

Session Title: What We Don’t Say Can Hurt Us: Working with Undiscussables
Think Tank Session 244 to be held in Lone Star C on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Alexis Kaminsky, Kaminsky Consulting, akaminsky@comcast.net
Discussant(s):
Hazel Symonette, University of Wisconsin, hsymonette@odos.wisc.edu
Jennifer Dewey, James Bell Associates, dewey@jbassoc.com
Virginia Dick, University of Georgia, vdick@cuiog.uga.edu
Maggie Dannreuther, Stennis Space Center, maggied@ngi.msstate.edu
Ranjana Damle, Albuquerque Public Schools, damle@aps.edu
Elena Polus, Iowa State University, elenap@iastate.edu
Abstract: Silence--what we don’t, won’t or can’t say--is one means of controlling information, reflection, and action in evaluation. Silences frequently, but not always, grow out of issues related to gender, race, and class. They can happen any time people from different backgrounds, organizations, aptitudes, and interests come together. However, the greater the differences, the more pronounced the challenge to break the silence. How do we help ourselves and our evaluation participants open up spaces to explore the silences that keep us “safe” but stuck? Participants in this session will explore disrupting silences in small groups based on short practice-based vignettes shared by the session’s presenters. Small group discussion will examine: (1) strengths and limitations of different approaches, (2) identification of additional approaches, and (3) identification of critical contextual factors that foster or inhibit movement. The whole group will reconvene to share observations and elucidate questions that merit further attention.

Session Title: Evaluating Social Service Programs for Government and Foundations
Multipaper Session 245 to be held in Lone Star D on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Beth Stevens,  Mathematica Policy Research, bstevens@mathematica-mpr.com
Foundation Requests for Rigorous Evaluation and the Response of Their Community-based Grantees
Presenter(s):
Beth Stevens, Mathematica Policy Research, bstevens@mathematica-mpr.com
Daniel Finkelstein, Mathematica Policy Research, dfinkelstein@mathematica-mpr.com
Jung Kim, Mathematica Policy Research, jkim@mathematica-mpr.com
Michaella Morzuch, Mathematica Policy Research, mmorzuch@mathematica-mpr.com
Cicely Thomas, Mathematica Policy Research, cthomas@mathematica-mpr.com
Abstract: Foundations, government agencies, and the other funders of community-based health and social service programs are increasingly asking grantees to provide evidence that their programs work or at least are making progress towards achieving their goals. But do such programs have the capacity to provide such evidence? Can such organizations, often underfunded and overburdened, generate the rigorous and credible evidence that is now desired? As part of the evaluation of the Local Funding Partners Program (LFP) of the Robert Wood Johnson Foundation (RWJF), we surveyed the eighty-six community-based social service agency grantees in order to gather information on their experience in using evaluation to build evidence that their programs work. This paper reports on whether the projects commissioned evaluations, the forms of technical assistance they received to assist with them, the types of evaluation that were carried out, and how the evaluation results were used.
Evaluating Programs Funded by Government and Delivered by Nonprofits: A Grounded Model for More Accurate and Useful Evaluation of Contracted Social Services
Presenter(s):
Christopher Horne, University of Tennessee, Chattanooga, christopher-horne@utc.edu
Abstract: To better understand the increasingly common evaluation context of nonprofit social service programs provided under government contract, 30 in-depth interviews with a broad range of government and nonprofit administrators were analyzed, following the grounded theory approach, to identify factors that affect respondents' program outcomes. Analysis revealed that some of the most important factors are specific to purchase-of-service contracting and would not typically be captured in conventional program logic models. The most important of these factors can be categorized as components of either the formal or emergent government-nonprofit relationship. If evaluators are to contribute to the improvement and accountability of contracted social services, we should broaden program models to include these key factors, which may otherwise be overlooked. This paper presents one such model and accompanying recommendations, grounded in data, that evaluators may use to better pursue program improvement, accountability, and social betterment goals in evaluations of contracted social service programs.
Successes and Challenges of a Nonprofit Organization’s Effort to Improve Evaluation Quality by Adopting a Client Information System (CIS)
Presenter(s):
Adrienne Adams, Michigan State University, adamsadr@msu.edu
Nidal Karim, Michigan State University, buenalocura@gmail.com
Sue Coats, Turning Point Inc, 
Sallay Barrie, Michigan State University, sallayb08@gmail.com
Nkiru Nnawulezi, Michigan State University, nkirunnawulezi@gmail.com
Cris Sullivan, Michigan State University, sulliv22@msu.edu
Katie Gregory, Michigan State University, katieanngregory@gmail.com
Abstract: This paper presents the findings from pre, post, and 1-year follow-up interviews conducted with the staff of a large non-profit organization that implemented a CIS to improve the quality of the information they collect and utilize to meet their extensive internal and external evaluation needs. In this presentation, we will describe staff expectations going into the implementation process and how the outcomes matched or diverged from that vision. We will also discuss the impact of the CIS on how staff do their work and on their perceptions of the utility of the information collected. Finally, we will share the challenges posed by contextual forces such as limited resources and staff turnover, as well as share important lessons learned that could benefit other non-profit organizations considering or actively implementing their own CIS.

Session Title: Quantitative Methods Theory and Design TIG Business Meeting and Presentation: What is New in Multiple Comparison Procedures
Business Meeting Session 246 to be held in Lone Star E on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
TIG Leader(s):
Patrick McKnight, George Mason University, pmcknigh@gmu.edu
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Karen Larwin, University of Akron, Wayne, drklarwin@yahoo.com
Raymond Hart, Georgia State University, rhart@gsu.edu
Chair(s):
Dale Berger, Claremont Graduate University, dale.berger@cgu.edu
Presenter(s):
Roger Kirk, Baylor University, roger_kirk@baylor.edu
Abstract: Prof. Kirk is a renowned author in statistics and experimental design. He will trace the development of multiple comparison procedures from Fisher's work through current research on the false discovery rate.

Session Title: Internal Evaluation TIG Business Meeting and Presentation: A Decade of Internal Evaluation in One School District - How Times Change
Business Meeting Session 247 to be held in Lone Star F on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Internal Evaluation TIG
TIG Leader(s):
Boris Volkov, University of North Dakota, bvolkov@medicine.nodak.edu
Wendy DuBow, National Center for Women and Information Technology, wendy.dubow@colorado.edu
Presenter(s):
Jean A King, University of Minnesota, kingx004@umn.edu

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Grad Students on Grad Students: Evaluating Peers in a Professional Context
Roundtable Presentation 248 to be held in MISSION A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Matthew Linick, University of Illinois at Urbana-Champaign, mlinic1@gmail.com
Marjorie Dorime-Williams, University of Illinois at Urbana-Champaign, dorime1@illinois.edu
Seung Won Hong, University of Illinois at Urbana-Champaign, hong29@illinois.edu
Abstract: The College of Education at a major Research I university will be holding its first college-wide graduate student conference this spring. The conference was developed and organized by an interdisciplinary planning committee composed of College of Education graduate students. This committee commissioned a separate team of graduate students to complete a formative and summative evaluation of the conference. Conducting an evaluation of a fellow graduate student initiative in the same college created an interesting context for the evaluation team that offered unique challenges and opportunities. As graduate students and beginning evaluators, we struggled with assumptions about our expertise and roles as evaluators, as well as with negotiating a professional evaluator-client relationship with our peers, friends, and colleagues.
Roundtable Rotation II: A Student-Generated Collaborative Approach to Developing New Evaluator Competencies
Roundtable Presentation 248 to be held in MISSION A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Jason Black, University of Tennessee, Knoxville, jblack21@utk.edu
Pam Bishop, University of Tennessee, Knoxville, pbaird@utk.edu
Shayne Harrison, University of Tennessee, Knoxville, sharrison1976@comcast.net
Susanne Kaesbauer, University of Tennessee, Knoxville, skaesbau@utk.edu
Thelma Woodard, University of Tennessee, Knoxville, twoodar2@utk.edu
Abstract: The purpose of this discussion is to provide an effective method for improving new evaluator skills through collaborative efforts between new and advanced graduate students. Traditional academic models for training in evaluation often include coursework, simulations, role-play, and a practicum (Trevisan, 2004). In some programs, evaluation students are taught evaluation fundamentals and simultaneously required to conduct evaluation projects independently from start to finish during the first year of graduate school. Although new evaluators may be knowledgeable about evaluation competencies, the knowledge, skills, and abilities involved in conducting an entire evaluation are often beyond their expertise. Nadler and Cundiff (2009) assert that these skills cannot be sharpened through academic-based training alone. In this roundtable discussion, students will discuss alternative qualitative and quantitative approaches to the same evaluation problem as a collaborative approach to improving their evaluator competencies.

Session Title: Issues and Models: Evaluating Universal Design for Learning
Panel Session 249 to be held in MISSION B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Bob Hughes, Seattle University, rhughes@seattleu.edu
Abstract: Universal Design for Learning (UDL) is a framework that provides principles to guide teaching practices and curriculum development. Both P-12 and higher education UDL projects are being developed through grants and initiatives to improve materials and teaching practices that support a range of learner needs. UDL has been formally evaluated since its development in the mid-1990s. However, this work has been completed by a few organizations, and the results are often reported to funding agencies without general distribution. This panel will bring together four evaluators with varied levels of experience with UDL. Panelists will describe their work designing or implementing both higher education and P-12 UDL evaluations; how they have approached the confluence of UDL principles and the measurement of learning outcomes; and how they have evaluated UDL’s effects on teacher behaviors and perspectives, as well as on the organizational issues that shape instruction.
Assessment of Learning in Universal Design for Learning
Tracey Hall, Center for Applied Special Technology, thall@cast.org
Assessment is one component of instruction and is inextricably linked to three others: goals, methods, and materials. Tracey will describe how UDL informs assessment at all levels, from classroom progress monitoring to summative assessments. Tracey Hall’s research and project work focuses on alternative assessment and instructional design grounded in effective teaching practices. These experiences are applied in the development and implementation of UDL projects, collaborative partnerships, and professional presentations. She directs CAST’s initiatives to create and evaluate digitally supported environments across content areas. Her work in assessment spans the areas of curriculum-based measurement, teacher professional development, special-needs instruction and curriculum design, progress monitoring, and large-scale assessments. Tracey will bring to the panel experience from work on assessments across the K-12 grades and a range of abilities and disabilities, including alternate and modified assessments. She will share information from national, state, and local education efforts in assessment influenced by UDL principles.
Logic Model and Evaluation Plan of a Universal Design for Learning Project at the University of Vermont
David Merves, Evergreen Educational Consulting LLC, david.merves@gmail.com
EEC research associates are the external evaluators for an OSEP-funded project, Universal Design for Learning at the University of Vermont: “Supporting Faculty to Teach All Students: A Universal Design Consulting Team Model.” The project provides: technical assistance and support to UVM faculty; creation of web-based resources for promoting and supporting UDL practices in UVM courses, student life and services; orientation in UDL for new faculty and graduate teaching fellows; infusion of UDL practices in faculty professional development and student support systems; and identification and development of a UVM UDL resource guide for faculty, students and visitors. David Merves will review the logic model and evaluation plan and discuss formative outcomes and data for the project. EEC’s business focus is on education and evaluation, primarily in the area of special education program evaluation and professional development.
Evaluating the Impact of Universal Design for Learning in Early Childhood Education
Donald Smith, Texas Early Childhood Education Coalition, docii4096@gmail.com
Many at-risk children enter the education system and fall into the ‘margins.’ The needs of these at-risk children require that teachers not only recognize the concerns of the child but also utilize creative teaching strategies that can engage the student in the education process. UDL has great potential to benefit these high-risk children from chaotic environments. Questions that arise from consideration of the theory of UDL include: 1) Can these children show improvement in their academic performance and behaviors? 2) Can this approach result in improved classroom conditions? 3) What are the potential ramifications and benefits? 4) What is the overall potential of this teaching model for these children? Dr. Smith has spent more than a decade examining factors that can influence the growth, development and education of young children, designing and evaluating programs, conducting research and developing policy that impacts children from high-risk situations.
Addressing the Complexity: A Model for Designing Universal Design for Learning Evaluations
Bob Hughes, Seattle University, rhughes@seattleu.edu
The process for evaluating a UDL project is familiar to any evaluator: identifying the evaluand; determining the goals, objectives, and measures of a project; and working with multiple stakeholders. Evaluating UDL also requires determining how its core principles are implemented, how the implementation impacts learning, and how shifts in practice impact instructors and the systems in which they operate. This requires evaluators to be familiar with UDL, with the intended learning outcomes within a given project, and with how instructional practices and systems evolve. Bob Hughes teaches both an evaluation methods course and a course in implementing UDL in adult learning. Over the past 15 years, he has conducted multiple evaluations of UDL projects within family literacy, K-12 classrooms, universities and community colleges. Based on this work, he will offer a matrix that will assist evaluators of UDL projects to address the multiple complexities of designing an evaluation.

Session Title: Systems Theories in Evaluation Planning: Differentiating Planning Process from Evaluation Plan
Think Tank Session 250 to be held in BOWIE A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Claire Hebbard, Cornell University, cer17@cornell.edu
Discussant(s):
William M Trochim, Cornell University, wmt1@cornell.edu
Thomas Archibald, Cornell University, tga4@cornell.edu
Monica Hargraves, Cornell University, mjh51@cornell.edu
Abstract: Systems evaluation has many definitions, but some common themes include identifying multiple stakeholder perspectives, viewing a program as a dynamic system nested within larger systems, and defining a program’s boundaries. Many evaluators have identified how a systems approach to evaluation may affect the process of planning an evaluation. But is that planning process revealed in the evaluation plan itself? Is there a way to differentiate a systems-based evaluation plan from plans created through other approaches? We will outline some of our thoughts and struggles regarding assessing evaluation plans developed by programs going through our Systems Evaluation Protocol, then facilitate one or more workgroups to identify and discuss developing a rubric for identifying systems-based evaluation plans.

Session Title: Race, Class, and Power: Bringing The Issues Into Discussion and Evaluation
Panel Session 251 to be held in BOWIE B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Dawn Smart, Clegg & Associates, dsmart@cleggassociates.com
Abstract: Part of the jigsaw puzzle in making change in communities is the effect of race, class and power differences among groups. Historical and present-day inequities translate into significant barriers to participation in decision-making affecting people’s lives. Bringing these issues into the conversation can be difficult and uncomfortable, but without direct attention to them, community change efforts often stall. Evaluating race, class and power — what these issues look like in their community context, how people feel about them, and how the dynamics may change over the course of an initiative — is equally challenging. Putting race, class and power on the discussion table is a first step. Learning together what is relevant and how to measure it is a second. Examining the findings and interpreting their meaning is a third. This session will look at the efforts of two national organizations engaged in this process and provide a forum for further exploration.
Race, Class, and Power in Rural Development
Tracey Greene-Dorsett, National Rural Funders Collaborative, tracey@nrfc.org
With its focus on expanding resources for families and communities in regions of persistent poverty, the National Rural Funders Collaborative recognizes that the work of building rural economies also entails confronting structural barriers that foster racial disparities and discrimination. Transforming poor rural communities into viable living environments ultimately requires creation of a rural movement for social and economic equity. Projects funded by NRFC include those in the South, with African American families and communities; the West, with Latino and immigrant families and communities; and the Northern Great Plains, with Native American families and communities. NRFC’s theory of change is based on an understanding of the relationship among local and regional economic strategies, policy development and advocacy, and addressing equity issues. Establishing sustainable measurement systems with their grantees, often small and grassroots organizations, around this theory of change, and specific to local race and power issues, has been an enlightening venture.
Developing Tools to Measure Race, Class and Power
Jessica Anders, NeighborWorks America, janders@nw.org
In 2005, the Mary Reynolds Babcock Foundation supported creation of a set of racial justice indicators and tools for NeighborWorks America’s Success Measures project, using a participatory approach that included practitioner, researcher, and foundation input. Managing the participation of these various parties required difficult decision-making directly related to power issues: whose voice should be privileged in the process if the outcome was to be a set of useful tools designed to improve programs and attract funding? For the racial justice tools, we privileged practitioners’ voices for direction and content and brought in research partners to craft tools that would stand up to rigorous evaluation standards. Originally, the project was to capture outcomes for racial justice organizing, policy and advocacy work. Through the participatory process the emphasis shifted to the issues the groups were directly addressing, racial equity in the classroom and individual racism, rather than a broader reach.
Evaluating Race, Class and Power
Dawn Smart, Clegg & Associates, dsmart@cleggassociates.com
As an evaluation coach for NeighborWorks America’s Success Measures project and for the National Rural Funders Collaborative and its grantees, learning to really listen to and build evaluations from the ideas of organizations working on social justice issues has been a humbling and remarkable experience. Understanding how race, class and power play out for each group and their communities is one aspect of this learning. Figuring out with each group how to measure these issues in ways that are respectful, feasible and meaningful is yet another. I’ve come to recognize how my perceptions of what’s important to measure can sometimes be incorrect, my suggestions for how to measure the issues sometimes inadvisable, and how much I have to learn. But it has been through this learning that my skills as an evaluator have developed and my practice has grown with more opportunities to explore race, class and power through evaluation.

Session Title: Culturally Responsive Evaluation: Three Cases From Aboriginal Peoples and First Nations in Canada
Multipaper Session 252 to be held in BOWIE C on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Andrea LK Johnston,  Johnston Research Inc, andrea@johnstonresearch.ca
Discussant(s):
Andrea LK Johnston,  Johnston Research Inc, andrea@johnstonresearch.ca
The Aboriginal ActNow Health Promotion Initiative: The Role of Context
Presenter(s):
Kim van der Woerd, Reciprocal Consulting, kvanderwoerd@gmail.com
Donna Atkinson, University of Northern British Columbia, datkinson@unbc.ca
Abstract: The health status of First Nations, Métis and urban Aboriginal peoples in British Columbia (BC) is widely recognized as unacceptable and unsustainable compared to BC’s non-Aboriginal population. Aboriginal ActNow is an endeavour to improve personal and community health through a multi-year, partnership-based, community-oriented health promotion initiative. Aboriginal ActNow brought together three organizations (First Nations Health Society, Métis Nation BC, and BC Association of Aboriginal Friendship Centres) to support healthy living projects in communities across BC. This presentation reviews a comprehensive process and outcome evaluation of the Aboriginal ActNow program. Preliminary findings suggest that relationship building is one of the most important factors in the success of Aboriginal-specific health promotion initiatives. Other lessons learned in the development and implementation of the program include the importance of working within a community context, and using wholistic, community-oriented, strength-based approaches that address underlying social determinants of First Nations, Métis and urban Aboriginal health.
Deconstructing and Rebuilding Evaluation Practices With Indigenous Communities: Examples From the Canadian Context
Presenter(s):
Larry K Bremner, Proactive Information Services Inc, larry@proactive.mb.ca
Abstract: This paper will provide a basis for generating discussion among evaluators and others working with Indigenous communities. The paper will begin with a brief presentation grounding the discussion in both evaluation theory and practice. However, the focus will be on effective and promising evaluation practices in First Nations, Aboriginal, and Inuit communities in Canada. The paper will draw upon the author’s experiences working in a range of urban, remote and Northern communities to answer questions such as: How do different world views affect evaluation theory and practice? What appear to be promising approaches for de-colonizing evaluation practice? What are the considerations for Indigenous and non-Indigenous evaluators working with these communities? The paper will conclude with reflections on what can be considered “quality” evaluation in various cultural contexts.
A Journey Toward Understanding Evaluation Quality and Complexity in Aboriginal Communities: A Discussion of Three National Evaluation Initiatives
Presenter(s):
Jill Anne Chouinard, University of Ottawa, jchou042@uottawa.ca
Katherine Moreau, University of Ottawa, kmoreau@cheo.on.ca
Abstract: We have been fortunate in the past year to have had the opportunity to work on three separate evaluation initiatives in Aboriginal communities, in the areas of mental health, youth suicide and educational-based programs. All three evaluation initiatives were based on principles of participatory, community-based methodologies that reflect the cultural context of the communities and recognize the inherent right of Aboriginal people to be agents of research. As such, all three evaluations were designed to involve community members in all stages of the evaluation, from the development of culturally appropriate assessment methods, to data collection, analysis and the interpretation of findings. In this paper, we discuss our experiences in attempting to apply participatory, community-based and culturally appropriate methodologies in these settings, the challenges we faced and the obstacles we navigated through, as well as the lessons that we learned along the way.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Theory of Change Evaluation in the Real World: Lessons Learned from Applying (and Modifying) the TOC Approach in the Evaluation of the Tobacco Policy Change Program
Roundtable Presentation 253 to be held in GOLIAD on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Andrea Anderson-Hamilton, Anderson Hamilton Consulting, andersonhamilton@gmail.com
Abstract: This roundtable will discuss the challenges and lessons learned from applying the Theory of Change approach to the evaluation of the Robert Wood Johnson Foundation’s Tobacco Policy Change Program (TPC), which provided grants to 75 tobacco advocacy coalitions during the 2004 to 2008 grant period. The TPC evaluation was designed to produce lessons for several audiences: the public health field; the philanthropic community; the staff at RWJF; and the grantees themselves. We now understand that this evaluation can offer important lessons to our field as well, particularly around how to modify the commonly understood "theory of change approach" to accommodate the reality of evaluating a program with multiple sites, multiple goals, multiple definitions of success and multiple theories of change operating at different levels.
Roundtable Rotation II: Evaluating Foundation Advocacy Strategies: When Theory and Practice Collide
Roundtable Presentation 253 to be held in GOLIAD on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Catherine Borgman-Arboleda, City University of New York (CUNY), cborgman.arboleda@gmail.com
Rachel Kulick, City University of New York (CUNY), rkulick@brandeis.edu
Abstract: Grantmakers committed to social change are faced with the need to support work that moves beyond short-term policy change to expanding involvement in the process of making change, and more broadly in democracy. Tension often results in practice as program officers attempt to support an ecosystem of work with often different goals, values and timeframes. Based on evaluations conducted for the Robert Wood Johnson Foundation, the Funding Exchange and the Social Science Research Council, we will explore these challenges and discuss how foundation theories of change can inform evaluations and in turn how findings can shift and refine these theories, potentially leading to more effective grantmaking. We will also examine some important factors to consider in the assessment of foundation support of social movement building work, including grantee selection criteria and internal foundation decision-making processes as they relate to building coalitions, network capacities, power sharing, education, and leadership.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Choices of Research and Development (R&D) Evaluation Approaches in Chinese Academy of Sciences (CAS) Institutes: We Reap What We Sow?
Roundtable Presentation 254 to be held in SAN JACINTO on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Research on Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Xiaoxi Xiao, Chinese Academy of Sciences, xiaoxiaoxi@casipm.ac.cn
Changhai Zhou, Chinese Academy of Sciences, chzhou@cashq.ac.cn
Tao Dai, Chinese Academy of Sciences, daitao@casipm.ac.cn
Abstract: The choice of R&D evaluation approach is a major decision for a research institute’s managers, one that reflects their orientation for the institute’s future. One question then arises: will the institute develop according to that orientation? Or, as the saying goes, do we reap what we sow? We chose institutes A and B from CAS as two cases. The two institutes were both initially founded on basic research, but in recent decades their evaluation approaches gradually diverged, with institute A forming an applied-research orientation and institute B a pure basic-research orientation. In this empirical study, we first retrospectively identify the management orientations reflected in the evaluation approaches and the major achievements of the two institutes, using expert seminars, text coding, and historical analysis; we then conduct a correlation analysis between the orientations and the achievements to explore the question above.
Roundtable Rotation II: Producing Evidence of Effectiveness Data in the Real World of Early Childhood Education
Roundtable Presentation 254 to be held in SAN JACINTO on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Research on Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Cindy Lin, HighScope Educational Research Foundation, clin@highscope.org
Marijata Daniel-Echols, HighScope Educational Research Foundation, mdaniel-echols@highscope.org
Abstract: In this session, the presenters will lead a discussion on the question, “How can early childhood education program evaluators balance the demand for data about what works with real-world issues within the field that present challenges to study design and data analysis?” The current policy focus on investments in education has increased funding for early childhood education. Part of these new and increased funding streams is a requirement to collect information on evidence of improvements in children’s school readiness. During the roundtable session, the presenters will use 15 years of experience evaluating Michigan’s state-funded preschool program for at-risk four-year-olds to discuss the evaluation design and data analysis challenges of producing accountability data for early childhood programs. Current work using both a regression discontinuity design and a quasi-experimental design with two-level hierarchical linear modeling, drawing on a statewide random sample of programs, will be the basis of the discussion.

Session Title: Assessing Student Learning Outcomes II: Three Sides of the Coin
Multipaper Session 255 to be held in TRAVIS A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Audrey Rorrer,  University of North Carolina at Charlotte, arorrer@uncc.edu
Discussant(s):
Jeanne Hubelbank,  Independent Consultant, jhubel@evalconsult.com
Assessing Student Engagement in the Classroom: Pilot Studies of an Instrument for Studying Student Engagement and Its Determinants in Higher Education
Presenter(s):
Rick Axelson, University of Iowa, rick-axelson@uiowa.edu
Arend Flick, Norco College, arend.flick@rcc.edu
Abstract: Student engagement is widely regarded as an essential condition for student learning. And, thanks to the availability of instruments like the National Survey of Student Engagement (NSSE) and the College Student Experiences Questionnaire (CSEQ), many institutions have undertaken studies to assess their students’ level of involvement in effective learning behaviors and practices. Although such research provides useful benchmarks regarding the prevalence of engagement, it generally does not provide the detailed diagnostic information instructors need to make courses more engaging for their students. To fill this gap, we are developing and testing a classroom-focused engagement survey instrument to be administered to students. The survey items are designed to assess students’ cognitive, affective, and behavioral engagement as well as aspects of the class that inhibit or enhance these components of engagement. In this session, we will discuss results from pilot studies conducted to test the instrument’s usefulness.
End of Course Evaluations: Enhancing the Curriculum Evaluation Process Through Faculty Feedback
Presenter(s):
Katherine Shaw, Westwood College, katherineshaw@yahoo.com
Abstract: Typical end of course evaluations provide students the opportunity to reflect on the design of their course and the effectiveness of their faculty. The evaluations are often one of many criteria in continuous improvement efforts. For the purposes of informing decision making, the addition of a formal faculty feedback process further enhances the review and evaluation of courses. Although students are told that course activities should contribute to the desired learning outcomes, it is the faculty who have a better understanding of the relationships between curriculum features and the desired outcomes. The Westwood College End of Course Evaluation Form for Faculty was developed to fill an identified gap in the program evaluation process. Aspects of courses are standardized across Westwood’s 18 campuses, which presents unique challenges as well as advantages when conducting curriculum and program review. The broad benefits of developing online end of course evaluation forms for faculty are discussed.
Organizational Readiness for Outcomes Assessment in Higher Education
Presenter(s):
Yukiko Watanabe, University of Hawaii, Manoa, yukikow@hawaii.edu
Abstract: Accrediting bodies in higher education advocate that student learning outcomes assessment should be part and parcel of organizational structure and regular function, yet a culture of assessment use and learning is rare (Banta, 2002). This paper presents the results of a multiple case study that explores organizational readiness factors (e.g., infrastructure for assessment and decision-making, preconceived notions of assessment, etc.) that enhance or hinder engagement in outcomes assessment in college programs. Organizational factors were elicited via an assessment capacity and perception survey, assessment planning meetings, and interviews with the chairs/directors. Findings indicate that pro-active leadership, a history of and mechanisms for collaboration across faculty levels and domain boundaries, and intellectual connectedness with assessment impact the way faculty engage in assessment.

Session Title: Visualizing Data for Strategic Planning
Multipaper Session 256 to be held in TRAVIS B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Metta Alsobrook,  University of Texas, Dallas, metta.alsobrook@utdallas.edu
Creating a Comprehensive Dashboard for Strategic Planning: Inception to Implementation
Presenter(s):
Jennifer Reeves, Nova Southeastern University, jennreev@nova.edu
Barbara Packer-Muti, Nova Southeastern University, packerb@nova.edu
Candace Lacey, Nova Southeastern University, lacey@nova.edu
Abstract: This session will illustrate how a large, non-profit university conceptualized and created a comprehensive Dashboard to house various data points across the university. Nova Southeastern University (NSU) has 14 different academic colleges awarding undergraduate, graduate, and professional degrees. The University Provost was interested in creating a tool that would assist central administration with strategic planning. He envisioned this tool as a single source of data that would incorporate multiple data sources throughout the university. These sources would include enrollments, Quality Enhancement Plan (QEP) benchmarks, constituent engagement scores, academic program review, assessment of student learning outcomes, accreditation, budget and financial information, and success rates. A small research team was charged with visioning and creating a tool that would incorporate these data. The result was a comprehensive online dashboard. The presentation will discuss the design and implementation of this project including a demonstration of a sample dashboard.
Listserv Activity as an Indicator of Vitality and Growth in International Community of Practice Networks
Presenter(s):
Stacey Friedman, Foundation for Advancement of International Medical Education & Research, staceyfmail@gmail.com
Page Morahan, Foundation for Advancement of International Medical Education and Research, pmorahan@faimer.org
William Burdick, Foundation for Advancement of International Medical Education and Research, wburdick@faimer.org
Deborah Diserens, Foundation for Advancement of International Medical Education & Research, ddiserens@faimer.org
Avinash Supe, Georgia State Medical College, avisupe@gmail.com
Tejinder Singh, Christian Medical College, cmcl.faimer@gmail.com
Thomas Chacko, PSG Institute of Medical Sciences and Research, drthomasvchacko@gmail.com
Eliana Amaral, Universidade Estadual de Campinas, amaraleli@gmail.com
Henry Campos, Universidade Federal do Ceará, camposh2002@yahoo.com.br
Vanessa Burch, University of Cape Town, vanessa.burch@uct.ac.za
Gboyega Ogunbanjo, University of Limpopo, gao@intekom.co.za
Abstract: Listservs are a common mode for communication among individuals with shared interests. Listserv activity data can provide useful insight into the extent of sustained engagement, growth, and vitality of online communities over time. Utility of findings is enhanced by analyses aligned with program structure/goals and contextual interpretation of data. Planning for listserv data storage, retrieval, and analysis resources is critical. The FAIMER Institutes consist of six fellowship programs for health professions faculty from developing regions of the world. Listservs (one per program) are central, serving as the primary conduit for distance learning and forums for collaboration, social support, and information-sharing among program graduates, current Fellows, and program faculty. Examples of listserv analyses include trajectories of activity and engagement over time, homo/heterogeneity of the poster population, production function, and listserv cross-posting. Qualitative data, including both content of posts and user perceptions of the listserv, can offer a useful complement to listserv activity data.
Public Comprehension of Published Data: Technology to the Rescue?
Presenter(s):
Paul Lorton Jr, University of San Francisco, lorton@usfca.edu
Abstract: There is a need for the public to monitor the efficacy of what the public funds. This certainly was the view in 1989 when the voters of California passed Proposition 98, the Classroom Instructional Improvement and Accountability Act. Since the passage of Proposition 98, every public K-12 school in California has published and distributed a School Accountability Report Card (SARC) as hard copy sent home to parents and, as the revised law compelled, made it available on the internet. Did the use of this technology (i.e., the internet and the infrastructure it requires) realize its purpose: to inform? What information is transmitted via the technology, and how easily can it be used? Given this mandate, does the public know how its schools are performing, and can it make informed judgments about directions and support to be given to its public schools? This presentation is focused on answering these questions.

Session Title: Measuring the Immeasurable: Lessons for Building Grantee Capacity to Evaluate Hard-to-assess Efforts
Panel Session 257 to be held in TRAVIS C on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Lande Ajose, BTW Informing Change, lajose@informingchange.com
Abstract: Nonprofit organizations, especially those engaged in policy and advocacy, play a critical role in advancing fundamental social reform, but too often they are limited by their capacity to reflect on whether and how they are making progress towards their goal. This session will present the highlights from the newly published monograph Measuring the Immeasurable: Lessons for Building Grantee Capacity to Evaluate Hard-to-Assess Efforts, which details the William and Flora Hewlett Foundation’s innovative efforts to design an evaluation process that would generate a high sense of ownership for grantees and high engagement in the evaluation process, with the goal that the evaluation data would have a better chance of being used for organizational improvement. Hear from the Hewlett Foundation, an evaluator and a grantee about their perspectives on how this approach to evaluation capacity building, and a newly developed tool, enables grantee organizations to reflect on and improve their work.
The Hewlett Foundation’s Education Program Grantee Evaluation
Kristi Kimball, William and Flora Hewlett Foundation, kkimball@hewlett.org
Jennifer Curry Villeneuve, BTW Informing Change, jvilleneuve@btw.informingchange.com
Lande Ajose, BTW Informing Change, lajose@informingchange.com
Kim Ammann Howard, BTW Informing Change, kahoward@btw.informingchange.com
This presentation will provide background on the Hewlett Foundation and how it became involved in enhancing grantees’ capacity to evaluate their work and to collect more meaningful data. The presentation will include the Foundation’s goals, the experience of selecting a group of evaluators to work with portfolio grantees, and the benefits and challenges of the process, including the ways in which it resulted in more meaningful reporting of data to the Foundation. The presentation will also include the Foundation’s lessons for working with outside evaluators, and some considerations for how to support evaluation sustainability in nonprofit organizations.
Evaluation Capacity Building: A Grantee’s Perspective
John Affeldt, Public Advocates, jaffeldt@publicadvocates.org
Building on the first presentation, this grantee, who has received evaluation capacity building support, will describe Public Advocates’ work and speak about how the evaluation support has improved the organization’s ability to track outcomes, adjust its plans, and improve its impact over time.
Measuring the Immeasurable: Lessons for Building Grantee Capacity to Evaluate Hard-to-assess Efforts
Lande Ajose, BTW Informing Change, lajose@informingchange.com
Kim Ammann Howard, BTW Informing Change, kahoward@btw.informingchange.com
Ellen Irie, BTW Informing Change, eirie@btw.informingchange.com
Lande Ajose will present the ten key lessons (from Measuring the Immeasurable: Lessons for Building Grantee Capacity to Evaluate Hard-to-Assess Efforts) that funders, evaluators and grantees ought to be attentive to when engaged in an evaluation capacity-building effort based on the monograph developed by BTW, and discuss the evaluation capacity diagnostic tool that was developed to help grantees get started with evaluation.

Session Title: Evaluation in Action: A Sampler of Tracking and Timing Methodologies in Museums, Culturals, and Informal Education Settings
Panel Session 258 to be held in TRAVIS D on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Kathleen Tinworth, Denver Museum of Nature & Science, kathleen.tinworth@dmns.org
Discussant(s):
Kathleen Tinworth, Denver Museum of Nature & Science, kathleen.tinworth@dmns.org
Abstract: Following a successful and well-attended panel session in 2009, members of the Visitor Studies Association (VSA) will return to AEA to showcase a variety of studies used to evaluate the visitor experience in museum and cultural settings. In particular, tracking and timing methodologies will be presented, illustrating the utility of this approach across disciplines. While it is not necessary to attend both sessions, this panel complements a proposed demonstration session where several tracking and timing methodologies will be exhibited. Participants will have the opportunity both to hear case studies where tracking and timing was utilized in a visitor studies setting and to experiment first-hand with collecting and analyzing the resulting data and applying it to their own work.
Tracking Among the Ruins: Informing Interpretive Planning at Eastern State Penitentiary
Cheryl Kessler, Independent Consultant, kessler.cheryl@gmail.com
Cheryl Kessler, Independent Consultant in informal learning environments, VSA Board member, and former Research Associate with the Institute for Learning Innovation (ILI) will present a timing and tracking study conducted by ILI for the Eastern State Penitentiary Historical Site. The purpose of the study was to help inform the master and interpretive planning process for the site, which originally opened to the public in 1994. Using a detailed map of the site, Kessler and other ILI researchers followed 70 visitors through the site noting the overall time the visitor spent at the site and in what specific areas or cell blocks; the path taken around the site – what was attended, what was skipped; visitors’ level of engagement at audio stops, images, displays, and interpretive materials; and the frequency of social interactions observed between visitors or between visitors and staff members. ILI researchers coded the maps, and entered and analyzed the data using SPSS.
Reflective Tracking: When It’s Simply too Large
Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
Tracking provides a powerful tool for understanding visitor engagement, activity, and interest. Tracking has historically been used for specific exhibits or special areas, but it also serves a full institution well, as it is over the whole of a visit that visitors reveal how they fully engage. Many facilities are simply too large to allow for full-visit tracking in an efficient, cost-effective, and appropriate manner. One approach developed for zoos and aquariums, and since applied to historical museums, science centers, and others, is Reflective Tracking, in which the questions guiding the study relate to decision processes, engagement, disengagement, interest, and social interactions and role changes. This paper will present the constructs of reflective tracking (focus on an object other than the evaluator; dialogue versus question/answer; guiding questions), results of applications, and criticisms of the tools. Dr. Joe E. Heimlich, the developer of the approach, will present this session.
Unspoken Narratives: What Visitor Behavior Reveals About Exhibit Usage and Selection
Carey Tisdal, Tisdal Consulting, ctisdal@sbcglobal.net
Carey Tisdal, Director of Tisdal Consulting and Visitor Studies Association Board Member, will present findings from the remedial and summative evaluation of Star Wars: Where Science Meets Imagination. This National Science Foundation funded project developed a national traveling exhibition on science and technology themes depicted in the Star Wars movies. Data were collected at two sites, the Museum of Science, Boston, and COSI in Columbus, Ohio. Tracking and timing observations with matched exit interviews (of the same respondents) provided data sets from which findings about the reasons for exhibit usage and selection could be developed. For example, step-wise multiple regression was used to identify patterns in visitors’ choice and use of exhibit components that influenced visitor satisfaction. In addition, explanations for adults’ non-use of interactives were identified. Data sets that included time data, visitor demographics, and visitor perceptions provided a richer opportunity for understanding choices and behaviors than either method alone.
Analysis and Visualization of Timing and Tracking Data: Examples From Two Exhibit Evaluations
Cláudia Figueiredo, Institute for Learning Innovation, figueiredo@ilinet.org
Cláudia Figueiredo, Research Associate with the Institute for Learning Innovation (ILI) and member of the VSA 2010 Conference Committee, will present examples of how timing and tracking data can be analyzed and presented. These examples will be drawn from evaluations done at two exhibits, the Sant Ocean Hall (National Museum of Natural History) and Skyscraper! (Liberty Science Center). They will show some of the common ways to present timing and tracking information, including number of areas visited, time spent, path, and visitor behavior, as well as illustrate ways these data can be visualized through heat maps.
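As a minimal illustration of the heat-map visualization mentioned above, the sketch below aggregates hypothetical dwell times over a grid of exhibit-floor zones and renders them with matplotlib; the grid, values, and labels are assumptions, not data from either exhibit.

    import numpy as np
    import matplotlib.pyplot as plt

    # Rows/columns represent zones of an exhibit floor plan; values are the
    # mean seconds visitors spent in each zone, aggregated from tracking maps.
    rng = np.random.default_rng(1)
    dwell_seconds = rng.gamma(shape=2.0, scale=30.0, size=(6, 8))

    fig, ax = plt.subplots(figsize=(6, 4))
    im = ax.imshow(dwell_seconds, cmap="hot", interpolation="nearest")
    ax.set_title("Mean dwell time by exhibit zone (hypothetical data)")
    ax.set_xlabel("Floor-plan column")
    ax.set_ylabel("Floor-plan row")
    fig.colorbar(im, ax=ax, label="Seconds")
    plt.show()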

Session Title: Implementing Quality Randomized Control Trials in Human Service Evaluations: Applications Addressing Challenges and Barriers
Multipaper Session 259 to be held in INDEPENDENCE on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Human Services Evaluation TIG and the Quantitative Methods: Theory and Design TIG
Chair(s):
Todd Franke,  University of California, Los Angeles, tfranke@ucla.edu
Discussant(s):
Tania Rempert,  University of Illinois at Urbana-Champaign, trempert@illinois.edu
Determining the Impact of Communities In Schools, Inc (CIS) Programs: Student-level Randomized Controlled Trials
Presenter(s):
Heather Clawson, ICF International, hclawson@icfi.com
Allan Porowski, ICF International, aporowski@icfi.com
Felix Fernandez, ICF International, ffernandez@icfi.com
Christine Leight, ICF International, cleicht@icfi.com
Abstract: Communities In Schools, Inc. (CIS) is a nationwide initiative to connect community resources with schools to help at-risk students successfully learn, stay in school, and prepare for life. CIS is currently in the midst of a comprehensive, rigorous five-year national evaluation, culminating in a two-year multi-site randomized controlled trial (RCT) to ascertain program effectiveness. In this presentation, we will draw from our experience working with public schools in Austin, TX, Wichita, KS, and Jacksonville, FL, and present our overall study design and the process involved in conducting a student-level RCT. We will also present the final results of the multi-year, multi-cohort RCT studies.
Perils, Pitfalls, and Successes: The Implementation of a Randomized Controlled Trial (RCT) to Examine the Effects of Alternative Response in Child Welfare
Presenter(s):
Madeleine Kimmich, Human Services Research Institute, mkimmich@hsri.org
Linda Newton-Curtis, Human Services Research Institute, lnewton@hsri.org
Abstract: A randomized controlled trial (RCT) was used to examine the effects, across multiple sites in Ohio, of an Alternative Response to the traditional investigative approach taken when a report of child maltreatment is made. This presentation opens with a short discussion of the ethics involved in using RCTs when evaluating interventions in the child welfare system, followed by a discussion of the experiences and lessons learned in the development of our RCT implementation. Key aspects include an overview of the challenges associated with establishing an RCT within a context where sites have a localized service delivery system and yet random assignment is centralized. Our presentation will also address the problems (and solutions) we have found in monitoring the screening process leading to randomization, tracking changes after random assignment, establishing environmental comparability (surrounding the intervention), and making the whole process as user-friendly, simple, and straightforward as possible.
Impact Evaluation for Youth Empowerment: Lessons From a Systematic Review and a Randomized Controlled Trial
Presenter(s):
Matthew Morton, University of Oxford, matthew.morton@gtc.ox.ac.uk
Abstract: Practitioners and policy-makers increasingly promote youth empowerment as a strategy to improve a range of psychosocial outcomes. Yet, little is known about the demonstrated impacts that participatory programming has on young people. This paper summarizes the state of impact evaluation for youth empowerment programs based on an international systematic review. Considerations for the design and implementation of an impact evaluation of youth empowerment are illustrated through the example of a randomized controlled trial in Jordan. The trial assessed the effects of a participatory education program for out-of-school youth. The case illustrates the merits of stakeholder involvement—including youth—to strengthen evaluation, as well as the importance of complementary process research to improve the role of impact evaluation in facilitating organizational learning.

Session Title: Including Everyone: Lesbian, Gay, Bisexual, Transgender, People of Color, and Double Winners
Multipaper Session 260 to be held in PRESIDIO A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
Kathleen McKay,  Connecticut Children's Medical Center, kmckay@ccmckids.org
Discussant(s):
Kathleen McKay,  Connecticut Children's Medical Center, kmckay@ccmckids.org
The Meeting of Minds: Integrating Collaborative and Anthropological Evaluation to Improve the Lives of MSM of Color in Tampa, FL
Presenter(s):
Candace Sibley, University of South Florida, csibley@health.usf.edu
Matthew Hart, University of South Florida, matthewhart@mail.usf.edu
Abstract: This paper examines the efficacy and effectiveness of a Centers for Disease Control and Prevention evidence-based intervention (3MV) aimed at lowering HIV/STI rates among men who have sex with men (MSM) of color (Hispanic and African American). Literature on this population suggests that many MSM of color do not identify as gay and also have disproportionately high rates of HIV/STIs. Although a major study (Wilton et al, 2009) has evaluated the efficacy of the program, this evaluation adds to the corpus of evaluation knowledge through the use of mixed methodology (quantitative surveys and focus groups) and anthropological observation techniques to go beyond merely assessing the efficacy of the program itself. This evaluation will examine participants’ levels of satisfaction with the program and combine participant and staff insights to understand how the program can be improved and more readily adapted to the cultural context of Tampa, FL. Moreover, the combination of collaborative evaluation and anthropological evaluation lends itself well to improving the functionality of evidence-based interventions for marginalized populations.
Engaging the Transformative Paradigm to Assess the HIV Prevention Needs of Young Gay and Bisexual Black Men
Presenter(s):
Robin Lin Miller, Michigan State University, mill1493@msu.edu
Miles McNall, Michigan State University, mcnall@msu.edu
Abstract: A challenge in assessing needs concerns how best to reflect the values of those who will ultimately be the recipients of services. Taking account of the values and perspectives of those whose needs are gauged may improve the relevance of the programs that decision makers determine to support. We applied Mertens’ Transformative Paradigm to design and conduct a mixed-method statewide needs assessment of Black gay and bisexual men, ages 13 to 24. The needs assessment was co-led by a team of six young men, each of whom resided in a distinct, high HIV incidence area of the state. Young men co-directed all phases of the needs assessment, including selection of focus topics, protocol development, interviewer selection and training, data interpretation, and dissemination to decision makers and young men’s communities. The paper will highlight how a transformative perspective and young men’s leadership affected the ways in which their realities were portrayed.
Inclusion of Lesbian, Gay, Bisexual, and Transgendered (LGBT) Stakeholders in Evaluations of Health Initiatives
Presenter(s):
Natasha Wilder, Claremont Graduate University, natasha.wilder@cgu.edu
Michael Szanyi, Claremont Graduate University, michael.szanyi@cgu.edu
Abstract: While demographic data on race, gender, and socio-economic status are routinely considered in evaluations, data on sexual orientation have generally not been included. This study aims to address the extent to which sexual orientation is considered in the context of the evaluation, both while collecting demographic data and while involving stakeholders in the evaluation. A sample of evaluation reports on health initiatives was examined for the absence, presence, or elaboration of questions related to sexual orientation identity and sexual behavior. Additionally, the study examined the level of inclusion of sexual minorities in evaluations, specifically identifying whether LGBT stakeholders were included solely as data sources or involved in the evaluation design and implementation. Knowing the degree to which sexual orientation is or is not addressed in evaluations may inform future evaluation endeavors that aim to embody the tenets of social justice and inclusion.

Session Title: Building, Enhancing, and Sustaining Evaluation Quality for Organizational Learning
Multipaper Session 261 to be held in PRESIDIO B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Beverly A Parsons,  InSites, bparsons@insites.org
Discussant(s):
Beverly A Parsons,  InSites, bparsons@insites.org
Enhancing Evaluation Quality and Use for Dynamic Organizations: Lessons Learned from a Developmental Evaluation of a Competency-based Medical Education Innovation
Presenter(s):
Cheryl Poth, University of Alberta, cpoth@ualberta.ca
Shelley Ross, University of Alberta, shelleyross@med.ualberta.ca
Rebecca Georgis, University of Alberta, georgis@ualberta.ca
Mike Donoff, University of Alberta, mike.donoff@ualberta.ca
Paul Humphries, University of Alberta, phumphries@ualberta.ca
Ivan Steiner, University of Alberta, ivan.steiner@ualberta.ca
Abstract: This paper reports the lessons learned from a two-year developmental evaluation of a medical education innovation, Competency Based Achievement System (CBAS). Developmental evaluation has been described as useful to support innovation implementation and organizational decision-making. However, few empirical studies have documented the influence of a developmental approach on evaluation quality and use during the pilot implementation of an innovation. The study used a series of four focus groups with the organizational team to capture the perspectives of the five medical educators and the evaluator during the evaluation. Findings from the analysis using diffusion theory revealed that each evaluation team member played a unique role that led to the identification of seven guiding CBAS principles and that building confidence in the validity and reliability of the findings was crucial for informing implementation decisions. Implications for developmental evaluation practice include effective strategies for building organizational relationships and mitigating emerging challenges.
Sustainable Quality Evaluation: Evaluating Continuous Quality Improvement Processes
Presenter(s):
Matthew Galen, Claremont Graduate University, matthew.galen@cgu.edu
Deborah Grodzicki, University of California, Los Angeles, dgrodzicki@gmail.com
Abstract: This paper will explore a theoretically-grounded, practical framework for implementing an evaluation of a Continuous Quality Improvement (CQI) program. CQI is an intra-organizational, customer-oriented approach to evaluating systems of production and service delivery. A successful evaluation of CQI programs requires a broad set of evaluation tools drawing from program evaluation theories as well as the conceptual thinking surrounding meta-evaluations. The paper will include resources for relevant CQI evaluation design and methodology, as well as techniques for involving program stakeholders in a collaborative effort to optimize their CQI process. A proposed evaluation of county-wide Emergency Medical Services CQI process will be presented to illustrate this process.
Building Evaluation Culture and Evaluation Quality in Brazil
Presenter(s):
Daniel Brandão, Fonte Institute, daniel@fonte.org.br
Martina Otero, Fonte Institute, martina@fonte.org.br
Abstract: The relevance of program evaluation is recognized by all players engaged in social initiatives in Brazil. Despite this recognition, the country has a fragile structure for promoting evaluation quality: knowledge about the evaluation field is limited, courses targeted to social leaders and evaluators are rare, and there are few concrete spaces for exchanging experiences among people interested in this area. This paper presents the efforts and strategies developed in a partnership between a private foundation and a civil society organization to strengthen evaluation culture and quality in the country. Project strategies, results, and challenges will be discussed, and the relationship between evaluation culture and evaluation quality will be addressed.

Session Title: Analytic and Measurement Approaches in Substance Abuse and Mental Health Evaluation
Multipaper Session 262 to be held in PRESIDIO C on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Roger Boothroyd,  University of South Florida, boothroyd@fmhi.usf.edu
Evaluating Intervention Effects On Marijuana Use Among Youth by Comparing Alternative Structural Equation Modeling: Decision-Trees Based On Formal Model Comparisons
Presenter(s):
Marlene Berg, Institute for Community Research, mberg_84@yahoo.com
Emil Coman, Institute for Community Research, comanus@netscape.net
Abstract: This paper presents a systematic method for testing estimated post-treatment differences between intervention and control groups by testing all assumptions regarding group differences in actual data from a quasi-experimental design through nested SEM model comparisons, advancing from a baseline model in which all parameters differ between groups to nested models with progressively more parameters constrained to be equal. 1. We will use AMOS to show how a small, simple two-group model can be specified, along with alternative settings. 2. We will show how a nested model comparison can be conducted by adding a constraint to a baseline model. 3. We will illustrate the decision-tree paths and their meaning (testing for significant differences). 4. We will interpret the meaning of various model comparisons and their impact on the final evaluative judgment. The decision-tree SEM comparisons indicate that some tests of post-intervention equality constraints on means, under different assumptions about the baseline model, show that the equality of means does not hold.
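The nested-model logic described above can be sketched as a chi-square difference test between a baseline model and a model with equality constraints added. The fit statistics would come from SEM software such as AMOS; the numbers below are hypothetical placeholders, not results from the study.

    from scipy.stats import chi2

    def chi_square_difference(chi2_free, df_free, chi2_constrained, df_constrained,
                              alpha=0.05):
        """Return (delta_chi2, delta_df, p_value, reject_equality)."""
        delta_chi2 = chi2_constrained - chi2_free   # constraints can only worsen fit
        delta_df = df_constrained - df_free
        p_value = chi2.sf(delta_chi2, delta_df)
        return delta_chi2, delta_df, p_value, p_value < alpha

    # Hypothetical fit statistics: constraining post-treatment means to be equal
    # across groups worsens fit significantly, so equality of means is rejected.
    print(chi_square_difference(chi2_free=85.4, df_free=40,
                                chi2_constrained=95.8, df_constrained=41))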
Finding Meaning in Numbers: How Well Do Our Measures Capture What We Evaluate?
Presenter(s):
Ann Doucette, George Washington University, doucette@gwu.edu
Abstract: The research literature supports therapeutic alliance and motivation to change as key determinants of favorable treatment outcomes for individuals receiving care for mental health and substance abuse disorders. In response, models of therapeutic alliance and of stages of and willingness to make changes within the therapeutic setting have been generated to reflect how these constructs function and relate to mental health and substance abuse treatment outcomes. This paper focuses on the measurement of these constructs and investigates the implications of unexamined measurement properties in modeling behavioral health treatment effectiveness and consumer outcomes. Secondary analyses using existing data from Project MATCH (a multi-site clinical trial for alcohol treatment), the Depression Collaborative, and a cocaine treatment study question whether the assumptions about measurement dimensionality, response scale adequacy, and invariance across diverse samples (diagnostic, age, gender, race/ethnicity, etc.) hold, and whether findings accurately reflect treatment effectiveness.
Barriers to Treatment of the Chronic Homeless with Co-occurring Disorders
Presenter(s):
M David Miller, University of Florida, dmiller@coe.ufl.edu
Abstract: The STAR (Seeking Treatment and Recovery) program provides treatment for the chronically homeless with co-occurring disorders (a substance abuse problem and mental illness). The program provides a range of services through intensive case management, including treatment of mental illness and substance abuse problems as well as establishing housing and employment. In our context evaluation, we have found that considering why chronic populations have not taken advantage of treatment in the past is an important part of understanding the population and being able to better serve them. Consequently, we developed an instrument to measure the barriers to treatment (BTI). This paper presents the results of that instrument development, including reliability and validity evidence, and the use of the instrument (N=136). Clients tend not to receive treatment due to personal issues rather than program or societal issues.
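As a minimal illustration of the kind of reliability evidence such instrument development typically reports, the sketch below computes Cronbach's alpha for a simulated item-response matrix; it is not the BTI's actual data or analysis.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: 2-D array, rows = respondents, columns = scale items."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Simulated responses: 136 respondents, 10 items sharing one latent factor.
    rng = np.random.default_rng(2)
    true_score = rng.normal(0, 1, size=(136, 1))
    responses = true_score + rng.normal(0, 1, size=(136, 10))
    print(round(cronbach_alpha(responses), 2))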

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluation as a Management and Learning Tool for the Successful Development and Scaling of Innovative Program Models
Roundtable Presentation 263 to be held in BONHAM A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Helen Davis Picher, William Penn Foundation, hdpicher@williampennfoundation.org
Sandra Adams, William Penn Foundation, sadams@williampennfoundation.org
Abstract: Demonstration of outcomes, while a key ingredient in the development and scaling of promising social programs, is not enough to ensure success. Replication and scalability are critical components of a successful model. The William Penn Foundation will present a framework developed to assess which pilot demonstrations are primed for success and which may fail if mid-course corrections are not made. A discussion will consider the usefulness and application of the framework in a variety of sectors and how the framework can be operationalized for a program or cluster of programs targeting the same system change.
Roundtable Rotation II: Evaluating Enterprising Nonprofits: The Social Return on Investment
Roundtable Presentation 263 to be held in BONHAM A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Goutham Menon, University of Texas, San Antonio, goutham.menon@utsa.edu
Maureen Rubin, University of Texas, San Antonio, maureen.rubin@utsa.edu
Abstract: Today, running a non-profit has become more complicated than it used to be. With funding cuts, rising demands for performance measurement from foundations, and corporations asking for better partnerships to meet their responsibilities, some non-profits are taking the prospect of becoming social enterprises more seriously. Leaders of these organizations take their roles as social entrepreneurs in their stride and provide a vision and direction for their organization. Social entrepreneurs work to make sure that their organization is fiscally strong while also ensuring that its social mission is left intact. One way to keep track of this and prevent mission drift is to evaluate the social return on investment (SROI) for any venture. This paper will provide an overview of SROI and how it can be used in organizations of any size. It will highlight the role of SROI in evaluating the functioning of non-profits through examples.
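A basic SROI ratio compares the discounted value of projected social benefits with the investment made. The sketch below shows that arithmetic with assumed figures; it is not any particular organization's valuation method.

    # Minimal sketch of a basic social return on investment (SROI) ratio:
    # present value of monetized social benefits divided by the investment.
    # All figures and the discount rate are hypothetical.
    def sroi_ratio(annual_benefits, investment, discount_rate=0.05):
        """annual_benefits: monetized social value created in each future year."""
        present_value = sum(
            benefit / (1 + discount_rate) ** year
            for year, benefit in enumerate(annual_benefits, start=1)
        )
        return present_value / investment

    # A $100,000 venture projected to create $40,000 of social value per year
    # for four years yields an SROI of roughly 1.4 : 1.
    print(round(sroi_ratio([40_000] * 4, 100_000), 2))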

Session Title: Using Evaluation to Improve the Quality of the Initial Implementation of a Statewide Community and State Level Policy and Systems Change Initiative
Panel Session 264 to be held in BONHAM B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Astrid Hendricks, California Endowment, ahendricks@calendow.org
Discussant(s):
Marie Colombo, Skillman Foundation, mcolombo@skillman.org
Abstract: The California Endowment’s (TCE) Building Healthy Communities (BHC) initiative has largely succeeded in integrating evaluation knowledge and skills into its initial implementation. This has helped improve the performance of TCE, the communities, and grantees while also providing valuable information about the factors needed to prepare communities for implementing policy and systems change strategies. We will provide lessons learned that will be valuable to the field. The Aspen Institute’s (2009) recent review of CCIs notes that we lack evidence that community-based work alone can trigger large-scale systems change. The TCE BHC seeks to develop synergies by addressing strategies at the Place level, and also at the State and statewide levels, through integration strategies based upon the use of goal- and outcome-oriented logic models.
Positioning the Foundation to Use Evaluation for Learning and Improved Performance
Astrid Hendricks, California Endowment, ahendricks@calendow.org
Mona K Jwahar, California Endowment, mjhawar@calendow.org
Lori M Nascimento, California Endowment, lnascimento@calendow.org
Foundations are increasingly investing in policy and systems change efforts at the local, state, and federal levels. Community change initiatives are a place-based strategy for changing policies, systems, and community environments. These approaches to grantmaking lead to changes in the roles of the Foundation and its staff. The California Endowment has focused almost all of its resources on its Building Healthy Communities (BHC) strategic vision over the next 10 years. BHC’s strategy is to promote policy and systems changes in 14 communities, across the state, and at the state government level in order to strategically improve the health of all children. BHC is transforming the foundation and presents the opportunity to integrate the knowledge and methods of evaluation into the implementation of BHC. This paper describes the strategies, challenges, and opportunities to advance and integrate evaluation for the purposes of decision-making, learning, accountability, and performance among board, staff, and grantees.
The California Endowment’s Building Healthy Communities: Lessons From the Site-selection Process
Hanh Cao Yu, Social Policy Research Associates, hanh_cao_yu@spra.com
The California Endowment’s experience in planning its new strategic approach—and selecting target communities in particular—represents an important opportunity for generating a much clearer understanding of what goes into creating an effective place-based initiative. Among the thousands of distinct neighborhoods/communities in the state of California, TCE needed to develop a process and clear set of criteria for selecting the 14 that would be the sites of TCE’s work over the next decade and then carry out the research and decision making involved in arriving at the final list. By examining the site selection process—and the planning process that preceded it—with a critical and analytical eye, it is possible to identify the specific challenges that TCE faced, to understand how the foundation’s approach to strategic planning influenced the site-selection process, to discern exactly what worked and did not work, and to develop lessons that can be applied in the future. What we learn through such analysis not only informs TCE decision making as the organization moves forward with implementing Building Healthy Communities, it also points to a number of lessons for other philanthropic organizations engaged in place-based grantmaking.
Use of Logic Models to Structure a Cohesive Planning Approach and an Evaluable Cross-site Comprehensive Community Initiative
Jared Raynor, TCC Group, jraynor@tccgrp.com
One of the challenges of evaluating multi-site comprehensive community initiatives is establishing the common basis for comparison. While logic modeling is not a new concept for CCI projects, using it as a comprehensive basis to drive the planning process is relatively novel. This approach sets up a framework for evaluation, ex ante, as opposed to more traditional ex post approaches. This presentation will discuss how The California Endowment utilized logic modeling across all its sites to establish the common basis for aggregating across communities. The presenter will share how the logic modeling process works in the communities, what needed to be in place at the funder in order to secure buy-in, how TCE used "logic model coaches" to provide technical assistance, and some of the pitfalls associated with using this approach.
The Planning Phase Evaluation: Getting Ready for Implementation of the California Endowment Building Healthy Communities Comprehensive Community Initiative
Denise L Baer, Community Science, dbaer@communityscience.com
David M Chavis, Community Science, dchavis@communityscience.com
The Planning Phase Evaluation for the California Endowment (TCE) Building Healthy Communities is intended to better prepare both TCE and the 14 communities for implementation of policy and systems change strategies. TCE has used relationship building, capacity building, and evaluation as tools to improve foundation, community, and grantee performance. This paper reports on the evaluation process and lessons learned based upon a mixed methods approach that integrates qualitative in-person and telephone interviews and document reviews within a cross-case study approach. This approach is helpful for contextualizing and answering “how” and “why” questions grounded in the experience of the Community Collaboratives and Places. The paper will describe the foundation, community, and contextual factors that contributed to the readiness of communities to implement policy and systems change strategies. Examples will also be presented as to how the evaluation team’s information and relationships were used to improve foundation and community performance.

Session Title: Group and Cluster Randomized-Control Experimental Interventions in Educational Evaluation Studies
Multipaper Session 265 to be held in BONHAM C on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Anane Olatunji,  Fairfax County Public Schools, aolatunji@fcps.edu
Discussant(s):
Melissa Chapman,  University of Iowa, melissa-chapman@uiowa.edu
Lessons Learned from the Implementation of Randomized Trials in the Field of Education
Presenter(s):
Jessaca Spybrook, Western Michigan University, jessaca.spybrook@wmich.edu
Anne Cullen, Western Michigan University, anne.cullen@wmich.edu
Monica Lininger, Western Michigan University, monica.lininger@wmich.edu
Abstract: Group randomized trials as means for establishing “what works” in education have become quite common in the field. For example, between 2002 and 2006, the Institute of Education Sciences (IES), the research branch of the US Department of Education, funded 55 studies which involved a group randomized trial, or a study in which classrooms, schools, or entire districts were randomly assigned to either receive the treatment or control condition. The current study examines the challenges research teams face when implementing group randomized trials in the field. The study is based on interviews with 35 Principal Investigators of group randomized trials funded by IES. The primary challenges researchers faced included recruitment of the sample and attrition of students and teachers. This session will discuss the lessons learned from these studies and is geared toward evaluators interested in improving the planning and implementation of group randomized trials.
Effectiveness of Selected Supplemental Reading Comprehension Interventions: Impacts on Two Cohorts of Fifth-Grade Students
Presenter(s):
Susanne James-Burdumy, Mathematica Policy Research, sjames-burdumy@mathematica-mpr.com
John Deke, Mathematica Policy Research, jdeke@mathematica-mpr.com
Julieta Lugo-Gil, Mathematica Policy Research, jlugo-gil@mathematica-mpr.com
Abstract: The National Evaluation of Reading Comprehension Interventions is testing the effectiveness of 4 reading comprehension interventions in 10 school districts and 89 schools across the United States. The study is based on a rigorous experimental design that involved randomly assigning schools to one of four interventions or to a control group. We administered reading tests to fifth-grade students at baseline and follow up, collected school records data, conducted surveys of teachers, and observed classroom instruction. Over 6,000 students and 250 teachers were included in the study’s first cohort and over 4,000 students and 180 teachers were included in the study’s second cohort. The presentation will focus on the study’s second year of findings, after the reading comprehension interventions had been implemented for one school year with both cohorts of students. For the first cohort, we will also present findings one year after the end of the intervention implementation.
Rigor Is Only Half the Story: Design Decisions and Context Influences in a Cluster-Randomized Evaluation Study
Presenter(s):
Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
Abstract: This presentation will describe the design and methodology of a two-year cluster-randomized effectiveness evaluation of a teacher professional development program in classroom assessment, and then the effects (intended and otherwise) of both the design decisions and the implementation context on the study. It will include discussions of: considerations for doing experimental research in school settings; consequences of random assignment of intact groups; challenges with multiyear implementations and experimental evaluation studies of interventions; strategies for integrating student and teacher outcomes; and issues with assessing implementation fidelity in an effectiveness evaluation. The presentation will detail the rigorous design, but also highlight the elements inherent in the school environment (context) that arguably had as much effect on the study outcomes.
Comparative Advantage in Teaching: A Randomized Experiment
Presenter(s):
Steven Glazerman, Mathematica Policy Research, sglazerman@mathematica-mpr.com
Jeffrey Max, Mathematica Policy Research, jmax@mathematica-mpr.com
Ali Protik, Mathematica Policy Research, aprotik@mathematica-mpr.com
Abstract: In this paper, we examine whether high performing teachers from higher achieving schools can transfer their skills to more challenging settings when they are placed in low achieving schools. In ten diverse school districts throughout the country, we identified the top 20 percent of elementary, middle school math, and English teachers and offered them $20,000 to transfer to a pool of low performing schools with teaching vacancies in the targeted grades and subjects. We randomly assigned half of these schools to a treatment group eligible for hiring one of the top-tier teachers and the other half to a control group that would fill their vacancies the way they normally would. We focus on the direct impact of high performing teachers on their own students in their new settings relative to the new hires in the control group, as well as indirect impacts on their peers to measure spillover and within-school distribution effects.

Session Title: Integrating High Quality Evaluation Into a National Integrated Services in Schools Initiative
Panel Session 266 to be held in BONHAM D on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Keith McNeil, University of Texas, El Paso, kamcneil2@utep.edu
Abstract: Elev8 is a national initiative funded by Atlantic Philanthropies that provides wrap-around services in selected middle schools to students and their families. The one national quantitative evaluator and four local qualitative evaluators will participate in the panel and will address three topics that surfaced. The first topic is the need for relationship building. Relationships need to be built not only with school staff, but also with a large number of project staff from a large number of organizations that are providing services to students and their families. The second topic is the tension between maximizing integration and maximizing the effects of the program. The third topic is the pressure for outcomes to support and sustain the program. The evaluation was designed without any kind of comparison group, so assessments of the program’s effectiveness have to rely on the quantitative collection of participation rates and the qualitative collection of data.
Overview of the Elev8 Model and Evaluation Plan
Jacqueline Williams Kaye, Atlantic Philanthropies, j.williamskaye@atlanticphilanthropies.org
A brief overview of the Elev8 model will be accompanied by a brief discussion of the two major aspects of the model that are seldom found in other wrap-around models. These two aspects are Integration and Sustainability. With respect to Integration, the interest of Atlantic Philanthropies was to have the various providers work together so that their efforts with the students and families would result in more than the efforts of each individual provider. (The whole is more than the sum of the parts.) Sustainability is the purposeful effort to plan for continued services after AP funding ends at the close of four years. Funding was earmarked for this effort. Finally, an overview of the evaluation plan will be presented. Jacqueline Williams Kaye is at Atlantic Philanthropies and part of her responsibility is to oversee the evaluation of the multisite implementation of Elev8 in four sites across the U.S.
Crucial Role of Relationship Building in a Multilayered Evaluation
Maria Luisa Gonzalez, University of Texas, El Paso, mlgonzalez6@utep.edu
The Elev8 wrap-around services for middle school students and families include a health center on campus, extended learning programming, and support services for families. At many school sites there are several agencies providing extended learning programming, family supports, and health services. For instance, the health center may have a dentist funded by one agency, a family nurse practitioner funded by another, and a health educator funded by a third. The need for the evaluators to develop close relationships with each of these providers, as well as the school staff, is crucial. Examples will be provided of both successes and challenges of such relationship building. Maria Luisa Gonzalez was on faculty at New Mexico State University for 19 years and directed numerous training grants that prepared both Hispanic and Native American students for leadership roles. She has conducted numerous local program evaluations and has worked on the qualitative evaluation of Elev8-NM since 2007.
Relationship Between Maximizing Integration and Maximizing Effects of the Program
Lauren Rich, Chapin Hall, lrich@chapinhall.org
Elev8 funding required the various providers to integrate at the school site and with the school site. Some providers feel that the integration effort reduces the potential effectiveness of their programming, sometimes resulting in providers creating silos and not working together. Lauren Rich is a senior researcher at Chapin Hall. Her primary interest is in conducting research that will contribute to improving the life chances of children living in poverty, particularly through the realm of education. She conducts a longitudinal study of patterns of service utilization among low-income families and the relationship of service use to family functioning, child development, and school readiness. She recently completed a five-year project examining outcomes among disadvantaged children and youth attending residential schools. She has also conducted and published research on youth employment, teen childbearing, welfare reform, child support enforcement, the educational attainment of teen mothers, and the economic status of low-income, noncustodial fathers.
Making the Case for Qualitative, Formative Evaluation in a Quantitative, Outcomes-driven Environment
Stephen La France, LFA Group, steven@lfagroup.com
Today’s environment often demands proof of effectiveness, yet local Elev8 evaluators are charged with conducting a qualitative, formative evaluation. Local Elev8 evaluation teams have worked hard to demonstrate the usefulness of the approach and to make the case that qualitative data are capturing the “true story.” Elev8 Oakland’s evaluation Project Director will present on these issues given his training in qualitative methods. Lessons from the first year informed strategies for the second year, including restructuring the field team and designing an intricate process for communicating findings to the lead agency in real time. These changes, among others, have increased the relevance and value of the evaluation for program implementers in their goal of sharing promising practices across schools and addressing challenges as they arise. Evaluation quality also has increased with ongoing hypothesis-testing through multiple perspectives, and in making a careful distinction between internal and external validity.

Session Title: Environmental Education Evaluation: Examining Citizen Collected Data, Mixed Method Designs, and Professional Development
Multipaper Session 267 to be held in BONHAM E on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Annelise Carleton-Hug,  Trillium Associates, annelise@trilliumassociates.com
A National Study of the Professional Development Needs of Formal, Informal, and Nonformal Environmental Educators
Presenter(s):
M Lynette Fleming, Research, Evaluation & Development Services, fleming@cox.net
Abstract: What professional development is needed by environmental educators who work in formal, informal and nonformal learning settings throughout the United States? Find out about the methods employed in this needs assessment and what the study revealed about priorities, gaps, and work needed to advance environmental literacy over the next five years. This session will summarize the steps involved in conducting the first national needs assessment of environmental educators’ professional development priorities; focus on the different professional development delivery preferences, needs, and priorities identified by environmental educators teaching in formal, informal, and nonformal settings; and discuss the implications of the results for professional development in EE and future work for the field in general and for state and federal natural resources agencies.
A Grounded Theory Exploration of Science Identity in Informal Contexts
Presenter(s):
Tina Phillips, Cornell University, cbp6@cornell.edu
Elizabeth Danter, Institute for Learning Innovation, danter@ilinet.org
John Fraser, Institute for Learning Innovation, fraser@ilinet.org
Richard Bonney, Cornell University, reb5@cornell.edu
Abstract: Citizen science was originally conceived as an opportunity to create new knowledge through a distributed community of interested “citizens” who would generate reliable data for study by scientists. Quickly adopted by the Informal Science Education community as a tool for educating the general public and increasing science literacy, many projects have been framed in terms of learning outcomes that measure collective change in knowledge, skills, attitudes, and behaviors of the population of participants (Bonney et al. 2009). We will present results from a qualitative analysis of an online citizen science “data coding” project that explored the often-overlooked dimensions of science identity, altruism, and motivations for prolonged engagement in an informal science activity. The findings demonstrate that continued participation reinforces pre-existing science identities, helps participants feel part of a community within their scope of justice, and provides them a forum to directly contribute to the advancement of science knowledge.
Walking in the Footsteps of Aldo Leopold: Evaluation of an Environmental Education Initiative Using a Mixed Method Design
Presenter(s):
Nancy Carrillo, Albuquerque Public Schools, carrillo_n@aps.edu
Abstract: The “Walking in the Footsteps of Aldo Leopold” Centennial was a weekend-long conference incorporating sessions about Leopold’s contributions and current environmental conditions as well as activities designed for enjoying and learning about Arizona’s White Mountains, Leopold’s stomping grounds early in his career. The conference brought together scholars, activists, and nature enthusiasts. Though a short intervention, the conference organizers hoped to foster better environmental political action, both by individuals and by environmental groups, by increasing knowledge, fostering networking, and enhancing participants’ appreciation for wilderness. We evaluated the success of the intervention using session observation forms to measure audience participation and reactions, a participant questionnaire, a follow-up participant on-line survey six months later, and telephone surveys with environmental group leaders eight months later. These qualitative and quantitative data sources, as well as the extended evaluation timeline, provided an excellent opportunity for a ‘double helix’ mixed methods design.

Session Title: International Approaches to Government Evaluation
Multipaper Session 268 to be held in Texas A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Susan Berkowitz,  Westat, susanberkowitz@westat.com
How has Twenty Years of Educational Evaluation Contributed to Lifting the Quality of Government Evaluation in New Zealand?
Presenter(s):
Carol Mutch, Education Review Office, carol.mutch@ero.govt.nz
Kathleen Atkins, Education Review Office, kathleen.atkins@ero.govt.nz
Abstract: The Education Review Office (ERO) in New Zealand was established just over 20 years ago in October 1989. It was one of the outcomes of the education reforms of the 1980s. Although similar reforms were considered in many other countries at that time, New Zealand is seen as having undertaken the most comprehensive set of reforms. Over the last 20 years successive governments have devolved educational governance to individual school boards, decentralised curriculum decision making to schools themselves, and put a greater emphasis on school self-evaluation. One function the government did retain was the responsibility for external evaluation of the quality of education provided by each school. This paper traces the history of the Education Review Office and explores how the notion of quality evaluation evolved over this time and the influence that ERO was to have on wider developments in government evaluation.
Recommendations That Catch the Eye, Stimulate the Grey Cells and Generate Change: What Makes a Good Evaluation Recommendation? Lessons from the United Kingdom’s (UK) Department for International Development (DFID)
Presenter(s):
Kerstin Hinds, Department for International Development, k-hinds@dfid.gov.uk
Abstract: The UK Government’s agency with a mandate for reducing global poverty and responsible for spending a budget of almost £8 billion per year on international development (DFID) has been taking strenuous steps to improve the quality of evaluations and ensure that evaluation findings are taken forward within the organisation. In implementing a system for tracking recommendations, it has become clear that some recommendations better lend themselves to follow up than others. A review of recommendations – and their traction – was undertaken to understand the key features of ‘good’ recommendations and hence improve our guidance and practice. This paper considers issues of recommendation targeting, length, complexity, wording and number, and also discusses whether all key evaluation lessons can be framed into actionable recommendations and how else key findings from evaluations can be taken forward. Issues of institutional culture and context, which are also significant, are discussed.
Implementing Government of Canada Evaluation Policy Requirements: Using Risk to Determine Evaluation Approach and Level of Effort
Presenter(s):
Courtney Amo, National Research Council Canada, courtney.amo@nrc-cnrc.gc.ca
Shannon Townsend, National Research Council Canada, shannon.townsend@nrc-cnrc.gc.ca
Abstract: The 2009 Government of Canada Evaluation Policy is transitioning the evaluation function away from a risk-based approach to planning which evaluations to do, towards a risk-based approach to determining how each evaluation will be done. Within this context, evaluations are expected to assess organization-specific risk criteria in order to calibrate the approach and level of effort to be put towards each study. This calibration is aimed at ensuring that departments and agencies are able to plan for and meet the requirement for full evaluation coverage of all programs over a five-year cycle. This paper presents the five-step approach developed by the National Research Council of Canada (NRC) to apply risk in determining a study’s approach and level of effort, while ensuring that minimum standards for evaluation are met in the context of “low-risk” programs. The use of the approach in the context of three evaluation studies is also presented.
Government, Implementation And Evaluation: The Viability and Evaluability of National Policy Programs
Presenter(s):
Anna Petersén, Orebro University, anna.petersen@oru.se
Lars Oscarsson, Orebro University, anna.petersen@oru.se
Christian Kullberg, Orebro University, christian.kullberg@oru.se
Ove Karlsson Vestman, Malardalen University, ove.k.vestman@dh.se
Abstract: In many countries, the government uses state subsidies to encourage local authorities to improve, e.g., social welfare services. But evaluations of these initiatives often show small effects compared to the politicians’ goals. In the paper we present results from a Swedish study including eight larger national programs aimed at promoting local authorities’ social services. The aim is to analyze and discuss whether success or failure to achieve the national goals of the programs can be related to unrealistic political ambitions, to the implementation process, to limitations or deficiencies in the evaluations, or to all three. The first two issues are analysed within a political science model, and in the latter case data availability and evaluation designs are the focus. In the light of the results, program theoretical, ethical and qualitative issues in evaluation are discussed.
Whose Fault is It? A Federal Government's Effort to Improve Evaluation Quality
Presenter(s):
Laura Tagle, Italy's Ministry for Economic Development, laura.tagle@tesoro.it
Massimiliano Pacifico, Evaluation Unit of Region Lazio, massimiliano.pacifico@gmail.com
Abstract: In a recent blog post(1), Davidson wonders about clients’ responsibility for bad evaluation and her role as an evaluator. We take on the same issue from a government’s standpoint. The quality of evaluations critically depends on clients’ engagement and investment in evaluation. From this starting point, we discuss which choices influence the quality of evaluations--evaluation policy, what is evaluated, which evaluation questions are asked, and, mainly, the style and mode of evaluation management. We analyze ways to induce government engagement in evaluation. Available options include formal and on-the-job training, institution building, and the setting up of incentive systems. The paper is based on the experience of the Italian National Evaluation System, a coalition of public Evaluation Units which collectively provides guidance and services to public authorities responsible for planning and implementing regional development policies--and for evaluating them. (1) http://genuineevaluation.com/whos-responsible-for-un-genuine-evaluation/

Session Title: Circular Dialogue and Other Dialectical Methods of Inquiry
Skill-Building Workshop 269 to be held in Texas B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Richard Hummelbrunner, OEAR Regional Development Consultants, hummelbrunner@oear.at
Abstract: Circular Dialogues are systemic forms of communication among people with different perspectives. They use the perspectives and views of participants as a resource e.g. for appraising/validating different experiences or identifying joint solutions. After introducing the principles and rules to be followed in a Circular Dialogue, participants will be invited to break into sub-groups, agree on an issue per group and identify three relevant perspectives. Then two rounds of dialogue sessions will be run in parallel to provide participants with hands-on experience, which will be briefly reflected at the end. Circular Dialogue is an example of a dialectical method, which uses opposing viewpoints to gain meaning. In the final session, other dialectical methods of inquiry will briefly be outlined, namely: Convergent Interviewing, Contradiction analysis and Option One-And-A-Half. The session will end with a final round on the utility of dialectical methods in evaluation and by providing sources of further information.

Session Title: Better Evaluation - A Toolbox of Evaluation Methods and Applications That Supports Quality and Methodological Diversity
Demonstration Session 270 to be held in Texas C on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the
Presenter(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Suman Sureshbabu, Rockefeller Foundation, ssureshbabu@rockfound.org
Abstract: Evaluation quality depends on having both skilled evaluators and savvy evaluation commissioners. For both groups, ongoing learning is needed about how to select, adapt, and implement evaluation methods to suit a range of situations. This session will demonstrate a new on-line resource, “BetterEvaluation,” that supports evaluators and evaluation commissioners to make better choices of evaluation approaches and methods, to implement these methods better, and to contribute to learning from practice by sharing their experiences. It is built explicitly on Web 2.0 principles of interactive information sharing, user-centered design (including the use of folksonomies, or user-developed classifications), dynamic content, and scalability. These align with and reinforce the project’s emphasis on and commitment to methodological diversity, adaptation, and innovation to suit diverse evaluation requirements. The session will present the main features of the site and its main purposes, and discuss challenges such as avoiding paradigmatic schisms and ensuring the quality, accessibility, and utility of material.

Session Title: Analysis and Evaluation of Research Portfolios Using Quantitative Science Metrics: Theory
Panel Session 271 to be held in Texas D on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Israel Lederhendler, National Institutes of Health, lederhei@od.nih.gov
Discussant(s):
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Abstract: This panel addresses the question of the theoretical gaps and opportunities to enhance the understanding of scientific research using portfolios as a unit of analysis. Increasing attention is being given to analyzing, managing, and evaluating scientific investment using a quantitative portfolio approach. Many of the analyses seem to lack theoretical reasons for using particular methods to answer particular questions. The presentations are intended to raise the issue of how portfolio analysis, research evaluation, and scientometrics are complementary.
How Can Portfolio Analysis Assist Government Research Agencies to Make Wise Research Investments?
Wagner Robin, National Institutes of Health, wagnerr2@od.nih.gov
Matthew Eblen, National Institutes of Health, eblenm@od.nih.gov
Over the last several decades, government agencies have been increasingly required to demonstrate the value and impact of the research projects they fund. Agencies have utilized a variety of research evaluation methods and tools to assess individual research projects and programs. However, it has proven challenging to assess the entire portfolio funded by research agencies due, in part, to the diversity of program areas (e.g., how can one balance investments in cancer versus HIV/AIDS?). In addition, it has been difficult to define, predict, and measure risks and opportunities associated with research projects. We will examine the theory of portfolio analysis, which has its origins in the field of finance, and evaluate the degree to which financial concepts can be adapted to portfolios of research projects. We will conclude with suggestions of what data, tools, and metrics need to be developed to advance the field of portfolio analysis for research.
Limits of Portfolio Analysis to Address Evaluation Questions
Brian Zuckerman, Science and Technology Policy Institute, bzuckerm@ida.org
As part of a feasibility study of an outcome evaluation of the National Cancer Institute-funded Childhood Cancer Survivor Study (CCSS), the study analyzed the portfolio of NIH-funded childhood cancer survivorship research. After completing descriptive analyses of the portfolio itself, the analysis compared the publication output from the CCSS and the balance of the portfolio to assess differences in publication output and journal quality between CCSS publications and the remainder of the portfolio. While these analyses had quantitative answers, the key finding from the analysis was that due to the nature of the portfolio, the comparisons – between a single award mechanism for conducting many research studies on a large cohort of patients and multiple individual studies conducting research on smaller cohorts – weren’t analytically meaningful. This paper will summarize the study results in the context of an analysis of the limits of the utility of portfolio-analytic methods to address evaluation questions.
Reinventing Portfolio Analysis at the National Institutes of Health: Explorations in the Structure and Evolution of Biomedical Research
Israel Lederhendler, National Institutes of Health, lederhei@od.nih.gov
Kirk Baker, National Institutes of Health, bakerk@od.nih.gov
Archna Bhandari, National Institutes of Health, bhandara@od.nih.gov
Carole Christian, National Institutes of Health, cchristi@od.nih.gov
Recent investigations initiated by the NIH Portfolio Analysis Group are defining questions that blur some lines with research evaluation. The questions being asked are fundamental to understanding science but have only been addressed with difficulty using traditional evaluation approaches. The combination of new methods including scientometrics, text mining, concept matching, and visualizations has stimulated interesting computationally-based studies: (1) emerging areas of biomedical interest by analysis of grant content over time, (2) identification of causal relationships through knowledge discovery of grants and publications, (3) portfolio comparisons between organizations exploring the recent evolution of topics within a research field such as breast cancer research, or (4) development of methods to promote knowledge gap analysis by looking at knowledge structures of opposing theories. These studies address fundamental issues about science; they also provide an evidence base for tackling important research evaluations in the policy, organizational, and programmatic domains.
Intersections Among Scientometrics, Science Portfolio Analysis, and Research Evaluation: Does Complex Systems Science Offer Workable Theory?
Caroline Wagner, Science-Metrix Corp, caroline.wagner@science-metrix.com
Research evaluation suffers in comparison with other types of evaluation (financial, medical) because of a lack of quantitative techniques based in theory. Data about research dynamics and outcomes have been difficult to acquire and even more difficult to evaluate. The emphasis on counting inputs, outputs, and outcomes, with little attention to the dynamics within knowledge creation, is limiting. The ability of analysts to understand the dynamics of research as it unfolds and evolves becomes more important. Without monitoring tools based on an understanding of scientific knowledge creation, a portfolio analysis cannot track progress and is relegated to tracking products. Recent developments in complex adaptive systems theory, being applied specifically within network analysis and pioneered by physicists and biologists, may profitably be explored to help fill the need for a theoretical basis for understanding quantitative tools. This paper addresses that possibility by discussing complex systems theory in a research context.

Session Title: Mainstreaming Evaluation in Diverse Organizational Contexts
Panel Session 272 to be held in Texas E on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Evaluation Use TIG, the Organizational Learning and Evaluation Capacity Building TIG, and the Research on Evaluation TIG
Chair(s):
Gloria Sweida-DeMania, Claremont Graduate University, gloria.sweida-demania@cgu.edu
Abstract: Jim Sanders defined mainstreaming evaluation as "…making evaluation a part of the work ethic, the culture, and the job responsibilities of stakeholders at all levels of the organization" (2009, p. 1). Organizational efforts toward mainstreaming evaluation represent the ultimate level of evaluation use. The presenters in this panel will describe methods of embedding evaluation in a variety of contexts. Sam Held, program evaluator for the Oak Ridge Institute for Science and Education, will discuss a backward design approach. Ellen Iverson and Randahl Kirkendall, evaluators at Carleton College’s Science Education Resource Center (SERC), will focus on their efforts to mainstream evaluation within their organization and to assist other organizations in doing the same. Rachel Muthoni, evaluator for the Pan African Bean Research Alliance (PABRA), will describe mainstreaming in the agricultural development context. Amy Gullickson will discuss her dissertation research, presenting on four organizations that exemplify evaluation mainstreaming. Reference: Sanders, J. R. (2009, May). Mainstreaming evaluation. Keynote address at the Fourteenth Annual Michigan Association of Evaluators Annual Conference, Lansing, MI.
Mainstreaming Evaluation: Practices and Innovations
Amy Gullickson, Western Michigan University, amy.m.gullickson@wmich.edu
Amy Gullickson conducted her dissertation research on National Science Foundation Advanced Technological Education Centers that are mainstreaming evaluation. Four centers were chosen for site visits based on survey responses, input from the NSF Program Officers, and initial interviews with the Principal Investigators. Interviews with staff and key stakeholders from each center focused on evaluation practices, innovations and resulting benefits. Ms. Gullickson also explored the organizational characteristics and personnel traits that made mainstreaming initially possible and ultimately sustainable. In this panel presentation, she will discuss the literature basis for mainstreaming evaluation, and present her research findings.
Mainstreaming Evaluation Into Faculty Professional Development Programs
Randahl Kirkendall, Carleton College, rkirkend@carleton.edu
Ellen R Iverson, Carleton College, eiverson@carleton.edu
Drawing from their experiences working at Carleton College’s Science Education Resource Center (SERC), evaluators Ellen Iverson and Randahl Kirkendall will demonstrate the utility of integrating evaluation components into an organization’s operations. This session will outline strategies that evaluators can use to work with website designers and workshop facilitators, specifically for incorporating evaluation activities into the workflow of professional development and education programs. SERC supports educators across a broad range of disciplines and at all educational levels through web resources and workshops, funded primarily through National Science Foundation grants. This comprehensive and dynamic mixed-methods approach to combining evaluation and operations has kept SERC current, efficient, and responsive.
Mainstreaming Evaluation in Agricultural Research and Development
Rachel Muthoni, International Center for Tropical Agriculture, r.muthoni@cgiar.org
In her work for the Pan African Bean Research Alliance (PABRA), Rachel Muthoni promotes the use of evaluation in agricultural research to improve the management of agricultural programs toward the goal of reducing poverty. PABRA implements a dynamic mix of monitoring and evaluation systems across institutional levels, following the impact pathway from target beneficiary to the network and alliance. This systematic project management system includes internal and external evaluation processes, multi-stakeholder platforms, and participatory evaluation approaches. The evaluation framework also provides an approach for working with partners and managing the complexity that characterizes the delivery of benefits needed to alleviate poverty. Muthoni’s presentation will showcase the complexity of evaluation utilization in this context, focus on results from the application of evaluation at the individual and project levels, and identify the major factors guiding mainstreaming in this agricultural development context.
Working Backwards: Using the Evaluation Report to Write Evaluation Questions
Sam Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
In his capacity as a program evaluator, Sam Held has been mainstreaming evaluation in the U.S. Department of Energy's Office of Workforce Development for Teachers and Scientists. He led an effort that reversed the usual order of working from goals to methods to an evaluation report: his evaluation team worked backwards from the evaluation report to better facilitate use and influence. Together with managers from multiple STEM Research Internship programs, outreach and recruitment programs, and the National Science Bowl©, an outline of chapters was developed for reports and communication pieces. Each chapter addressed a common theme to be covered by each program. These themes were dubbed the Six Critical Indicators of Success (SCIoS). Mr. Held will describe SCIoS and the process of involving users in development of the framework, with special attention to facilitating evaluation use through this interaction. Limitations, implications, and suggestions for future research will be presented.

Session Title: Truth, Beauty, and Justice: Thirty Years Later
Multipaper Session 273 to be held in Texas F on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the AEA Conference Committee
Chair(s):
Timothy Cash,  University of Illinois at Urbana-Champaign, tjcash2@illinois.edu
Vitality and Disorder Thirty Years Later: Reflections Inspired by House (1980)
Presenter(s):
Melvin Mark, Pennsylvania State University, m5m@psu.edu
Abstract: In 1980, House (1980, p. 11) stated, “The current evaluation scene is marked by vitality and disorder.” This talk will reflect on some of today’s, and perhaps tomorrow’s, disorder. Reflecting on House’s trio of truth, beauty, and justice, some seemingly contradictory observations will be offered, with an attempt at reconciliation. For example, I will review an argument for (a) the potential benefit to social justice of having good answers about the effects of alternative means to achieve desired ends, while also (b) acknowledging the potentially limited reach of such answers in light of contextual limits, plus (c) the possibility that emphasizing the evaluation of means detracts attention from the values-based discussion of ends. Another topic to be addressed briefly is the complexity of thinking about justice and participation in evaluation when a trans-generational perspective is taken.

Session Title: The Role of Evaluation During Tough Fiscal Times: Sage Advice From Evaluation Leaders
Panel Session 274 to be held in CROCKETT A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Joseph Donaldson, University of Tennessee, jldonaldson@tennessee.edu
Discussant(s):
Nancy Franz, Virginia Tech, nfranz@vt.edu
Abstract: Fiscal challenges affect program evaluation. One evaluator scales back an evaluation design mid-stream to accommodate reduced budgets. A second evaluator becomes a strategic planner because stakeholders want to capitalize on the evaluator’s expertise to make the best use of public funds. A third evaluator is taxed to build individual capacity for evaluation within an organization. While the American economy drags through a recession of historic proportions, fiscal challenges are not unique to the country’s economy, nor to evaluation leaders. This panel will offer advice for evaluation practice, training, and utilization in the context of concurrent developments including downsized university budgets, the rise of the Internet, and increased accountability at all levels of government. Panel members will discuss the roles, responsibilities, and reactions of evaluators during tough fiscal times. They will discuss their varied experiences, with an emphasis on evaluation in Land Grant University settings, especially Cooperative Extension and Outreach.
Evaluator’s Role in Building Capacity and Demonstrating Public Value
Joseph Donaldson, University of Tennessee, jldonaldson@tennessee.edu
This panelist will discuss varied challenges and opportunities of evaluation practice during tough fiscal times. As a Cooperative Extension program evaluator, he is increasingly asked to facilitate strategic plans because stakeholders want to capitalize on the evaluator’s expertise to make the best use of public funds. In this case, budget constraints have created an opportunity to build organizational capacity for program theory and logic model utilization. The evaluator has also been asked to demonstrate public value for various programs, and he will share the evaluation protocols he has employed.
Evaluator’s Role in Organizational Transformation
Molly Engle, Oregon State University, molly.engle@oregonstate.edu
The role of the evaluator has been to help faculty strengthen their individual programming through evaluation. This panelist will share examples from the watershed education team and the efforts to meld individuals with focused programs into a team with integrated programs using systems theory. In addition, the role of evaluation in organizational transformation will be addressed.

Session Title: Evaluating Leadership Development in Organizations
Multipaper Session 275 to be held in CROCKETT B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Business and Industry TIG
Chair(s):
Eric Abdullateef,  Directed Study Services, eric.abdullateef@mac.com
Ray Haynes,  Indiana University, rkhaynes@indiana.edu
Assessing a High Performing Organization’s Leadership Selection and Development: The Use of the Leadership3 Instrument as an Evaluation Tool in Organizational Development
Presenter(s):
Darryl Jinkerson, Abilene Christian University, darryl.jinkerson@coba.acu.edu
Phil Vardiman, Abilene Christian University, phil.vardiman@coba.acu.edu
Abstract: Leadership3 is a 40-item forced-choice leadership development assessment that provides organizations and decision makers with key insights, including: (1) PERSPECTIVES - the instrument provides both individual and organizational insights into the culture and leadership development opportunities as perceived by both leadership and the rank and file; (2) POTENTIAL - the instrument identifies internal leadership talent and desirability beyond the typical practice of making leadership selections for development based only on top management’s perspective; and (3) PROFILES - the instrument provides a graphical presentation of the convergence and divergence of leadership development perspectives between leadership and workers. The current study presents a case study of the application of this unique instrument as an evaluation tool in assessing the effectiveness of an organizational transition toward becoming a high performing organization.
Evaluation and Performance of University Boards: A Framework for Analysis
Presenter(s):
Zita Unger, Evaluation Solutions Pty Ltd, zitau@evaluationsolutions.com
Abstract: Higher education represents a huge growth industry in the US and Australia, comprising public sector universities and community colleges, private nonprofit schools including elite research universities, and a rapidly growing private for-profit sector. Although evaluation of board performance is now used widely in both the private and public sectors, less attention has been given to its application in university contexts. This paper describes a comparative analysis of various criteria used to conduct board assessments against a framework developed in Australia to guide evaluation of a university council’s performance. In particular, it describes the components of the University Council Assessment Questionnaire and reflects on the use of evaluation to improve Council performance.
Leader Development and Leadership Development Programs: The Must Haves in Determining Program Value
Presenter(s):
Ray Haynes, Indiana University, rkhaynes@indiana.edu
Barbara Bichelmeyer, Indiana University, bic@indiana.edu
Abstract: This paper presentation explains the role of evaluation in establishing the worth, merit, and value of leadership development programs in organizational settings. It proposes that construct definitions can serve as guiding lights that specify program content and control program outcomes. Consequently, we focus on construct distinctions between leader development and leadership development programs. Further, we propose an integrative framework that incorporates leader development and leadership development. We then discuss how this integrative, construct-centered framework can serve as a guide for executing high-quality evaluations of leadership development programs by presenting a logic model that serves as a foundation for comprehensive leadership development in organizational settings.
Evaluation Quality as Authentic Alignment Between People and Goals
Presenter(s):
Eric Abdullateef, Directed Study Services, eric.abdullateef@mac.com
Abstract: This paper proffers a methodology designed specifically to help individual evaluators and evaluation service organizations to be authentic. Authenticity in this construct implies that when an evaluator’s personal goals are strategically aligned with an affiliated evaluation organization’s goals, an enabling condition for the production of quality goods and services is created. This system, if launched, will align human capital with business success. Herein we focus on change at the organizational level, although in practice our methodology stipulates a two-tiered approach to organizational change in which both the organization and its people simultaneously pursue behavior change and goal alignment. This paper presentation will walk readers through the methodology and demonstrate how the Washington Evaluation Association deployed it in its 2010 endeavor to assure member value for money and to improve the Association’s performance.

Session Title: National Evaluation Capacity Development
Panel Session 276 to be held in CROCKETT C on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Hallie Preskill, FSG Social Impact Advisors, hallie.preskill@fsg-impact.org
Discussant(s):
Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: Within the social policy reform debate occurring in several countries, much attention has been given to policy advising and formulation, as well as policy (and budget) decision making. However, the real challenge appears to be implementing policy reforms to “translate” policy statements into development results for vulnerable populations, including poor children and women. Strengthening national social systems to implement policies is therefore paramount. A strong national evaluation system is crucial to provide the essential information and analysis needed to ensure that such policies are being implemented in the most effective and efficient manner, to review policy implementation and design, and to detect bottlenecks and inform adjustments that enhance systemic capacities to deliver results. While more and more countries are designing and implementing national evaluation systems, the technical capacity to develop evaluation systems that meet international quality standards is often weak. Therefore, national strategies to strengthen evaluation capacities are needed. These strategies should be comprehensive and integrated, addressing both the technical and political sides, as well as the three levels of capacity development: the individual, the institutional, and the enabling environment.
Towards a Conceptual Framework for National Evaluation Capacity Development
Marco Segone, United Nations Children's Fund, msegone@unicef.org
While explaining how country-led M&E systems are instrumental in facilitating policy reform implementation, the presentation focuses on a conceptual framework for national evaluation capacity development. The presentation highlights how an evaluation capacity development strategy should address the enabling environment, the institutional framework, and the individual level, while taking into consideration both the demand and supply sides. The presentation draws on lessons learned and good practices from several stakeholders, including UN agencies, development banks, and national and regional professional evaluation organizations. It builds upon a draft paper on evaluation capacity development developed by the United Nations Evaluation Group.
Successes and Failures in Development of Citizens’ and Their Representatives’ Evaluative Capacities
Christina Bierring, United Nations Children's Fund, cbierring@unicef.org
Citizens’ demand for better social services is an important trigger of government efforts to improve monitoring and evaluation systems in order to better analyse and assess performance for development results. In a democratic context, governments are accountable to their citizens, who in turn express their concerns and demand their rights through representatives such as traditional leaders, elders, parliamentarians, or civil society organizations. International development organizations play an important role in encouraging and developing citizens’ capacity for evidence-based influencing to claim better service delivery performance. Drawing on examples from West Africa, this presentation will explore some successes and failures of international development organizations’ efforts to support the development of citizens’ and their representatives’ evaluative capacities. It will discuss how they are strengthening the ability of citizens and their representatives to access data, to critically evaluate the services they receive, and to encourage governments to provide high-quality, relevant services for a better quality of life. The presentation will highlight the opportunities and threats, within the institutional context and culture of West African countries, to using the critical, transparent evidence that is at the core of evaluative capacity.
Use of Training in Evaluation Capacity Building
Alexey Kuzmin, Process Consulting Company, alexey@processconsulting.ru
This presentation focuses on the use of training in evaluation capacity building (ECB) in organizations. Evaluation training is an important component of ECB. Good utilization-focused evaluation training contributes both to the development of evaluation competence and to its use in ongoing practice. To maximize the effectiveness of training in an ECB intervention, it is important to consider the following groups of factors that affect the use of evaluation training in building sustainable evaluation capacity in organizations: (1) factors that lead organizations to become involved in evaluation training; (2) factors related to the utilization focus of the training; (3) factors related to the training itself; (4) factors related to complementary learning that reinforces the use of training; and (5) factors related to the sustainability of the training outcomes. We will present a theoretical framework for an ECB intervention that includes five components identical to the groups of factors described above.

Session Title: National Aeronautics and Space Administration (NASA) Office of Education’s Portfolio Evaluation Approach: Focus on Questions That Provide High Value Answers
Multipaper Session 277 to be held in CROCKETT D on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Joyce Winterton,  National Aeronautics and Space Administration, joyce.l.winterton@nasa.gov
NASA’s Requirements for and Uses of Program Evaluation
Presenter(s):
Brian Yoder, National Aeronautics and Space Administration, brian.yoder@nasa.gov
Abstract: NASA’s evaluation requirements arise from its responsibility to support effective programs and its accountability for these programs. Audiences for evaluation findings on program implementation and impacts include: 1) Congress; 2) OMB; 3) NASA’s Office of Education; 4) program staff; and 5) local stakeholders who are involved in the programs and who use findings to improve them. In this presentation, NASA will discuss its evaluation needs in light of these audiences, and the process of working with its evaluation contractors to develop its overall evaluation plan and to design specific evaluation studies for particular programs or subsets of programs. NASA will discuss the advantages and disadvantages of certain approaches and will describe how the data and findings from the evaluations have been useful in, or have fallen short of, meeting the needs of the intended audiences.

Session Title: Evaluating Education, Health Education and Agriculture Education Around the Globe
Multipaper Session 278 to be held in REPUBLIC A on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Satisfaction of Employers for Agricultural Graduates: Perspective for the Future
Presenter(s):
Virginia Gravina, University of the Republic, virginia@fagro.edu.uy
Abstract: In order to respond to the needs of constituents who hire its graduates and to plan future curricula, the College of Agriculture in Uruguay established a research plan to garner their perceptions. Q methodology was used to explore these perceptions. The study was structured around five essential issues for achieving accreditation status: the current role of the University, the future role of the University, the current professional graduate, the potential future graduate, and agricultural development. Three different views emerged. The Specialized Integrative view emphasized the role of the professional as specialized, rigorously knowledgeable, innovative, and hard working. The Sustainable view made a strong case for sustainability, embracing ecology, sociology, and economy as its main axis. The Educational view identified a good education, strong scientific knowledge, critical thinking, study habits, and the ability to solve problems as the engine of any kind of development.
A Cross-Case Analysis of Different Stakeholders’ Perceptions About the Quality of an Evaluation Program: Colombia’s National Bilingual Program
Presenter(s):
Alexis Lopez, University of Los Andes, Colombia, allopez@uniandes.edu.co
Abstract: As part of the National Bilingual Program, the Colombian National Ministry of Education formulated a nation-wide project to strengthen the teaching of English as a foreign language in Colombia. One of the goals of this program is to evaluate all undergraduate programs that train prospective English teachers, highlighting each program’s strengths and weaknesses in order to improve the quality of teacher training programs in Colombia. This paper focuses on how different stakeholders view the quality of this evaluation program. An exploratory mixed-methods study was conducted to gain insights into how different stakeholders (administrators, teachers, students, and alumni) view the quality of this evaluation program. Preliminary findings suggest that stakeholders have different views on how to judge the quality of this evaluation program; although they focus on similar aspects, their expectations vary to some degree.
Balancing Global Focus With Local Perspectives: A Comparative Study of Innovative Teaching and Learning in Four Countries
Presenter(s):
Corinne Singleton, SRI International, corinne.singleton@sri.com
Savitha Moorthy, SRI International, savitha.moorthy@sri.com
Linda Shear, SRI International, linda.shear@sri.com
Abstract: This paper describes the Innovative Teaching and Learning (ITL) research project, a multiyear global research program that investigates the factors promoting innovative teaching practices and their impact on student learning across several country contexts. In this paper, we discuss two components of our methodology from the pilot year of the study, namely (a) a strategy for conducting a distributed evaluation that allows for consistency across countries as well as local adaptation of methods and (b) Learning Activities and Student Work (LASW), a unique method that uses authentic artifacts from the classroom to draw conclusions about teaching and learning of competencies relevant to life and work in the 21st Century. Through these strategies, ITL research contributes to the development of a concrete set of globally relevant definitions and locally-implemented methods that further our ability to identify, measure, and implement “innovative teaching and learning” around the world.
Use of the Task Analysis Methodology to Strengthen Education of Nurses, Midwives, and Physician Assistants in Liberia
Presenter(s):
Mary Drake, Jhpiego, mdrake@jhpiego.net
A Udaya Thomas, Jhpiego, uthomas@jhpiego.net
Marion Subah, Jhpiego, msubah@jhpiego.net
Abstract: Liberia has among the highest maternal and child mortality rates in the world (1). To help reduce these rates, Liberia prioritized a basic package of health services to be provided to all Liberians seeking care. Critical to implementing these services is the system for educating the providers who will deliver them. A task analysis was conducted among nurses, midwives, and physician assistants to determine which tasks in the basic package were being performed, which tasks providers had not been trained for, and where they learned to perform the tasks. The findings are essential for updating the curricula, core competencies, and job descriptions of these cadres to ensure a streamlined, competency-based education process that is linked to job readiness. This paper will: (1) describe the classic task analysis methodology and the key adaptations made; (2) discuss key findings; (3) present strengths of and opportunities for improving the methodology; and (4) make recommendations for strengthening the methodology and tools.

Session Title: Keep an Eye on the Basics: The Importance of Evaluating Public Health Program Infrastructure
Panel Session 279 to be held in REPUBLIC B on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Leslie Fierro, SciMetrika, let6@cdc.gov
Abstract: When presented with evaluation questions about how a public health program is improving population health, many public health practitioners involved in evaluative activities naturally gravitate toward designing an evaluation focused on one or more public health interventions. Despite the potentially important contribution of interventions to improving population health, many other activities are conducted as part of public health programs. High-quality evaluation portfolios of public health programs will include evaluations of the partnerships and surveillance efforts intended to provide a solid infrastructure that practitioners can draw upon to plan, implement, evaluate, and sustain interventions. Presenters will discuss collaborative efforts between the Centers for Disease Control and Prevention and representatives from state asthma programs to develop and disseminate materials helpful for designing and implementing evaluations focused on infrastructure. State asthma program evaluation experiences will be shared, with a special emphasis on how information was used to improve public health practice.
Evaluation in Support of High-Quality Asthma Partnerships
Carlyn Orians, Battelle Memorial Institute, orians@battelle.org
Shyanika Wijesinha Rose, Battelle Memorial Institute, rosesw@battelle.org
Linda Winges, Battelle Memorial Institute, winges@battelle.org
Sarah Gill, Cazador, sgill@cdc.gov
Robin Shrestha-Kuwahara, Centers for Disease Control and Prevention, rbk5@cdc.gov
There is a rich literature on the use of partnerships to pursue health goals. Asthma is the type of public health issue for which partnerships are believed to be advantageous, due to its high prevalence and the distribution of responsibility for its prevention and management across multiple individuals and organizations. Evaluation is a critical task for strengthening a partnership and helping it reach its objectives. To focus potential evaluation designs, we formed a CDC-State workgroup to frame the role of partnerships within State Asthma Programs funded by the Air Pollution & Respiratory Health Branch at CDC, develop evaluation questions, and identify potential indicators and resources. Each partnership concept was linked to one or more indicators, and each indicator was linked to one or more methods and tools. This presentation will describe the systematic framework and supporting tools designed to promote high-quality evaluations of state asthma program partnerships.
The Value and Utility of Evaluating Public Health Partnerships: An Example From Utah
Rebecca Giles, Utah Department of Health Asthma Program, rgiles@utah.gov
The Utah Department of Health Asthma Program contracted with the Center for Public Policy & Administration at The University of Utah to evaluate the effectiveness of the Utah Asthma Task Force as a partnership. The evaluation question is: How effective is the Utah Asthma Task Force as a partnership? Answering this involved an analysis of how the partnership is structured, who the members are, what they do, and how the partners work to achieve the goals of the Utah Asthma Program. This presentation reports the results of the evaluation. The findings suggest that, in general, the partnership is effective in working toward the goals of the state asthma plan. Some areas for improvement remain, and the proposed recommendations are reviewed to show progress toward filling those gaps.
Evaluation as a Program Improvement Tool: Evaluation of Asthma Surveillance Systems
Amanda Savage Brown, Cazador, abrown2@cdc.gov
Leslie Fierro, SciMetrika, let6@cdc.gov
Linda Winges, Battelle Memorial Institute, winges@battelle.org
Surveillance plays a central and important role in state asthma programs: it helps states and their partners design and refine program activities, understand the progression of asthma and its associated risk factors, and generate awareness among key stakeholders about the impact of asthma in their state and how to decrease its burden. Hence, states should evaluate whether asthma is being monitored effectively and efficiently within the state, and whether asthma surveillance activities are leading to intended outcomes. Current CDC guidelines for evaluating surveillance using the CDC Evaluation Framework are best suited to infectious diseases; modifications to this framework are often necessary when evaluating asthma surveillance systems. To assist states, CDC’s Air Pollution and Respiratory Health Branch convened a surveillance evaluation workgroup to generate resources and identify core surveillance indicators. Reference materials to be shared include a logic model, corresponding evaluation questions, and examples of surveillance evaluation tools used by states.
The Value and Utility of Evaluating Public Health Surveillance: Asthma Mortality Rates Among Seniors in Minnesota
Wendy Brunner, Minnesota Department of Health, wendy.brunner@state.mn.us
When we reported that the asthma mortality rate among seniors in Minnesota was twice the national average, our surveillance advisory group recommended that we conduct a mortality review to determine whether the trend was real, given the difficulty of diagnosing asthma in older persons and the known inaccuracies of death certificates. Over a year, we collected death certificates and medical records for individuals aged 55 or older whose deaths had been classified as asthma. An expert panel reviewed each case and determined that only 6% of the deaths were likely due to asthma. Further review of death certificates indicated that 40% of the misclassifications could be attributed to errors in death certificate completion. We now know that asthma mortality rates for seniors in Minnesota are overestimated using our current methods and can take this into account in future public health activities.

Session Title: A Checklist for Planning, Implementing and Evaluating Implementation Quality
Multipaper Session 280 to be held in REPUBLIC C on Thursday, Nov 11, 10:55 AM to 12:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Jason Katz,  University of South Carolina, katzj@email.sc.edu
Discussant(s):
Sandra Naoom,  National Implementation Research Network, sandra.naoom@unc.edu
The Getting to Outcomes (GTO) Implementation and Process Evaluation Tools: Conceptual Basis and Overview
Presenter(s):
Jason Katz, University of South Carolina, katzj@email.sc.edu
Victoria Chien, University of South Carolina, victoria.chien@gmail.com
Duncan Meyers, University of South Carolina, meyersd@mailbox.sc.edu
Jonathan Scaccia, University of South Carolina, scacciaj@email.sc.edu
Annie Wright, University of South Carolina, patriciaannwright@yahoo.com
Sheara Fernando, University of South Carolina, fernando@mailbox.sc.edu
Pamela Imm, University of South Carolina, pamimm@windstream.net
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Abstract: We will begin by providing a summary of the conceptual basis for our work: Getting To Outcomes (GTO), a comprehensive programming framework that integrates planning, implementation, and evaluation. Next, we will discuss the need for systematic implementation and process evaluation tools, which is part of a larger strategy of expanding the utility of GTO for practitioners. We will present and describe the “value-added” of two implementation tools: an implementation checklist and an implementation measurement template. Then, we will highlight how the implementation tools fit with GTO’s emphasis on integrating planning, implementation, and evaluation by describing three extensions of the tools: (1) prospective use for planning a program, (2) real-time use for program monitoring, and (3) retrospective use for evaluating the role of implementation quality for process evaluation and outcome evaluation purposes.
