2011

Session Title: Recent Developments in Theory-driven Evaluation: Understanding and Using the Integrative Validity Model
Multipaper Session 251 to be held in Avalon A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
John Gargani, Gargani + Company Inc, john@gcoinc.com
Applying The Integrative Validity Model and the Bottom-Up Approach in the Case of Phase II of the Noyce Scholarship Program
Presenter(s):
David Urias, Drexel University, dau25@drexel.edu
Sheila Vaidya, Drexel University, sheila.rao.vaidya@drexel.edu
Abstract: Using Drexel University's Phase II of the Noyce Scholarship Program as a case study, this paper reports on research designed to narrow the gap between intervention research and practice by illustrating the application of Chen's (2010) integrative validity model and bottom-up approach to validity. In the real world, stakeholders organize and implement an intervention program. Thus, they have real viability concerns. Viability alone does not guarantee an intervention's efficacy or effectiveness, but in real-world settings, viability is essential for an intervention's overall success. In other words, irrespective of an intervention's efficacy or effectiveness, unless the intervention is practical, suitable for implementation, and acceptable to stakeholders and implementers, it has little chance of survival in a community. Our research addresses the question: how does one design and implement viable, effective, and generalizable real-world programs?
Emerging Strategies for Revitalizing Basic Evaluation Concepts: Recent Developments in Theory-Driven Evaluation
Presenter(s):
Huey Chen, Centers for Disease Control and Prevention, hbc2@cdc.gov
Abstract: Recent developments in a new perspective that incorporates the integrative validity model and bottom-up approach from the theory-driven evaluation tradition (Chen, 2010; Chen and Garbe, 2011) provide evaluators with a realistic and useful way to address validity issues in outcome evaluation systematically. Due to its comprehensiveness and real-world orientation, this perspective has broad implications for advancing evaluation concepts and methods to better serve stakeholders' needs. This paper uses insights provided by this new perspective to examine problems or controversies surrounding basic evaluation concepts and to offer possible solutions. The discussion covers the following areas: the controversy of fidelity versus reinvention in process evaluation, problems with use of the traditional goal-attainment model to define evaluation scope, confusion surrounding the concept of external validity, and problems with neglecting stakeholder theory-based interventions. Based on the discussion, strategies implied by the new perspective for addressing these problems are proposed and discussed systematically.

Session Title: Data and Information Visualization Throughout the Evaluation Life Cycle for Participatory Evaluation and Evaluation Capacity Building
Demonstration Session 253 to be held in California A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Data Visualization and Reporting TIG
Presenter(s):
Myia Welsh, Innovation Network, mwelsh@innonet.org
Johanna Morariu, Innovation Network, jmorariu@innonet.org
Veena Pankaj, Innovation Network, vpankaj@innonet.org
Melisa March, Innovation Network, mmarch@innonet.org
Abstract: In this session, presenters will share approaches and examples of how to incorporate innovative data and information visualization techniques throughout each stage of the evaluation life cycle to support participatory evaluation and build evaluation capacity. In the planning and design phase, mind mapping can be used to promote brainstorming and idea generation. In the data collection stage, evaluators can use creative visuals to improve stakeholder understanding of and participation in data collection, and evaluators can adhere to good design principles to create effective data collection instruments. During the analysis and reflection stage of the evaluation life cycle, tools such as data placemats and media tracking increase stakeholder comprehension and involvement. For the fourth and final phase of the evaluation life cycle, action and improvement, presenters will provide examples of dashboards and other tools for effectively communicating findings and managing performance.

Session Title: There is no 'I' in Team: Understanding Roles and Dynamics in Metaevaluation Teams
Skill-Building Workshop 254 to be held in California B on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
SeriaShia Chatters, University of South Florida, schatter@mail.usf.edu
Eun Kyeng Baek, University of South Florida, ebaek@mail.usf.edu
Abstract: Effective teamwork can determine the overall effectiveness of a metaevaluation. In any group activity, team members assume roles. It is important for any metaevaluation team leader to be aware of team member roles and learn to use each role to increase the overall effectiveness of the metaevaluation process. Failing to recognize harmful team member roles may undermine the metaevaluation process, cause a team leader to lose current and future clients, miss important deadlines, and lose team members to conflict. The purpose of this proposal is to discuss the various roles team members assume during a metaevaluation and the various roles a team leader must assume to maintain control of the metaevaluation and ensure project completion. We will also discuss methods to build team cohesiveness, encourage buy-in, delegate effectively based on team member roles, and ultimately get the job done.

Session Title: Utilizing Item Analysis to Improve the Evaluation of Student Performance
Expert Lecture Session 255 to be held in California C on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Cristian Gugiu, Western Michigan University, crisgugiu@wmich.edu
Presenter(s):
Mihaiela Gugiu, Central Michigan University, gugiu1mr@cmich.edu
Abstract: One of the cornerstones of teaching is the evaluation of student performance. Traditionally, such evaluations are performed through the administration of exams, quizzes, research papers and group projects. Although faculty are accustomed to evaluating students, rarely do they evaluate the quality of the aforementioned methods used to assess student knowledge. The present study will illustrate how certain measurement theory techniques (i.e., item difficulty, index of discrimination, Cronbach's alpha, and point biserial correlation) can be utilized to investigate the reliability and validity of student performance and what their impact is on grade distribution. Additionally, a new method for performing item analysis is proposed. To demonstrate the applicability of measurement theory in estimating the reliability and validity of student performance, I draw from my experience in teaching introductory courses in political behavior.
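For readers less familiar with the statistics the abstract names, the sketch below is a minimal, hypothetical illustration (not drawn from the study or its data) of how item difficulty, the index of discrimination, the corrected point-biserial correlation, and Cronbach's alpha are conventionally computed from a 0/1-scored exam matrix.

```python
# Illustrative item analysis on a small, made-up 0/1-scored exam (assumption:
# rows = students, columns = items; 1 = correct, 0 = incorrect).
import numpy as np

scores = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1],
])
n_students, n_items = scores.shape
total = scores.sum(axis=1)

# Item difficulty: proportion of students answering each item correctly.
difficulty = scores.mean(axis=0)

# Corrected point-biserial: correlation of each item with the rest-score.
point_biserial = np.array([
    np.corrcoef(scores[:, j], total - scores[:, j])[0, 1] for j in range(n_items)
])

# Index of discrimination: upper-group minus lower-group difficulty (median split here).
order = np.argsort(total)
lower, upper = order[: n_students // 2], order[-(n_students // 2):]
discrimination = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
item_var = scores.var(axis=0, ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_var.sum() / total.var(ddof=1))

print(difficulty, point_biserial, discrimination, alpha)
```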

Session Title: Putting Values Back Into Evaluation: A Catalog of Methods for Identifying and Incorporating Diverse Values
Demonstration Session 256 to be held in Pacific A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Presidential Strand
Presenter(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Abstract: Although evaluation is intrinsically involved with values, there is often little guidance provided to evaluators or evaluation commissioners about how to identify and incorporate diverse values that should legitimately be addressed in an evaluation. These values can relate to what are seen to be desirable and undesirable standards of performance, outcomes/impacts, processes and distribution of costs and benefits (for example, whether it is better to choose an option which has the best average impact or the one which is most beneficial for the most disadvantaged). This demonstration will show a range of methods for identifying and clarifying values (including success ranking, benchmarking, dotmocracy, photovoice, critical reference group) and for incorporating diverse values into an overall evaluative judgment (including arithmetic weighting, qualitative weight and sum, cost utility, co-existive evaluation). The demonstration draws on material from BetterEvaluation - an international collaboration that generates and shares information about choosing and implementing appropriate evaluation methods.

Session Title: Valuing Case Study Methods in Evaluating the Implementation of Educational Programs
Multipaper Session 257 to be held in Pacific B on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Qualitative Methods TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Hannah Betesh, Social Policy Research Associates, hannah_betesh@spra.com
Abstract: Case study method has enormous potential to inform evaluations of school- and district-based educational programs specifically, as the institution of public schools in the United States represents a unique and diverse structure that compels careful consideration of contextual limitations and opportunities. This session will draw on implementation studies of two school-based initiatives to illustrate how the choice and execution of case study method can enhance the rigor of education research, particularly in a climate of increasing accountability and pressure to produce improved academic outcomes. In this era of high stakes assessments and outcomes-focused research and policy, it is important not to lose sight of the contributions that in-depth implementation research can provide. Case study research is an important approach to capturing the contextual factors specific to implementation of educational programs, and thereby enhancing the applicability of findings from outcome evaluations.
Resources, Context and Implementation Potential: Lessons Learned From Studying a Literacy Intervention in Five Urban School Districts
Hannah Betesh, Social Policy Research Associates, hannah_betesh@spra.com
In order to facilitate successful implementation of curricular interventions, especially in the complex ecology of school districts, many contextual factors need to be taken into account. This presentation will highlight findings from a case study investigation of an in-school reading intervention in five diverse school districts to describe the issues that affect program implementation, how they manifest, and how they are addressed in the different settings of the study. A major challenge in achieving success with in-school interventions is the difficulty of implementing consistently and with fidelity across diverse, often challenging contexts, and using case study method to evaluate the program helped the research team articulate the range of contexts and experiences. In this evaluation, a major finding concerned the position of "intermediaries" (key players in the implementation of literacy reforms) and the variation, across our sample, in how this position is understood and articulated and in its effectiveness at supporting implementation.
Case Studies of the Implementation of Small Learning Communities in Three Urban High Schools
Nada Rayyes, Berkeley Policy Associates, nada@bpacal.com
Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org
In this study, we evaluated the implementation of small learning communities (SLCs) in a large urban K-12 school district. Using case study methods, we examined three unique high schools implementing their distinct SLC models guided by a district plan. We investigated conditions and structures that supported, and presented challenges to, successful implementation. While our sample schools represented various levels of success, our data revealed some common findings across sites: successful implementation of SLCs requires strong school leadership willing to grant sufficient autonomy to SLCs, meaningful teacher collaboration, and professional development aligned with SLC goals. Other key findings included an increase in personalization among all schools implementing SLCs, but challenges with achieving true equity and parent engagement. The case study approach provided an in-depth understanding of the contextual factors that influence a large-scale educational reform effort, and provided lessons to schools and districts attempting similar interventions.

Session Title: Understanding and Evaluating Complex Programs and Policies: A Focus on New Approaches and Innovative Methods
Expert Lecture Session 258 to be held in Pacific C on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Melvin Mark, Pennsylvania State University, m5m@psu.edu
Presenter(s):
Sanjeev Sridharan, University of Toronto, sridharans@smh.ca
Abstract: As a means of addressing social and health problems, complex policies and programs have continued to proliferate. This expert panel will review methods and designs that are increasingly used to evaluate COMPLEX interventions. The focus will be on quantitative methods and designs, but also on how quantitative methods can be integrated with qualitative methods. The expert panel will focus on the following: (1) types of methods that might be useful in describing and understanding the nature of complexity; (2) how such methods help in understanding the CONTEXT of impact, with particular attention to the dynamic and spatial nature of such contexts; (3) how best to integrate quantitative and qualitative methods to address problems of complexity; and (4) how such methods can be useful in understanding impacts. This discussion will take place within the context of a framework that can help intervention planners move from interventions that are very complex initially to a few well-chosen components; a "learning from principled discovery" approach (Mark et al., 2000) will be discussed as a means of addressing problems of complexity.

Session Title: Evaluating Federal and Philanthropic Research and Development Programs and Their Impacts on Public Understanding
Panel Session 259 to be held in Pacific D on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Sarah McDonald, University of Chicago, mcdonald-sarah@norc.uchicago.edu
Discussant(s):
Michael Feuer, George Washington University, mjfeuer@gwu.edu
Abstract: Philanthropic organizations and governmental agencies play a critical role in supporting research and development activities designed to generate evidence to inform public policy, decision-making, and understanding of social and scientific issues. Frequently this occurs through sustained programmatic initiatives that fund coherent yet diverse lines of inquiry. Portfolios of funded projects often span the R&D cycle; foster innovation and generate evidence on intervention impacts; target multiple audiences; and are themselves dynamic as new projects are funded over time. This panel explores the challenges for program evaluation in these settings. Panelists will describe new program and portfolio evaluative activities within a federal agency, and present empirical results from a recently-completed evaluation of a major philanthropic initiative. The session will foster discussion of methodological innovations that would further enhance evaluations of R&D programs in both federal and non-profit organizations.
Establishing Impacts of Philanthropic Grant-making Programs: Insights from the Evaluation of the Alfred P. Sloan Foundation's Workplace, Work Force and Working Families Program
Kathleen Christensen, Alfred P Sloan Foundation, christensen@sloan.org
The Alfred P. Sloan Foundation's Workplace, Work Force and Working Families program was established in 1994 to spur the development of the field of work-family scholarship. The resulting research revealed a workplace/workforce mismatch, a social and economic issue with profound impacts, which the Foundation addressed in part by providing financial support for 324 separate projects. Together these projects were designed to build an academic research base, advance changes in policy or practice, and disseminate research findings to the general public. In 2010 the Sloan Foundation funded a team of researchers at NORC at the University of Chicago to assess the impact and influence of the program's intellectual and applied contributions. This presentation reports results from the evaluation, highlighting important methodological issues in efforts to document the value added by such programmatic initiatives.
Forging New Directions in Science Technology Engineering and Mathematics (STEM) Education Evaluation: The Case of the National Science Foundation
Janice Earle, National Science Foundation, jearle@nsf.gov
NSF's Directorate for Education and Human Resources (EHR) is intensifying its focus on program, portfolio, and project evaluation across the Directorate. A cross-directorate team has been created, chaired by this paper's author. Initial activities underway include: (1) aligning EHR priorities with new requirements from the Office of Management and Budget, including developing metrics for each program and exploring new areas such as how to develop measures for R&D activities; (2) creating an R&D program to advance innovations in STEM education evaluation (PRIME); and (3) reshaping contractual activities so that new approaches, such as evaluating themes that cut across programs, are explored.

Roundtable: Into the Wild: A Collaborative Evaluation of a Zoo's Home School Program
Roundtable Presentation 260 to be held in Conference Room 1 on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Paromita De, University of South Florida, pde2@mail.usf.edu
Chunhua Cao, University of South Florida, chunhuacao@mail.usf.edu
Walter J Rosales-Mejía, University of South Florida, wrosales@mail.usf.edu
Vanessa Vernaza-Hernandez, University of South Florida, vanessav@mail.usf.edu
Abstract: Environmental education has gained growing interest, and as a result the number and variety of stakeholders are increasing. One group of stakeholders, Home School families, often faces the challenge of providing their students with access to knowledge beyond the family's teaching capabilities. For this reason, entities like zoos and museums have created programs for Home School students to learn about special topics. At the Lowry Park Zoo's Home School program in Tampa, Florida, students and families met monthly to discuss different animal topics. This collaborative evaluation looked at program benefits for students and parents. The involvement of teacher and parent stakeholders in addition to external evaluators provided unique perspectives for assessing program impact. Empowering the stakeholders by involving them in the evaluation's implementation allowed our client to be more receptive to stakeholders' needs and suggestions for future improvements.

Roundtable: The Value of Evaluation of Clinical Practice in the Knowledge Translation Process
Roundtable Presentation 261 to be held in Conference Room 12 on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Evangeline Danseco, Ontario Centre of Excellence for Child and Youth Mental Health, edanseco@cheo.on.ca
Abstract: Knowledge translation in health and education involves the active, ongoing use of credible evidence, usually based on research findings. In the field of mental health, effective treatments are moderated by the quality of therapeutic relationships and influenced by practitioner experience and intuition. Evaluation of mental health programs adds value by providing evidence on what works in real life and by enriching knowledge of what works in research settings. Evaluation of clinical practice also gives practitioners a voice equal to that of researchers in determining credible evidence. This session will discuss the value of evaluation in closing the loop in the knowledge translation process and the value of evaluation in 'practice-based evidence.' We will present some of our experiences in evaluation at the Ontario Centre of Excellence for Child and Youth Mental Health. Attendees will be encouraged to share their strategies for situating the value of evaluation in the knowledge translation process.

Session Title: Thinking Again About Evaluation Capacity-Building
Think Tank Session 262 to be held in Conference Room 13 on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Autumn Guin, North Carolina State University, autumn_guin@ncsu.edu
Discussant(s):
Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
Abstract: This Think Tank will orient participants to the content and use of the 4-H National Evaluation for Impact Self-Assessment (SA) (Arnold et al., 2008) used in capacity-building with community-based Extension professionals. Respondents tend to over-estimate skills at pre-test, then report lower skill levels at post-test following training and hands-on experience. Retrospective self-assessments generally result in lower estimates of initial skills and similar estimates (to pre/post assessment) of current skill. Qualitative comments from novice learners indicate unfamiliarity with SA concepts, a tendency to rate knowledge rather than skills, and a tendency to quickly lose skill competence without continued practice. Think Tank participants will explore 1) content validity of the SA; 2) potential refinements of the SA survey/resurvey procedure; 3) corroborative or alternative measures for evaluation capacity; 4) documentation of sustained vs. temporary skill mastery; and 5) links between individual and organizational capacity-building.

Session Title: Evaluating Basic Research in China: Two Very Different Models
Multipaper Session 263 to be held in Conference Room 14 on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Laurel Haak, Discovery Logic, laurel.haak@thomsonreuters.com
Estimating Returns to Scale of the Basic Research Institutes in the Chinese Academy of Sciences
Presenter(s):
Guoliang YANG, Chinese Academy of Sciences, glyang@casipm.ac.cn
Wenbin Liu, University of Kent, w.b.liu@kent.ac.uk
Xiaoxuan LI, Chinese Academy of Sciences, xiaoxuan@casipm.ac.cn
Abstract: The effectiveness and efficiency of scientific funding have become a focus of public concern in major developed and developing countries. In this paper, we investigate the returns to scale (RTS) of the basic research institutes of the Chinese Academy of Sciences (CAS) during the period of the Knowledge Innovation Project (KIP). In the theoretical section, a new quantitative method within the framework of data envelopment analysis (DEA) is proposed to estimate the RTS of these institutes more accurately; it introduces the standard economic concept of RTS into the DEA framework so that the accuracy of RTS estimation is enhanced. In the empirical section, the analysis is conducted along lateral and longitudinal dimensions to trace changes in the returns to scale of CAS basic research institutes over this period. Finally, several policy suggestions concerning resource allocation in CAS are proposed.
Evaluation of the National Science Foundation of China
Presenter(s):
Erik Arnold, Technopolis and University of Twente, erik.arnold@technopolis-group.com
Abstract: The National Science Foundation of China (NSFC) was established in 1986 using a Western institutional model as a way to develop basic research funding and discipline development in China. Supported by China's National Centre for Science and Technology Evaluation (NCSTE), an international panel is evaluating NSFC's role in developing the national research and innovation system and will report in June 2011. The report explores NSFC's role, developments in China's research performance, internationalisation, the funding instruments used, and the problems peculiar to organising basic research funding at a very large scale in a developing system. The paper will summarise the report and draw lessons for evaluation from studying a Western-style funder in a Chinese context.

Session Title: Tools and Methods for Evaluating the Efficiency of Aid Interventions
Expert Lecture Session 264 to be held in Avila A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Markus Palenberg, Institute for Development Strategy, markus@devstrat.org
Presenter(s):
Markus Palenberg, Institute for Development Strategy, markus@devstrat.org
Abstract: We present the principal results of a two-year research effort funded by the German Federal Ministry for Economic Cooperation and Development (BMZ) on Tools and Methods for Assessing the Efficiency of Aid Interventions. The session is divided into four parts: 1. The motivation for our research is described by documenting the gap between what is expected and what is delivered in terms of efficiency analysis. 2. Existing understandings, definitions, and misconceptions of the term "efficiency" are presented and put into context, ranging from simple transformation rates to more elaborate concepts in welfare economics and utility theory. 3. An overview of different methods for assessing efficiency is provided, highlighting their analytic power, their applicability, and their analysis requirements in terms of data, resources, and skills. 4. From the above, four general recommendations for how to close the gap between expectation and delivery of efficiency analysis are derived.

Session Title: Follow-up Discussion With Keynote Speaker Kim Barker
Think Tank Session 265 to be held in Avila B on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the AEA Conference Committee
Chair(s):
Melvin Hall, Northern Arizona University, melvin.hall@nau.edu
Presenter(s):
Kim Barker, ProPublica Inc, 
Abstract: Join keynote speaker Kim Barker for an informal opportunity for questions and answers regarding her work as a foreign correspondent working in settings with significant cultural differences.

Roundtable: Developing an Evaluation Community of Practice for Science, Technology, Engineering, and Mathematics (STEM) Education and Workforce Development
Roundtable Presentation 266 to be held in Balboa A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Alyssa Na'im, Education Development Center Inc, anaim@edc.org
Abstract: Since 2003, the National Science Foundation has funded the Innovative Technology Experiences for Students and Teachers (ITEST) program to address concerns about the growing demand for science, technology, engineering, and mathematics (STEM) professionals in the U.S. The ITEST program contributes to the STEM education and workforce pipeline by helping young people and teachers in formal and informal K-12 settings acquire the skills needed to succeed in a technology-rich society. The ITEST Learning Resource Center (LRC) helps to build a Community of Practice among project staff and evaluators by focusing on successful strategies and lessons learned. This session will look at the variety of ways that the ITEST LRC supports this Community of Practice, and will explore participants' interest in and capacity for bridging and developing formal networks to cultivate a national evaluation Community of Practice focused on STEM education and workforce development.

Session Title: A Participatory Approach to Analyzing Secondary Data and Sharing Lessons Learned With the Field: A Foundation's Efforts to Evaluate Organizational Effectiveness Grants in a Transparent and Engaging Way
Panel Session 267 to be held in Balboa C on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Jared Raynor, TCC Group, jraynor@tccgrp.com
Abstract: Foundations (and many nonprofits) are frequently sitting on large piles of data that can provide rich learning opportunities to advance practice but are under-utilized. But how, if at all, can you use these data that have generally not been systematically or uniformly collected? This session discusses successes and challenges from a Foundation's efforts to learn from its "goldmine of data" and disseminate learnings for the field using a participatory approach. The David and Lucile Packard Foundation launched a research project to analyze existing data from its Organizational Effectiveness (OE) Program. The session will discuss the multi-part process that Packard has used to glean insights from the data. One of the most intriguing methodological approaches was the use of social media and other venues to "workshop" findings emerging from analysis of the data as a way to gather community insight regarding interpretation and implications and as an incremental dissemination pattern.
Making Meaning Together from New Organizational Effectiveness Research: Lessons Learned from a Foundation's Perspective
Kathy Reich, David and Lucile Packard Foundation, kreich@packard.org
The Packard Foundation's Organizational Effectiveness Program has been working to strengthen grantees of the Foundation for some time. While the OE Program gathered individual grantee findings as it went along, it did not systematically look at the information. After consulting with numerous stakeholders, Packard decided to launch the OE Goldmine Research Project in April 2010. After the Data Center collected and organized data on 1,300 OE grants, the Packard Foundation made a grant to the TCC Group (a consulting firm that provides planning and evaluation services for foundations and nonprofits) in April 2011 to analyze the data in a way that engages grantee, consultant, and foundation stakeholders. Ms. Reich will discuss how the Foundation approached this project, what it ultimately learned, and what challenges it encountered and lessons it drew from the process.
Opportunities and Challenges of Using a Participatory Approach to Analyze Secondary Data and Disseminate Findings
PeiYao Chen, TCC Group, pchen@tccgrp.com
Looking at someone else's dataset with fresh eyes presents an array of challenges and opportunities. As the contracted evaluator charged with analyzing Packard's OE data, we had the opportunity to engage around this broad dataset. Ms. Chen will discuss the methodology for examining the dataset and the iterative approach of drawing small, incremental findings and using social media, webinars, and stakeholder convenings to gather insights about how to interpret the data, what the implications are, and what additional questions the findings raise. This approach was designed to engage a variety of important stakeholders such as Foundation staff, capacity-building consultants, and grantees. Ms. Chen will discuss lessons learned from this approach to "workshopping" findings through a broader community, both as an attempt to better understand the findings and as a way to disseminate findings in smaller, digestible chunks.

Session Title: Using Evaluation for Higher Education Program Intervention and Institutional Change
Multipaper Session 268 to be held in Capistrano A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Tamara Bertrand Jones, Florida State University, tbertrand@fsu.edu
An Evaluation of Pipeline Interventions for Minority Scholars
Presenter(s):
Jean Shin, American Sociological Association, shin@asanet.org
Roberta Spalter-Roth, American Sociological Association, spalter-roth@asanet.org
Olga Mayorova, American Sociological Association, mayorova@asanet.org
Abstract: For over a quarter century there has been significant concern about the small number of under-represented minorities in the science pipeline. One oft-proposed solution is to improve mentoring activities since mentoring is considered to be integral to increasing representation. Mentoring in this sense is usually thought of as a dyadic relationship. The purpose of this evaluation is to compare PhD alumni from two nationally-recognized funding programs, one for under-represented minorities and the other largely white, along with a randomly-selected control group. In the largely-white program, dyadic mentoring is the norm. In the under-represented minority program, mentoring through networks of scholars, teachers, and peers is the norm. The evaluation disentangles the effects of minority status and mentoring for the career trajectories, scientific productivity, and professional service for these groups. The evaluation's measures of effects include unobtrusive measures of academic employment, years to tenure, network analyses of co-authorship patterns, and service activities.
Learning from Student Voices: Engaging in an Empowering Needs Assessment to Motivate Higher Education Institutional Change
Presenter(s):
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Abstract: This paper offers the results of a needs assessment at one Pacific Northwestern university regarding international students' needs on campus. The evaluation actively sought the participation of the students as stakeholders and decision-makers in the evaluation process. The evaluative process was an internal, formative process driven by feminist principles of beginning with the lived experiences of the marginalized stakeholders (international students), who also served as the evaluators. These students' voices and input had not previously been sought by the institution during policy-making that affected services for international students. The collaborative approach elicited responsiveness within the institution, resulted in greater buy-in for the evaluation process, and generated more meaningful recommendations for the institution. The process left students feeling more empowered because they had more control over institutional adoption of recommendations, rather than merely being subjects and informants within the evaluation process. Lessons from, and challenges of, the evaluative process will also be discussed.

Session Title: Internal Evaluation Use
Multipaper Session 269 to be held in Capistrano B on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Internal Evaluation TIG
Chair(s):
Wendi Kannenberg, Southcentral Foundation, wkannenberg@scf.cc
Discussant(s):
Dale Hill, American Red Cross, hilldal@usa.redcross.org
When Government Funding is Not Enough-Utilizing Internal Evaluation for Non-Profit Program Development and Additional Funding
Presenter(s):
Suzanne Markoe Hayes, Volunteers Of America Los Angeles, smarkoehayes@voala.org
Abstract: Limited government funding impacts non-profit organizations' ability to effectively implement programs. Although debate surrounds external versus internal evaluation, internal evaluation provides benefits to non-profit organizations. Use-focused forms of evaluation (e.g., Alkin, 2010; Patton, 1997) help facilitate program improvement and are an optimal framework for guiding internal evaluation activities. Given that government funding for non-profit programs is limited and focused on evaluations required to fulfill contract obligations, formative internal evaluation information can be used to leverage additional support for program operations and responsive growth and development. A well-known federally supported college preparedness program will be used to illustrate Volunteers of America Los Angeles' (VOALA) progressive uses of both internal and external evaluation. VOALA used longitudinal data from surveys and focus groups, and engaged alumni and other stakeholders, to develop new program strategies, which will be piloted at five Upward Bound sites and reported on in this presentation.
Using an Evaluability Assessment for Internal Evaluation
Presenter(s):
Valerie Williams, University Corporation for Atmospheric Research, vwilliam@ucar.edu
Abstract: Evaluability assessment (EA) is a tool to assess a program's readiness for an impact evaluation. Based on a systematic assessment of the program's theory, goals, implementation, and data collection practices, EA can prevent programs from investing in an evaluation that may provide inaccurate or inconclusive results. Considered useful in the early stages of a program, EA has experienced a resurgence and can be used for diverse purposes, ranging from planning evaluations to catalyzing organizational change. In this paper, the author recounts the first year as an internal evaluator for a worldwide science and education program that has been marked by considerable organizational change. Brought in primarily to help design and implement an impact evaluation, the author describes some of the challenges encountered as an internal evaluator and how an EA served as an invaluable tool for surfacing issues, creating dialogue, interrogating entrenched assumptions, and ultimately providing steps for moving forward.

Session Title: Using Values in the Evaluation of Culturally Specific Programs: A New Tool for Assessing the Impact of Programs That Seek to Align the Values, Attitudes, and Beliefs of Participants to Those of Role Models and Mentors
Demonstration Session 270 to be held in Carmel on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG and the International and Cross-cultural Evaluation TIG
Presenter(s):
Trena Anastasia, University of Wyoming, tanastas@uwyo.edu
Rodney Wambeam, University of Wyoming, rodney@uwyo.edu
Abstract: In this demonstration presenters will walk attendees through the steps used to create an evaluation tool that measures participant and mentor values, beliefs, and attitudes. Using a pre/post-test design, researchers analyze results to determine how much program participants move toward sharing the culture of their mentors and role models. Unlike traditional program evaluations that measure change in a predetermined direction defined by researchers or program developers, this technique allows cultural groups and mentoring programs to define positive change through the evaluation process. The strength of this tool lies in its ability to account for cultural differences and the values of program participants and leaders. Potential application includes peer mentoring, tribal, and minority group evaluations whose goal is to improve social outcomes by having youth embrace traditional cultures.

Session Title: Distance Education & Other Educational Technologies TIG Business Meeting
Business Meeting Session 271 to be held in Coronado on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
TIG Leader(s):
Talbot Bielefeldt, International Society for Technology in Education, talbot@iste.org

Session Title: Health Evaluation and Social Media: Evidence and Examples
Expert Lecture Session 272 to be held in El Capitan A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Health Evaluation TIG
Presenter(s):
William Evans, The George Washington University, wdevans@gwu.edu
Abstract: In this presentation we review evidence on social media and health behavior. We explore which types of social media are most utilized by different subgroups, how these different media resonate with consumers, and how consumers interact with the vast array of health messages. We then discuss a case example of how social media may be used as a communication tool to change health behavior and promote health care. Text4baby is a "free-to-end-user" (FTEU) service, utilizing Mobile Health and facilitated by a collaborative partnership between industry and public health practitioners. The aim of this social media intervention is to improve maternal and child health behaviors such as reducing smoking during pregnancy, taking prenatal vitamins, improving nutrition, and obtaining recommended health care. In this session, we report on evaluation methods and early results of a randomized trial of text4baby. We examine the intervention and its theoretical assumptions, and discuss implications for mobile health technologies.

Roundtable: Pulling It All Together: Creating a Strategy to Transform the Complex to the Coherent
Roundtable Presentation 274 to be held in Exec. Board Room on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Mixed Methods Evaluation TIG
Presenter(s):
Kristina Moster, Cincinnati Children's Hospital Medical Center, kristina.moster@cchmc.org
Janet Matulis, Cincinnati Children's Hospital Medical Center, janet.matulis@cchmc.org
Abstract: Analysis has been defined as the process of bringing order to the mass of gathered data (Schatzman and Strauss, 1973; Marshall and Rossman, 1989). As complex as this may be in any given evaluation by any single evaluator, it is even more so in a mixed methods evaluation design involving multiple members of an evaluation team. This roundtable will discuss the process used by our evaluation team to craft and implement a data analysis framework and strategy for a multi-year, multi-method evaluation of a quality improvement training program for senior executives in a healthcare setting. Drawing primarily from the mixed methods and transdisciplinary science literatures, we created a matrix of data sources and potential analyses, which was then used to create a framework for data analysis and synthesis and to guide the staff resources allocated to each. Challenges encountered throughout this process and proposed next steps will be discussed.

Session Title: Introduction to Website Evaluation: What Is It and How To Do It?
Demonstration Session 275 to be held in Huntington A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Andrew Hawkins, ARTD Consultants, andrew.hawkins@artd.com.au
Abstract: Among the tools available to government, the internet provides a powerful means for supporting and, more recently, constituting program delivery. This demonstration provides a starting point for those familiar with evaluating social policy programs but unfamiliar with how to approach an evaluation of a program delivered wholly or partially over the internet. Four basic components of a successful website or online service are addressed: search engine optimisation, accessibility, use, and usability. We provide guidance on how to answer questions such as: Can people find the website? Can people (including those with a disability) access the website? Who is using the website, which parts, in what ways, how frequently, and to what effect? And how could the website be improved to better engage users? The demonstration will explain basic concepts, showcase tools, and provide links and resources for those wishing to commission, or try their hand at, evaluating a website.

Session Title: Addressing the Public Policy Evaluation Imbalance: A Realistic Approach
Panel Session 276 to be held in Huntington B on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Steve Montague, Performance Management Network, steve.montague@pmn.net
Abstract: This panel session makes the case that evaluation in the public policy domain is out of balance and has been largely irrelevant, favoring usually small discretionary expenditures as evaluands over bigger and more important mechanisms of policy. This in turn has created a systemic bias in terms of not just what gets evaluated, but what types of evaluation approaches are deemed acceptable. The panelists will argue that part of the current malaise or imbalance facing evaluation, which dangerously leaves the review field to theoreticians, auditors, and scorecarders, has been self-inflicted, and that it is time for evaluators to drop some of their sacred cows and get into the (public policy) game (Shepherd, 2011). Some of the key 'new' principles to be discussed will be the idea of relevance before rigor; a focus on a few basic issues related to need, success, and cost-effectiveness (alternatives); open discussion of theories of change and implementation; and a realist approach to study design, data collection, knowledge accumulation, and reporting, combined with a high-engagement approach.
Public Policy Evaluation: The Current Imbalance
Robert Shepherd, School of Public Policy and Administration Carleton University, robert_p_shepherd@carleton.ca
This panel session makes the case that evaluation in the public policy domain is out of balance and has been largely irrelevant, favoring usually small discretionary expenditures as evaluands over bigger and more important mechanisms of policy. This in turn has created a systemic bias in terms of not just what gets evaluated, but what types of evaluation approaches are deemed acceptable. The first panelist will argue that there is a malaise and imbalance in the evaluation of public policy. He has authored articles on this subject and will be speaking from the perspective of a former Director of Evaluation in a major regulatory agency, a consultant practitioner, and his current position as Assistant Professor in the School of Public Policy and Administration at Carleton University. Dr. Shepherd will note that part of the current malaise or imbalance facing evaluation, which dangerously leaves the review field to theoreticians, auditors, and scorecarders, has been self-inflicted, and that it is time for evaluators to drop some of their sacred cows and get into the (public policy) game (Shepherd, 2011). He will also contribute to the 'new principles' discussion of the second presenter.
Public Policy Evaluation: Time for New Principles of Practice
Steve Montague, Performance Management Network, steve.montague@pmn.net
This presentation will pick up on the situation of imbalance or malaise laid out by the first presenter and suggest that it is time for new principles of public policy evaluation. The presenter will draw on his experience as a former public sector evaluator and current consultant practitioner to outline new principles going forward. Some of the key 'new' principles to be discussed will include the idea of relevance before rigor; a focus on a few basic issues related to need, success, and cost-effectiveness (alternatives); an emphasis on and open discussion of theories of change as well as theories of implementation; and a realist approach to study design, data collection, knowledge accumulation, and reporting, all conducted using a high-engagement approach.

Session Title: AEA's Journal Editors Discuss Publishing in AEA's Journals
Panel Session 277 to be held in Huntington C on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the AEA Conference Committee
Chair(s):
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Abstract: This session is aimed at those interested in submitting manuscripts for publication in either of AEA's sponsored journals, the American Journal of Evaluation or New Directions for Evaluation. We'll introduce the incoming editor of New Directions for Evaluation, and the journal editors will discuss the scope of each journal, the submission and review processes, and keys for publishing success.
Publishing in the American Journal of Evaluation
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Thomas Schwandt is the Editor of the American Journal of Evaluation
Publishing in New Directions for Evaluation
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
Sandra Mathison is the outgoing Editor of New Directions for Evaluation.

Session Title: Design and Implementation of Large-scale Evaluations: Lessons Relearned
Panel Session 279 to be held in Laguna A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Discussant(s):
Frederick L Newman, Florida International University, newmanf@fiu.edu
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Abstract: Amidst the controversies over methodology across evaluation paradigms, there remains the imperative of conducting good evaluations. The presenters in this session, Len Bickman and Debra Rog, offer lessons learned and relearned in designing and implementing complex, large-scale evaluations.
Implementing a Complex Systems Evaluation
Debra Rog, Westat, debrarog@westat.com
This presentation will describe the evaluation of a comprehensive initiative intended to reform the housing and service delivery systems for homeless families in three counties. Designed as a highly formative evaluation with developmental features, the evaluation of the Gates Foundation Washington Families Fund Systems Initiative is intended to both provide ongoing guidance to the Foundation and assess the Initiative's outcomes at multiple levels. The presentation will review the design and its implementation, including a qualitative longitudinal assessment of each of the three target communities and two comparison communities, case studies of selected organizations, and a family impact study involving a baseline cohort, an intervention cohort, and a constructed comparison sample from state data at both baseline and intervention time frames. Opportunities that have been seized will be highlighted as well as challenges and strategies that have been used to deal with them.
Designing and Implementing a Complex, 28-Site Randomized Cluster Evaluation
Leonard Bickman, Vanderbilt University, leonard.bickman@vanderbilt.edu
Designing a complex multi-site evaluation is difficult enough; implementing it is even more so. Lessons learned will be presented and discussed. The presentation will deal with implementation lessons learned from this and other studies conducted by the author and will describe "salvage" strategies when the inevitable expected but unpredictable problems occur. Discussion will also include approaches that can be taken in grant proposals to ameliorate these predictably unpredictable problems.

Session Title: The Value of Voice: Gaining Access to Marginalized Populations
Demonstration Session 280 to be held in Laguna B on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Disabilities and Other Vulnerable Populations TIG
Presenter(s):
Karen Vocke, Western Michigan University, karen.vocke@wmich.edu
Brooks Applegate, Western Michigan University, brooks.applegate@wmich.edu
Abstract: This demonstration focuses on issues related to evaluating marginalized populations. Known challenges include gaining access and the complicated issues of ethical representation necessary for authentic evaluation. Critical examination shows that when the evaluation process includes members of marginalized populations, results are more tangible, valid, and generalizable, with increased participation of the sample under study. Because the needs of marginalized populations are nuanced and diverse, evaluators must carefully consider the procedures and analyses involving the evaluation participants, especially the need for authentic, and not token, participation. This demonstration offers a protocol for access, collaboration, and evaluation for working with marginalized subpopulations in the K-12 setting, namely children and families of migrant farm workers and students with disabilities. Strategies will be presented for planning evaluations, accessing populations, developing survey instruments, developing a collaborative team, collecting data, and analyzing data. Session participants will receive materials depicting specific strategies and approaches.

Roundtable: Exploring the Concept and Practice of Staged Evaluation as a More Valuable Approach to Evaluating Large, Complex Education Initiatives
Roundtable Presentation 281 to be held in Lido A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Laura Stokes, Inverness Research Inc, lstokes@inverness-research.org
Mark St John, Inverness Research Inc, mstjohn@inverness-research.org
Abstract: The proposition underlying a staged approach to evaluation is that before investing in a full-scale evaluation of a complex, multi-site initiative, it is wise to first conduct a Stage One evaluation. We define a Stage One study as systematic and exploratory ground-truthing. This roundtable will examine that proposition through analysis of a case of a Stage One study of the National Science Foundation's Undergraduate Research Collaborative in Chemistry. After an overview of the Stage One study, the discussion will concentrate on lessons learned for evaluation theory, design, and practice. Implications are related to: a) how a staged approach surfaces the varying and often competing audiences, goals, purposes, values, and criteria that drive evaluation; and b) the advantages and disadvantages of staged evaluation related to funders' valuing of evaluation. Our purpose is to further test our ideas about staged evaluation through lively discussion among members of the field.

Session Title: Do Existing Logic Models for Science and Technology Development Programs Build a Theory of Change?
Expert Lecture Session 283 to be held in Malibu on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Presenter(s):
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Abstract: This expert lecture will review logic models that have been developed for science, research and technology development (RTD) programs in the U.S. and internationally, based on results of a request made to the RTD evaluation community. These various models will be characterized by types of activities, customer/stakeholder groups, and outcomes. The lecture will then examine the extent to which these logic models reflect theories of change and what those are. Where might these logic models inform performance expectations and display the complexity of linking specific R&D to economic and social outcomes? Finally the lecture will outline the lessons that emerge from this review that inform R&D logic modeling, program evaluations, and policy studies. What light, if any, do these logic models shed on a Science of Science Policy roadmap and science and innovation policy study questions?

Session Title: Adapting Program and Evaluation Designs to Conform to Indigenous Culture and Values - Two Examples From the Pacific
Multipaper Session 284 to be held in Manhattan on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Discussant(s):
Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
'Pacific Friendly' Ways to Engage Stakeholders in an Evaluation
Presenter(s):
Faith Mahony, University of Auckland, f.mahony@auckland.ac.nz
Sarah Appleton Dyer, University of Auckland, sk.appleton@auckland.ac.nz
Sarah Andrews, University of Auckland, s.andrews@auckland.ac.nz
Kathryn Cairns, University of Auckland, k.cairns@auckland.ac.nz
Janet Clinton, University of Melbourne, janetclinton@xtra.co.nz
Abstract: While there are various frameworks to guide evaluation practice, not all capture the values of our communities, so adaptation is often necessary. In Auckland, New Zealand, it was found necessary to modify an evaluation framework for public health to ensure the approach was 'Pacific friendly'. This paper will demonstrate how inclusion of the Kakala [1] model aided understanding and engagement for both the evaluator and stakeholders. The paper will also highlight a number of specific approaches that have supported evaluation design, implementation, and feedback, including appropriate communication styles and a participatory approach. The implications of these findings for theory and practice will also be discussed. [1] Konai Thaman, Cultural Considerations in Student Evaluation with Specific References to Pacific Island Countries, keynote address given at the 2006 Australasian Evaluation Society Conference, Perth, Australia, September 4-7, 2006.
Native STAND: The Trials and Tribulations of Adapting and Implementing a Peer Educator Program for Native American Youth in Indian Country
Presenter(s):
Sonal R Doshi, Centers for Disease Control and Prevention, sdoshi@cdc.gov
Mike Smith, Mercer University School of Medicine, smith_mu@mercer.edu
Lori de Ravello, Centers for Disease Control and Prevention, leb8@cdc.gov
Stephanie Craig-Rushing, Northwest Portland Area Indian Health Board, scraig@npaihb.org
Scott Tulloch, Centers for Disease Control and Prevention, sdt2@cdc.gov
Abstract: American Indian/Alaska Native (AI/AN) youth are at substantial risk for unplanned pregnancy and for acquiring STDs and HIV. Compared to U.S. youth of all races/ethnicities, AI/AN youth are more likely to have ever had sex and to have had four or more lifetime sex partners. Very few culturally appropriate and rigorously evaluated sexual risk-reduction programs focused on preventing unwanted pregnancy and STI/HIV infection exist for Native youth. Native STAND is a novel comprehensive adolescent sexual health curriculum adapted from STAND (Students Together Against Negative Decisions), a program developed for rural youth with demonstrated success. Developing and implementing the pilot program involved incorporating AI/AN culture and values to maintain cultural relevance. Native STAND was developed by a workgroup that consisted of AI/AN youth and elders, topic experts, and evaluation experts. This presentation will discuss the adaptation process, pilot implementation issues, and lessons learned during the development of Native STAND.

Session Title: AEA Student Case Competition: Values, Valuing, and Evaluating a Complex Case With the Girls and Boys Clubs
Think Tank Session 286 to be held in Oceanside on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the AEA Conference Committee
Chair(s):
Lyn Shulha, Queen's University, lyn.shulha@queensu.ca
Presenter(s):
Penny Black, University of Wisconsin, pdblack@wisc.edu
Brandi Gilbert, University of Colorado at Boulder, brandi.gilbert@colorado.edu
Jessica Jackson, Claremont Graduate University, jessica.jackson@cgu.edu
Ebun Odeneye, University of Texas, Houston, ebun.o.odeneye@uth.tmc.edu

Session Title: Influence of Evaluator's and Client's Values on Process Evaluation of Low Income Energy Efficiency Program
Multipaper Session 288 to be held in Palos Verdes A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Kara Crohn, Research Into Action, karac@researchintoaction.com
Abstract: This session builds on previous research on the influence of evaluators' principles on resource allocation decisions. The first presentation will describe how that research was used to develop a post-evaluation self-assessment of the influence of the evaluator's values on an evaluation of the California Low Income Energy Efficiency Program (LIEE), now known as the Energy Savings Assistance Program (ESAP), and how the evaluator sought to balance those values with stakeholder needs and contextual factors. The second presenter is a client of the evaluation and is experienced in the evaluation of energy efficiency programs. In addition to offering commentary on the evaluator's self-assessment, she will provide an understanding of utility and stakeholder evaluation needs, important contextual considerations, and a discussion of the values and principles that influence her work at the utility.
A Self-Assessment of the Influence of Evaluator's Values on Process Evaluation of Low Income Energy Efficiency Program
Kara Crohn, Research Into Action, karac@researchintoaction.com
This presentation builds on Crohn's doctoral research, presented at AEA in 2009, on the ways in which evaluators' principles influence resource allocation decisions. The premise is that, as evaluators, we hold certain principles (enacted values) that guide our practice, as do the clients we serve. Where the previous research was based on a case study, this presentation extends the research through a post-evaluation assessment of principles enacted during the process evaluation of the California Low Income Energy Efficiency Program (LIEE), now known as the Energy Savings Assistance Program (ESAP). Crohn will describe the various resources engaged during the evaluation and the sources of influence on the evaluation, and will present a self-assessment of principles applied during the evaluation based on criteria developed for the dissertation research. Discussion will focus on how those principles influenced the evaluation process and the ways Crohn sought to balance the influence of those principles with contextual and stakeholder needs.
Commentary on the Influence of Evaluator's and Client's Values on Process Evaluation of Low Income Energy Efficiency Program
Carol Edwards, Southern California Edison, carol.edwards@sce.com
Edwards holds a unique role as both a person engaged in evaluation activity at Southern California Edison (SCE) and a client of this evaluation. Edwards' presentation will provide context for understanding the needs and demands of evaluation work at an investor-owned utility. She will offer commentary on Crohn's analysis and discuss the interplay between evaluator, client, and stakeholders in the LIEE evaluation. Edwards will also discuss the ways in which her values and principles influence her work on the LIEE program and her evaluation work at SCE more broadly. In her current role at SCE, Edwards draws on her rich background in educational program evaluation and doctoral research that explored the psychological, motivational, and social influences on highly successful and creative people.

Session Title: Evaluation of Pro-equity and Gender Equality Policies and Programs
Panel Session 289 to be held in Palos Verdes B on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
Abstract: A focus on equity in national public policies and programmes has long been a moral imperative. In September 2010, UNICEF launched the publication "Narrowing the Gap," making the argument for why it is important to achieve the MDGs with equity. However, what are the methodological implications of evaluating pro-equity policies and programmes? What evaluation questions and methods are appropriate for evaluating policies and programmes whose objectives are to narrow the gap between the best-off and worst-off populations? What is the role of evaluation in improving policies to reduce inequalities and discrimination? This panel will present recent work led by UNICEF in this field, highlighting the results of a study assessing existing methods for evaluating equity-based interventions in the areas of health, education, early childhood development, social and child protection, HIV/AIDS, and social policy, and will present approaches and methods applicable to low- and middle-income countries. It will also present the work of UN Women and the United Nations Evaluation Group in developing approaches and tools to evaluate the dimensions of gender equality and human rights in policies and programmes, and the implications of their use in evaluation practice. At this panel, books on related topics published by UNICEF in partnership with IDEAS, the World Bank, UNDP, UNIFEM, WFP, ILO, IOCE, and DevInfo will be distributed to participants free of charge.
Evaluating Gender Equality Interventions
Belen Sanz, UN Women, belen.sanz@unwomen.org
Sanz will present the work of UN Women and the United Nations Evaluation Group in developing approaches and tools to evaluate the dimensions of gender equality and human rights in policies and programmes, and the implications of their use in evaluation practice.
Tools and Techniques to Evaluate Pro-equity Interventions
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
Bamberger will discuss the methodological implications of evaluating pro-equity policies and programmes; the evaluation questions and methods appropriate for evaluating policies and programmes whose objectives are to narrow the gap between the best-off and worst-off populations; and the role of evaluation in improving policies to reduce inequalities and discrimination.

Session Title: Introduction to Evaluation and Policy
Expert Lecture Session 290 to be held in Redondo on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Evaluation Policy TIG and the Government Evaluation TIG
Chair(s):
Patrick Grasso, World Bank Group, pgrasso@comcast.net
Presenter(s):
George Grob, Center for Public Program Evaluation, georgefgrob@cs.com
Abstract: Evaluation and public policy are intimately connected. Such connections occur at national, state, and local government levels, and even on the international scene. The interaction moves in two directions: sometimes evaluation affects policies for public programs, and sometimes public policies affect how evaluation is practiced. Either way, the connection is important to evaluators. This session will explain how the public policy process works. It will guide evaluators through the maze of policy processes, such as legislation, regulations, administrative procedures, budgets, reorganizations, and goal setting. It will provide practical advice on how evaluators can become public policy players: how they can influence policies that affect their own profession, and how to get their evaluations noticed and used in the public arena.

Session Title: The Advocacy Progress Planner: A Planning and Evaluation Tool for Advocates and Funders
Demonstration Session 292 to be held in San Clemente on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Lisa Molinaro, The Aspen Institute, lisa.molinaro@aspeninst.org
David Devlin-Foltz, The Aspen Institute, david.devlin-foltz@aspeninst.org
Abstract: The tools used to evaluate service delivery projects are sometimes ill-suited to assessing the impact of policy advocacy. They lack the flexibility advocates need to respond to fast-moving changes in the policy climate. The Advocacy Progress Planner (APP) is a free online tool for advocates and funders who want to monitor and evaluate the impact of their advocacy efforts. While the tool itself is user friendly, the questions it asks challenge users to think strategically about how they might identify and track progress towards indicators of success. The APP allows for ongoing evaluation throughout the course of a project so that users may learn as they go. Presenters from the Advocacy Planning and Evaluation Program at the Aspen Institute will demonstrate the tool, highlight new features from the 2011 re-launch, and discuss examples of how funders and advocates have benefited from incorporating the tool into their work.

Session Title: Issues in Measurement of Adoption and Implementation
Multipaper Session 293 to be held in San Simeon A on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Independent Consulting TIG, the Government Evaluation TIG, and the Human Services Evaluation TIG
Chair(s):
Joanne Basta,  Jbasta Consulting Inc, joannebasta@gmail.com
Evaluation of an Initiative to Reform Government Contracting for Human Services
Presenter(s):
Susan Wolfe, Susan Wolfe and Associates LLC, susan.wolfe@susanwolfeandassociates.net
Kathryn Race, Race & Associates Ltd, krace@raceassociates.com
Abstract: Under a grant from the Wallace Foundation, the Donors Forum developed an initiative to overhaul the government contracting processes for human services in the State of Illinois and the City of Chicago. The initiative is based on a framework of 15 principles in 6 areas and involves 40 practices. The focus of this paper is to describe the methodology used by two independent evaluators who collaborated to evaluate this effort. The methodology included a survey of key government respondents, a modified Delphi process to obtain consensus and further explain survey results, site visits and interviews with key informants, document reviews, and a survey of stakeholders that included human services providers, foundations, and advocacy organizations. Our presentation will discuss the complexities and challenges encountered while conducting the evaluation and our collaborative approach, and will highlight the applicability and generalizability of such a methodology for future evaluations of this nature.
Does What We Value Make a Difference in Our Assessment of Implementation Fidelity?
Presenter(s):
Mary Styers, Magnolia Consulting LLC, mary@magnoliaconsulting.org
Abstract: Evaluators are continually tasked with making value decisions in the course of study design. In our decisions about implementation fidelity, we place value on specific observations (e.g., self-report, trained observer ratings) and measurement indicators (e.g., dosage, environment, observed use). Each value judgment can strongly affect how a study's implementation fidelity is conceptualized and understood. Yet across fields, and within our own, researchers and evaluators tend to hold opposing values in the conceptualization and use of fidelity. As a consequence, studies report differing relationships between fidelity and outcomes, with some finding null effects and others significant ones. Drawing on the literature and on experiences from two elementary reading and mathematics program efficacy studies, this paper explores differences in fidelity measurements and fidelity variables and offers recommendations for measuring fidelity.

Session Title: Evaluation Strategies to Promote Teacher Retention
Multipaper Session 294 to be held in San Simeon B on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Aarti Bellara,  University of South Florida, abellara@usf.edu
Discussant(s):
Sheila Robinson Kohn,  Greece Central School District, sbkohn@rochester.rr.com
Beyond Induction: Articulating the Long-Term Impact of an Induction Program on Beginning Teachers and Mentor Teachers
Presenter(s):
Susan Hanson, New Teacher Center, shanson@newteachercenter.org
Abstract: Long-term impact is an assumed goal of most educational programs, but most evaluation designs do not include data collection beyond short-term outcomes. This paper focuses on the value of collecting data on the long-term outcomes and impact of a teacher induction program on beginning teachers and mentor teachers. The New Teacher Center provides comprehensive induction systems for new teachers to improve their effectiveness and retention, raise student achievement, and encourage leadership. Our induction programs are built around certain beliefs, one of which is that high-quality induction affects teachers and their practice far beyond their first years of teaching. Two studies were conducted to explore long-term outcomes and impact for both beginning teachers and the mentors who received ongoing professional development. The discussion will address the many useful purposes of attending to long-term impact, in addition to providing evidence of positive impact beyond the life of the program.
Development of the Teaching Opinion Survey (TOS): A Screening Tool for Alternative Teacher Preparation Programs
Presenter(s):
Brandi Trevisan, The Findings Group, brandi@thefindingsgroup.com
Shelly Engelman, The Findings Group, shelly@thefindingsgroup.com
Tom McKlin, The Findings Group, tom@thefindingsgroup.com
Abstract: Operation Reboot is an NSF-funded alternative teacher preparation program that prepares former IT workers to become high school computer science educators. Each participant, paired with a classroom teacher, co-teaches high school classes for a school year, forming a partnership that enhances the pedagogical and content knowledge of both members. After the first two cohorts experienced high attrition rates, it became apparent that stronger screening measures and an examination of participant values were necessary during the application process to identify the candidates most likely to become successful classroom teachers. For this reason, we developed the Teaching Opinion Survey (TOS) by drawing from extant measures of dispositions to teach, theory of intelligence, and success case interviews with exemplary high school teachers. The resulting instrument could be used by alternative teacher preparation programs to reduce attrition and select the candidates most likely to succeed.

Roundtable: Building a Scale to Measure Use of Brazilian Higher Education Evaluation Reports
Roundtable Presentation 295 to be held in Santa Barbara on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Evaluation Use TIG
Presenter(s):
Sheizi Freitas, University of Sao Paulo, shecal@usp.br
Abstract: The literature on evaluation use rests on the assumption that, through the use of evaluation activities and reports, institutions can improve their internal processes, make better decisions, better understand themselves, and increase the quality of their programs. Evaluation use has been studied by many researchers in varied contexts for about forty years. Many purposes and ideas have been presented in the most important journals and in many dissertations on program evaluation, concerning definitions of use, types of use, factors affecting use, and even occurrences of non-use or misuse. Yet it is not easy to find research that includes practical tools for measuring use in its multiple forms and intensities (Cousins, March 2011, email conversation). This presentation recounts the conceptual and practical development of a scale to measure evaluation use. The context for use of this scale is an accountability-oriented higher education evaluation system in Brazil.

Session Title: Cleaning Data: It's a Dirty Process, but Someone's Gotta Do It - Tips and Tricks for Making Data Management More Manageable
Demonstration Session 296 to be held in Santa Monica on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Simone Erchov, George Mason University, sfranz1@gmu.edu
Caroline Wiley, University of Arizona, crhummel@email.arizona.edu
Julius Najab, George Mason University, jnajab@gmu.edu
Abstract: Evaluators frequently face diverse data management challenges, from acquisition to validation, cleaning, and analysis. Evaluation results are undoubtedly influenced by the data; well-managed data can therefore improve analyses and the interpretation of results. Data cleaning can be an undesirable and laborious task, so to ease the "pain" we will cover important conceptual considerations and practical, dependable techniques for managing both quantitative and qualitative data with relative ease. Using two common software packages (Excel and SPSS), we will introduce time-saving techniques that ensure data integrity, accuracy, and quality (i.e., methods for data checking and cleaning, preparation for analyses, transformations, and overall data control). We aim to increase comfort with and knowledge of this process by providing novice evaluators with basic data management principles and the flexibility to navigate multiple programs, optimize data performance, and ensure the quality of their results.

Session Title: Using Needs Assessments to Improve Early Stage Project Planning Decisions: Data First, Decisions Later
Panel Session 297 to be held in Sunset on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Needs Assessment TIG
Chair(s):
Maurya West Meiers, World Bank, mwestmeiers@worldbank.org
Abstract: The earliest decisions on projects are among the most critical in determining long-term success. The early decision phase, in which vague concepts are transformed into initial project plans, sets the stage for a variety of actions that will eventually lead (if all goes well) to desirable results. This "front end" of the project life-cycle is, however, often inadequately defined and supported in comparison to the subsequent project management activities. Needs assessments, including a variety of associated tools and techniques, can help provide the strategic linkages necessary for successful decisions during this early phase of any project. In this session the panelists will discuss needs assessments in relation to their essential, and often unique, role in guiding early decisions in project planning. Specific tools for both collecting information and making decisions will be described, and case examples will be discussed.
Mind the Gaps: Needs Assessments and Decision-making
Ryan Watkins, George Washington University, rwatkins@gwu.edu
As a potential project sponsor or other stakeholder, before project proposals are written or possible project teams are fully assembled, the early decisions you make are critical to the success of the project. Making good decisions about what to do is not easy, nor should the decision-making process be minimized or treated as an afterthought. Needs assessments can play an important role in guiding the necessary decisions at the "front end" of any project or program, leading to improved monitoring and evaluation results later on. This presentation will focus on how needs assessments can be used to guide decisions and the important role they play in nearly every project, whether we formally recognize them as needs assessments or not. The panelist, a professor of education and developer of www.needsassessment.org, has a forthcoming book on needs assessment and has conducted needs assessments in organizational, state, national, and international contexts.
Putting Needs Assessment Into Action
Maurya West Meiers, World Bank, mwestmeiers@worldbank.org
This presentation offers practical tips for professionals who want to use needs assessment tools to collect information, make decisions, and achieve results. It will provide both a foundation for applying needs assessment processes to guide early project decisions and a number of useful tools for a variety of situations. Case examples (from international development contexts) will be described, and practical tools (such as focus groups, pair-wise comparisons, and dual-response surveys) will be discussed in the context of needs assessments that guide decisions. These tools have a variety of possible applications for professionals interested in M&E, needs assessment, organizational development, and other related fields. Needs assessment resources (e.g., tip sheets on the tools discussed) that the panelists have developed will be shared and can be found at www.needsassessment.org. This panelist has taught needs assessment in international settings and has a forthcoming book on the topic.

Session Title: Mixed Methods Evaluation Approaches for Educational Reform Initiatives
Multipaper Session 298 to be held in Ventura on Thursday, Nov 3, 10:45 AM to 11:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG and the Mixed Methods Evaluation TIG
Chair(s):
Mika Yamashita,  World Bank, myamashita@worldbank.org
Discussant(s):
Leanne Kallemeyn,  Loyola University, Chicago, lkallemeyn@luc.edu
An Evaluation of Personalized Student Learning Pilot Programs in Sixteen New Jersey Schools
Presenter(s):
Charyl Yarbrough, The Heldrich Center for Workforce Development, cyarbrou@ejb.rutgers.edu
Bill Mabe, The Heldrich Center for Workforce Development, billmabe@ejb.rutgers.edu
Abstract: Today's employers expect high school graduates to enter the workplace with the 21st-century skills needed to compete in a dynamic global market. Unfortunately, America's young people are not meeting employer expectations (The Conference Board Inc., 2010). Concerned policy makers, educators, and parents are engaged in reform initiatives in a collective attempt to improve student engagement and success (Wolf, 2010). Consequently, several states are revisiting traditional education approaches and examining personalized learning models. Few studies have evaluated personalized learning initiatives across a variety of school settings. The New Jersey Department of Education partnered with the Heldrich Center for Workforce Development to conduct a two-year process evaluation of personalized learning programs at sixteen New Jersey schools. We used qualitative methodologies and multi-dimensional survey data to develop composite scores and rank schools' implementation success. This paper is a resource for evaluators seeking approaches for measuring implementation effectiveness across diverse sites.
Using Mixed Methods to Evaluate a Program Designed to Develop Turnaround Leaders in Texas Schools
Presenter(s):
Jacqueline Stillisano, Texas A&M University, jstillisano@tamu.edu
Hersh Waxman, Texas A&M University, hwaxman@tamu.edu
Melanie Woods, Texas A&M University, mnwoods@neo.tamu.edu
Abstract: This paper showcases an evaluation of the Texas Turnaround Leadership Academies, a program established in 29 Texas schools with the goal of developing a cadre of exceptional principals to turn around chronically underperforming schools. Evaluators used a mixed-methods design to examine program factors that encouraged quality professional development for participants, promoted district support for campus leadership, and supported translation of the professional development into effective leadership practices. Each program school was compared to a similar low-performing school matched on variables such as socio-economic status, percent minority, and graduation rates. Survey data were obtained from district and campus educators involved in the project and from the comparison schools. Qualitative data were gathered through in-depth interviews with campus and district leaders, and four schools were chosen as the sites of mini-case studies designed to provide a comprehensive picture of each school's specific experiences and challenges related to implementing a successful turnaround effort.
