Session Title: President Obama's Evaluation Policies
Expert Lecture Session 542 to be held in  Lone Star A on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Presidential Strand and the Government Evaluation TIG
Chair(s):
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Presenter(s):
Invited Speaker
Discussant(s):
Patrick Grasso, World Bank, pgrasso45@comcast.net
Abstract: This session will provide an explanation of and an update on where things stand with the evaluation policies of President Obama’s administration.

Session Title: Examples From the Field: Using Mixed Methodological Frameworks in Theory-Driven Evaluations
Multipaper Session 543 to be held in Lone Star B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Katrina Bledsoe,  Walter R McDonald and Associates Inc, katrina.bledsoe@gmail.com
The Mix of Methods: Towards a Framework to Anticipate Validity Threats When Evaluating Agricultural Value Chain Development Interventions
Presenter(s):
Giel Ton, LEI Wageningen UR, giel.ton@wur.nl
Marieke De Ruijter De Wildt, LEI Wageningen UR, marieke.deruijterdewildt@wur.nl
Abstract: Impact evaluations of value chain interventions are challenging: outcome indicators are often multi-dimensional, impact is generated in dynamic and open systems, and social embeddedness constrains external validity. There is therefore a strong case for theory-based evaluation in which a logic model indicates how the intervention is expected to influence the incentives for people’s behaviour. The key assumptions inherent in these causal models can be tested through the observation and measurement of specific outcome indicators using mixed methods in triangulation. The paper presents a framework for evaluating the design of these mixed methods to assess change and impacts in value chain configurations, using the four types of validity threats to the evaluative conclusion. We apply the framework to three impact assessments in which we have been involved (micro-irrigation technology supply, farmer field schools, and business service development) and reflect on the feasibility of improving their methodological designs.
Team Process Factors in the Evaluation of a Community-Team-based Entrepreneurial Development Initiative
Presenter(s):
Laurie Van Egeren, Michigan State University, vanegere@msu.edu
Meenal Rana, Michigan State University, ranameen@msu.edu
Diane Doberneck, Michigan State University, connordm@msu.edu
Miles McNall, Michigan State University, mcnall@msu.edu
Abstract: Creating Entrepreneurial Communities (CEC) was a one-year program designed to provide community teams (N = 9) with assistance, tools, and resources to develop environments supportive of entrepreneurs. Teams, selected through an application process, were provided with an intensive four-day training on the development of entrepreneurial communities and one year of coaching support. The evaluation of CEC used a mixed-method approach that was based on a combination of Neo-Analytic Induction and Qualitative Comparative Analysis by Hicks (1994) and that permitted a summary of both qualitative and quantitative data. The core of this approach was to identify teams as “sustained” or “not sustained” and examine whether differences were evident between the two groups in a model that predicted that the development of positive team processes mediates between the intervention and success. Using this method, several community, team, and programming factors were identified that distinguished between “sustained” and “not sustained” teams.

Session Title: Youth Participatory Evaluation: Entering the Age of the Internet
Think Tank Session 544 to be held in Lone Star C on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Robert Shumer, University of Minnesota, rshumer@umn.edu
Discussant(s):
Robert Shumer, University of Minnesota, rshumer@umn.edu
Kim Sabo Flores, Evaluation Access and ActKnowledge, kimsabo@aol.com
Abstract: Youth participatory evaluation has been evolving over the past two decades. While the field is still young, the engagement of youth has taken some new twists, especially involving the use of electronic media to facilitate its expansion and improve methodology. In this Think Tank we explore two new efforts to use electronic systems to increase the capacity of programs to engage youth in the evaluation process. One system involves the use of an on-line education/training program to help adult mentors/educators work with youth to develop participatory evaluation projects. The second involves the development of e-Portfolios to capture and evaluate learning and social change enacted through service-learning and civic engagement programs. The audience will have the opportunity to react to both systems and then make recommendations for change/improvement so that youth participatory evaluation, in the electronic age, can be even more effective.
Youth participatory evaluation is a field in the making (Sabo, 2003). Ever since the Wingspread conference on youth participatory evaluation (Checkoway, 2003), more and more people have been engaging youth in the evaluation process. From Youth in Focus in San Francisco to the engagement of youth in critical praxis through youth/community studies in California (Duncan-Andrade and Morrell, 2008), youth are becoming more involved in evaluating the programs and worlds they inhabit. Preparing them to do a solid job is a challenge, especially since resources are so limited. To address this challenge, Kim Sabo Flores has begun a series of on-line seminars to prepare adults to work with youth in various stages of participatory evaluation. She is piloting a series of lessons designed to help adults become true facilitators of evaluation/learning and to implement a reasonable project that demonstrates an understanding of the use of praxis in youth engagement and review. In this Think Tank the audience will have the opportunity to review the sample lessons/program materials and critique the approach and the substance of the program. The goal is to ensure more public input in the development of the lessons, especially from a critical group of evaluators who know and understand youth participatory evaluation.
A second approach will be presented for critique and comment by Rob Shumer, evaluator from the University of Minnesota. Shumer is experimenting with the development of e-Portfolios as a mechanism to document and record student experiences with service-learning from middle school through undergraduate education. Each student will have the use of a personal portfolio, developed by the University of Minnesota, with which to record and organize information about service-learning as a transformative experience. In this part of the Think Tank the audience will have an opportunity both to critique the model for evaluation and to comment on the utility of such an instrument and process for recording the learning and impact of the programs on the individual and the community. It will also provide a source for discussion of the use of e-Portfolios as a large data source for complex learning initiatives such as service-learning and civic engagement. Each of these projects should help to promote the kind of discussion that will expand and improve the delivery of youth participatory evaluation in all settings.
By obtaining public input on the training materials and the e-Portfolio system, the presenters hope to strengthen the field of youth participatory evaluation, making it a more suitable option for the many settings in which youth engage in the evaluation process.

Session Title: Funder’s Use of Network Analysis to Build Intentional Collaboration
Panel Session 545 to be held in Lone Star D on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
David J Dobrowski, First 5 Monterey, dobrowski@gmail.com
Abstract: Once you have network analysis maps, what do you do with them? They are useful tools to create a picture of the nature and depth of relationships. We will share how a funder used network analysis maps with a group of twenty-three funded agencies. The agencies were able to use the maps to make strategic organizational decisions as to how and with whom it made sense to coordinate and collaborate. Techniques for helping grantees read and understand the network maps - as well as facilitating discussions with them - will be shared. At a different level, how the maps were used by the funder to understand shifts in the service delivery system for young children will also be shared.
Funder’s Perspective on Practical Use of Network Analysis Findings With Funded Agencies
David J Dobrowski, First 5 Monterey, dobrowski@gmail.com
David is the Evaluation Officer with First 5 Monterey. He works directly with the nonprofits and agencies funded by First 5 Monterey. He contributes unique insights into the funder's perspective and the practical use of evaluation tools with participants. He presented the network maps and their uses with the funded partners at a learning circle with Raul.
Evaluator’s Perspective on Practical Use of Network Analysis Findings With Funded Agencies
Raul Martinez, Harder+Company, rmartinez@harderco.com
Raul works for Harder+Company, the independent evaluation consulting firm that conducted the network analysis. He offers a unique perspective on the technical approaches and on bridging them with audiences. He presented the network maps with David at a learning circle.

Session Title: Using Latent Class Analysis to Target and Tailor Programs to Specific Populations
Demonstration Session 546 to be held in Lone Star E on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Humphrey Costello, University of Wyoming, humphrey.costello@uwyo.edu
Reese Jenniges, University of Wyoming, jenniges@uwyo.edu
Abstract: In this demonstration, we present an application of Latent Class Analysis (LCA) to adult smokers in Wyoming. LCA is a probabilistic clustering method for identifying unmeasured class membership using both categorical and continuous variables (see Vermunt & Magidson, 2000). We apply LCA to derive a four-class typology of smoking behavior and intention to quit. We then use logistic regression to identify associations between smoking types and demographic variables. Programming and policies may then be tailored to target the needs of each type of smoker. The demonstration introduces the LCA method, discusses appropriate uses, details steps and diagnostics in developing LCA models, and describes how LCA may enhance both program design and program evaluation.
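To make the two-stage analysis concrete, here is a minimal sketch in Python with scikit-learn. Because scikit-learn has no latent class routine for categorical indicators, a Gaussian mixture stands in for the LCA step; the survey indicators, the four-class solution, and the simulated data are hypothetical, not the Wyoming data.

```python
# Hypothetical sketch: probabilistic clustering of smoking-behavior indicators,
# then logistic regression of class membership on demographics.
# A Gaussian mixture stands in for a true latent class model here.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Simulated survey indicators (e.g., cigarettes/day, intention-to-quit scale).
indicators = np.column_stack([
    rng.poisson(12, n),          # cigarettes per day
    rng.integers(1, 6, n),       # intention to quit (1-5)
    rng.integers(0, 2, n),       # past-year quit attempt (0/1)
])
demographics = np.column_stack([
    rng.integers(18, 80, n),     # age
    rng.integers(0, 2, n),       # sex (0/1)
])

# Step 1: fit a 4-class mixture and assign each respondent to a class.
mixture = GaussianMixture(n_components=4, random_state=0).fit(indicators)
classes = mixture.predict(indicators)
print("BIC for 4-class solution:", mixture.bic(indicators))  # model diagnostic

# Step 2: relate class membership to demographics (multiclass logistic regression).
clf = LogisticRegression(max_iter=1000).fit(demographics, classes)
print("Demographic coefficients by class:\n", clf.coef_)
```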

Session Title: Multi-method Approaches
Multipaper Session 547 to be held in Lone Star F on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
John Hitchcock,  Ohio University, hitchcoc@ohio.edu
A Mixed Methods Approach to Measurement for Multi-site Evaluation
Presenter(s):
Carlos Bravo, Evaluation, Management & Training Associates Inc, cbravo@emt.org
Fred Springer, Evaluation, Management & Training Associates Inc, fred@emt.org
Abstract: School climate has become a focal concept for promotion of safer and more productive learning environments, one that the National School Climate Center describes as “patterns of … experiences … reflect(ing) norms, goals, values, interpersonal relationships, teaching, learning and leadership practices, and organizational structures.” Given this multi-dimensional complexity, measuring school climate is a challenge for evaluators. This presentation summarizes a comprehensive review of all school climate measures used in state school surveys regularly administered in the United States, as well as the most prominent research and performance monitoring instruments developed specifically to measure school climate. Survey items are mapped onto a comprehensive model of school climate domains and dimensions, the relative focus on domains and dimensions is identified, and similarities and differences in measurement perspective, format, and psychometric approach are profiled. Implications for the definition of school climate, and for measurement development and use by evaluators and policymakers, are discussed.
Using Mixed Methods to Examine and Interpret the Impact of a National Science Foundation (NSF) Geosciences Multi-Institution, Multi-site Teacher Professional Development Program
Presenter(s):
Susan Henderson, WestEd, shender@wested.org
Dan Mello, WestEd, dmello@wested.org
Abstract: This paper examines the impact of a multi-site teacher professional development program funded by the Geosciences Directorate of the National Science Foundation. Using a mixed methods design, this evaluation of the Transforming Earth Systems Science Education (TESSE) Program examines the program’s impact on teacher content and pedagogical knowledge, as well as teachers’ self-reported knowledge and perceived comfort level in teaching various aspects of earth system science. Using focused case studies combined with multiple regression and pre-test post-test gains, this paper also presents differences in the extent of program impact based on the implementation of the professional development at four distinctly different institutions of higher education. This study offers practical insight to evaluators assisting clients in providing scientifically based evidence for program scale-up in sites with distinctly different missions, distributions of resources, and connections with K-12 institutions.

Roundtable: Controlling Quality Within Museums: Coordinating Internal Evaluation Departments
Roundtable Presentation 548 to be held in MISSION A on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Sarah Cohn, Science Museum of Minnesota, scohn@smm.org
Anna Lindgren-Streicher, Museum of Science, Boston, alstreicher@mos.org
Abstract: While the basic demand on evaluators is to assess the merit and worth of the products a client produces, an internal evaluation department has the added demand of being appropriately structured and positioned to most effectively fit the community and culture of practice at work in its organization. Internal evaluation departments at museums have the added responsibility of answering not only to the informal learning environment in which their projects reside but also to the more widely accepted world of formal education. Inherent in this position is the need for evaluations to be both high quality and flexible, as the needs of the project team, the museum, the community, and the nature of informal learning shift with time. This discussion focuses on how two museums manage and communicate their evaluations to be most effective in both the smaller and larger communities of practice at work.

Session Title: Issues Impacting the Quality of Program Evaluation and Programs Serving Youth and Young Adults With Significant Disabilities
Panel Session 549 to be held in MISSION B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Special Needs Populations TIG
Chair(s):
Michael Du, Human Development Institute, zdu@email.uky.edu
Discussant(s):
Brent Garrett, Pacific Institute for Research and Evaluation (PIRE), bgarrett@pire.org
Abstract: This session will focus on two presentations each of which deals with quality and evaluation in a distinctive manner. One presentation will discuss the issues that arise impacting the quality of evaluation findings in collecting badly needed interview data from youth and young adults with significant cognitive disabilities. It will also provide information on the strategies developed by evaluators to resolve these issues and to obtain interview data that allowed for developing the stories of participants’ experiences in the program in these participants’ own voices. The second presentation will focus on what evaluation findings reveal about a range of challenges which emerged during efforts to administer and implement the program that affected the quality of the program to serve children with significant developmental disabilities in ways consistent with the needs of these children. The purpose of the presentation is to explore what evaluation findings tell us about one program’s efforts to serve children with developmental disabilities using structures and implementation practices used in serving typically developing children and/or children with mild to moderate disabilities. The presentations are related in that each will discuss issues that arise in part due to the need for evaluators to grasp the unique and distinctive lived experiences of groups served by the programs being evaluated. This need is the background against which quality plays out in evaluation practice in one case and in the other case, in insights regarding inconsistencies between program structure and implementation on one hand and participant needs and characteristics on the other.
Interviewing Individuals with Significant Disabilities: Considerations for Quality
Chithra Perumal, Human Development Institute, chithra.perumal@uky.edu
Kayla Davidson, Human Development Institute, 
Very often, evaluations of programs that serve individuals with moderate and significant disabilities do not include the voices of the people who are most impacted by them. Approval from institutional review boards for vulnerable populations can be a tedious and protracted process. Additionally, responses can sometimes be short and may lack the ‘rich’ conversations which contain a wealth of data. In this presentation, the evaluators will discuss their experience in evaluating a program that serves students with developmental and significant disabilities in post-secondary educational settings. More specifically, the presentation will discuss the evaluators’ experience in interviewing individuals with significant disabilities. The presentation will discuss the strategies that they have used to ensure that the ‘voice’ and ‘perspectives’ of the individuals are a key part of the evaluation. The presentation will also include the approaches used in the evaluation to develop stories of the individuals’ experiences in the program based on their voices and the meanings and understandings this experience has for them. Chithra Perumal, lead presenter, has had extensive experience in evaluating programs serving persons with disabilities and in the use of naturalistic qualitative methods in her evaluations. Therefore, she has the capacity to bring to this presentation experience and competencies enabling her to treat the subject matter in a particularly insightful manner.
Challenges to the Quality of Mentoring Programs Serving Children With Significant Developmental Disabilities: What Evaluation Findings Tell Us
Joanne Farley, Human Development Institute, joanne.farley@uky.edu
This presentation will focus on what evaluation findings said about the quality of a program created to provide mentoring services to children with significant developmental disabilities. Mentoring programs have long been found to contribute positively to a range of outcomes for children (e.g., strengthening social skills, improving academic attainment, making healthier decisions, and improving appropriate behavior at school, within the family, and within community settings). However, for a variety of reasons, mentoring programs rarely serve children with significant developmental disabilities. Evaluators at the Human Development Institute, University of Kentucky, had the opportunity of evaluating one mentoring program which decided to create a program component in which community mentors were matched with children with significant developmental disabilities. The evaluation began a year and a half into this program’s implementation and continued over a six-month period. The evaluation relied primarily on analysis of existing data collected by the mentoring program and evaluators’ extensive interviewing of program personnel, mentors in matches with children with developmental disabilities, and caregivers of children with developmental disabilities involved in program mentoring matches. The findings that resulted from evaluation activities identified a number of issues related to program structure and implementation that affected the quality of the program’s ability to serve the needs of children with developmental disabilities, the community mentors, and children’s caregivers. This presentation will discuss those identified issues which have a high probability of confronting the start-up and implementation of other mentoring programs attempting to serve children with significant developmental disabilities. For some of these issues, the presentation will also discuss strategies and practices that other mentoring programs newly serving this population have found effective in resolving or limiting their negative consequences. While this presentation will describe in detail the evaluation design and methods used in this evaluation, it will give predominant attention to the results of the evaluation and what these results had to say about the threats to program quality that emerged in serving children with significant developmental disabilities. Joanne Farley has more than twenty years of experience in evaluating programs serving a range of diverse populations, including groups with significant disabilities. Having conducted formative and summative, process and outcome, as well as comprehensive evaluations of programs, she has developed extensive expertise in evaluating the alignment of program structures and implementation strategies with program performance in meeting the needs of diverse participant groups.

Session Title: The Adaptive Action Cycle: Bridging the Gap Between Lessons Learned and Lessons Applied
Panel Session 550 to be held in BOWIE A on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Systems in Evaluation TIG and the Human Services Evaluation TIG
Chair(s):
Mallary Tytel, Healthy Workplaces, mtytel@healthyworkplaces.com
Discussant(s):
Mallary Tytel, Healthy Workplaces, mtytel@healthyworkplaces.com
Abstract: Human Systems Dynamics helps individuals and organizations see and influence patterns of interaction and behavior that surround them. Using a collection of concepts, processes and tools, practitioners can better understand what is happening in everyday connections. The Adaptive Action Cycle (AAC) is a process which asks three simple questions: What? So What? and Now What? These questions can assist program managers and evaluators in capturing good information about what currently exists, recognizing and shaping patterns and behavior, and allowing them to think about change and intervention in a different way. Lessons learned are only part of the solution. Using a community-based case study, this session will follow a multi-tiered process of data collection, analysis and decision making to demonstrate an approach to bridging the gap between lessons learned and lessons applied.
A Case Study on Adaptive Action in Education
Royce Holladay, Human Systems Dynamics Institute, rholladay@hsdinstitute.org
Knowing that family mental health is critical to children's wellbeing, representatives from Minneapolis Public Schools and Hennepin County received a grant to provide therapy to immigrant families whose children were in their shared systems. At the project's end, they wanted to identify lessons learned for this project and for their partnership. I met with healthcare administrators, project trainers, and psychologists who provided direct services. They used a timeline of events, describing individual and shared experiences in the project. The Adaptive Action Cycle (AAC) questions framed their responses. Ultimately we compiled a picture of the activities, insights, and perceptions of these professionals. The AAC brought coherence to responses across the groups, reducing variability. It also provided a way to see how individual and systemic responses influenced patterns as they completed the project. From these findings, the client has made recommendations about the future of this partnership and the applicability of some funding requirements.
Translating Lessons Learned Into Lessons Applied
Mallary Tytel, Healthy Workplaces, mtytel@healthyworkplaces.com
The field of Human Systems Dynamics uses a collection of concepts and tools to assist practitioners in better understanding what is happening in their everyday situations. One such tool is the Adaptive Action Cycle. By exploring this method, we see how three simple questions help program managers and evaluators capture information, recognize similarities, differences, and relationships across space and time, think about change, and make critical decisions. Similarly, the idea of lessons learned also examines how knowledge emerges from program implementation and evaluation. Stakeholders routinely use lessons learned to identify and measure differences and improvements based upon what they have done. Where a gap commonly remains is in the shift from lessons learned to lessons applied. Using the Adaptive Action Cycle, we will illustrate how smooth, user-friendly, and efficient that transition can be for practitioners and stakeholders alike.

Session Title: Using Evaluation to Enhance Program Participation With Underrepresented Groups in Science, Technology, Engineering and Mathematics (STEM) Fields
Multipaper Session 551 to be held in BOWIE B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Tamara Bertrand Jones,  Florida State University, tbertrand@fsu.edu
Experiences of a Culturally Responsive Evaluator in Analyzing Scientific Self-Efficacy and Scientific Research Proficiency as Benefits of Summer Research Participation for Underrepresented Minorities
Presenter(s):
Frances Carter, University of Maryland, Baltimore County, frances2@umbc.edu
Abstract: Low participation and performance in science, technology, engineering and mathematics (STEM) fields by underrepresented minorities are widely recognized as major problems. While interventions designed to broaden participation report that one of the key program components, participation in undergraduate research opportunities, results in enhanced student outcomes, little is known about what influences these positive relations. In the current study, an evaluator with lived experience as a minority and a scientist brings cultural responsiveness to an underexplored and sensitive topic. The paper analyzes relations among summer research participation, scientific self-efficacy, and scientific research proficiency. The analysis surveys minority students from several STEM intervention programs that offer undergraduate research opportunities. Participants’ responses on scales are analyzed using factor analysis and difference-in-differences regression to estimate the hypothesized relation. Results from this study will enhance understanding of the effects of undergraduate research participation on student outcomes and provide important implications for the science education, evaluation, and policy communities.
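As a rough illustration of the difference-in-differences step mentioned above, here is a minimal sketch in Python with statsmodels; the variable names, simulated scale scores, and effect sizes are hypothetical, not the study's data.

```python
# Hypothetical sketch of a difference-in-differences regression on scale scores:
# scientific self-efficacy regressed on research participation, survey wave, and
# their interaction. All data here are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "participated": rng.integers(0, 2, n),   # summer research participation (0/1)
    "post": rng.integers(0, 2, n),           # pre/post survey wave (0/1)
})
# Simulated self-efficacy scale score with a small participation-by-time effect.
df["self_efficacy"] = (
    3.0 + 0.2 * df["participated"] + 0.1 * df["post"]
    + 0.3 * df["participated"] * df["post"] + rng.normal(0, 0.5, n)
)

# The interaction coefficient is the difference-in-differences estimate.
model = smf.ols("self_efficacy ~ participated * post", data=df).fit()
print(model.summary().tables[1])
```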
Enhancing Minority Representation in the Sciences: Results From the Evaluation of the Minority Opportunities in Research (MORE) Programs at Three Universities
Presenter(s):
Simeon Slovacek, California State University, Los Angeles, sslovac@calstatela.edu
Jonathan Whittinghill, California State University, Los Angeles, jwhittinghill@cslanet.calstatela.edu
Abstract: Despite growth in their share of the overall US population, Hispanics, African Americans, Native Americans, and Pacific Islanders remain severely underrepresented in both science career programs and science careers. The Minority Opportunities in Research (MORE) programs at the National Institutes of Health have since their inception provided funding to universities to implement strategies and interventions to increase the number of underrepresented minorities earning degrees in the sciences and pursuing research careers in the sciences. This study examines the outcomes of students supported by MORE-funded programs at three large minority-serving institutions against those of a comparison group generated through propensity score matching. Results from a pilot study at one of the three universities demonstrated that students in MORE-supported programs graduated with science degrees at a much higher rate and were far more likely to pursue advanced study in the sciences.
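A minimal sketch of the comparison-group construction described above, assuming Python with scikit-learn: a logistic regression estimates each student's propensity to participate, and each participant is matched to the non-participant with the nearest score. The covariates and data are simulated placeholders, not the study's records.

```python
# Hypothetical sketch of propensity score matching to build a comparison group
# for program-supported students. Covariates and data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000
covariates = rng.normal(size=(n, 3))          # e.g., GPA, units completed, age
treated = rng.integers(0, 2, n).astype(bool)  # program participation indicator

# Step 1: estimate each student's propensity to participate from covariates.
propensity = LogisticRegression(max_iter=1000).fit(covariates, treated)
scores = propensity.predict_proba(covariates)[:, 1]

# Step 2: for each treated student, find the untreated student with the
# nearest propensity score (1-to-1 nearest-neighbor matching).
control_scores = scores[~treated].reshape(-1, 1)
nn = NearestNeighbors(n_neighbors=1).fit(control_scores)
_, idx = nn.kneighbors(scores[treated].reshape(-1, 1))
matched_controls = np.flatnonzero(~treated)[idx.ravel()]

print("Matched comparison group size:", matched_controls.size)
```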

Session Title: How Does Evidence Influence Policy Change? Examining Two Complementary Approaches With Two Complementary Evaluations
Panel Session 552 to be held in BOWIE C on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Carlisle Levine, CARE, clevine@care.org
Discussant(s):
Veena Pankaj, Innovation Network, vpankaj@innonet.org
Ehren Reed, Innovation Network, ereed@innonet.org
Abstract: If alleviating global poverty depends on successful pro-poor policies, then CARE, like other international humanitarian organizations, can promote these policies by presenting evidence based on decades of working in more than 60 countries. With Gates Foundation support, CARE is testing this hypothesis via two initiatives. CARE's LIFT UP grant aims to build organizational capacity to more systematically use country-level evidence to influence U.S. policymakers. CARE’s Learning Tours grant provides Members of Congress and influential media and “grasstops” leaders with firsthand experiences aimed at increasing their support for improving maternal health and child nutrition globally. Working with external evaluators Innovation Network (Innonet) and Continuous Progress Strategic Services (CPSS), CARE is assessing the effectiveness of these approaches. Panelists will discuss how to measure the effect of country-based evidence on policy change and highlight how CARE’s overlapping evaluations inform each other’s work and increase CARE's ability to influence policy change.
Using Complementary Evaluations to Assess Policy Change and Build Internal Advocacy Monitoring Capacity
Carlisle Levine, CARE, clevine@care.org
Carlisle Levine (CARE) is leading a process to determine how CARE can leverage its program experience to increase the effectiveness of its advocacy efforts by testing two new and related approaches. To assess these overlapping approaches, CARE and its external evaluators defined measures of change and are now testing these measures. Through the LIFT UP evaluation commissioned from InnoNet, CARE is tracking its internal communication pathways from country level to policy advocacy. CPSS’ evaluation of the Learning Tours project offers an in-depth assessment of CARE’s attempts to increase the capacity and willingness of selected individuals to influence policy change. By determining the value of its investments in advocacy, CARE can adjust as needed to increase effectiveness. We will discuss the ongoing challenges of establishing the contribution of these approaches to advocacy outcomes, as well as the evolving methods CARE and its external evaluators are using to respond to this challenge.
Defining and Evaluating Change Agents/Champions: An Evaluator’s Perspective
Lisa Molinaro, Aspen Institute, lisa.molinaro@aspeninst.org
David Devlin-Foltz, Aspen Institute, david.devlin-foltz@aspeninst.org
David Devlin-Foltz and Lisa Molinaro (CPSS) are responsible for assessing the policy outcomes of CARE’s Learning Tours initiative. Panelists will discuss how we track current or potential champions and influential actors for Maternal, Newborn, and Child Health before, during, and after the Tours. We will address tough methodological questions: Can we define what it means to be a champion or an influential for a given policy change? Can we differentiate between the expectations we should have for a policy champion and an influential? Can we successfully track the progress champions or influential actors have made? Can we help CARE translate these learnings into improved Learning Tours? Our panel will contribute to advocacy evaluation practice by showing how our tools and approaches are evolving. We will update our contribution to the field in defining behavioral measures of what it means to be a policy champion or an influential actor.

Roundtable: Incorporating Gender into Mainstream Projects
Roundtable Presentation 553 to be held in GOLIAD on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Feminist Issues in Evaluation TIG
Presenter(s):
Sathi Dasgupta, SONA Consulting Inc, sathi@sonaconsulting.net
Brian Heilman, International Center for Research on Women, heilman.brian@gmail.com
Ronda Schlangen, Independent Consultant, rschlangen@yahoo.com
Jim Rugh, Independent Consultant, jimrugh@mindspring.com
Pamela Walsh, Eastern Michigan University, walshmgmt@comcast.net
Abstract: As gender issues have been mainstreamed, their inclusion in evaluation has often become an afterthought. In this roundtable, we will share specific examples of ways in which gender can be addressed in evaluations of projects or programs with no particular focus on gender. The presenters come from a variety of backgrounds, both male and female, and will discuss their experiences in US and international contexts. Topics will include engaging men as allies in gender analysis, safety and ethical concerns in researching gender-based violence, gender and labor, and more! The discussion will focus on the pros and cons of concrete methods and instruments which might enable a stronger gender analysis in an evaluation.

Roundtable: Utilizing Metaevaluation to Validate Evaluation Quality: Study of a Grant-Funded Graduate Program for Minority Group Students in Science, Technology, Engineering, and Mathematics (STEM)
Roundtable Presentation 554 to be held in SAN JACINTO on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Research on Evaluation TIG
Presenter(s):
Angela Watson, Oklahoma State University, angela.watson@okstate.edu
Katye Perry, Oklahoma State University, katye.perry@okstate.edu
Abstract: The purpose of this session is to present the results of a metaevaluation of a project completed by the presenters within a university setting. More specifically, the authors utilized a discrepancy evaluation model to initially evaluate the program as they also adhered to the Program Evaluation Standards (Joint Committee, 1994). In keeping with the theme of this year’s conference, the authors will present a summary of the processes engaged in during the evaluation followed by cross-validations of these processes against the validity standards advanced by House (1980). In doing so, resulting analyses will help the presenters determine how well the evaluation of the project evidenced “truth” (House, 1980, p. 88), “aesthetic principles” (1980, p. 106) and “justice” (1980, p. 135). House, E. R. (1980). Evaluating with validity. Beverly Hills, CA: Sage. Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards (2nd ed.). Thousand Oaks, CA: Sage.

Session Title: Crossing Barriers: Engaging Faculty, Staff, and Students Through Online Course Evaluations
Panel Session 555 to be held in TRAVIS A on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Karissa Oien, California Lutheran University, koien@callutheran.edu
Abstract: California Lutheran University has been conducting online course evaluations for the past two years. This session will focus on the strategies used to transition to the online system, mainly through developing an understanding of the process between students, faculty, and staff. Connections between groups were developed through successful student marketing campaigns, cross-committee faculty meetings, a faculty survey, a staff workshop, and the creation of our diverse CoursEval Team. This session will discuss how these strategies created campus-wide awareness of the importance of online course evaluations.
Crossing Barriers: Engaging Faculty, Staff, and Students Through Online Course Evaluations
Karissa Oien, California Lutheran University, koien@callutheran.edu
This presentation will focus on the strategies California Lutheran University used to transition to online course evaluations. Once we switched to an online system, it became evident that students and faculty did not fully understand the course evaluation process and the use and importance of course evaluations. Evaluation quality was developed through successful student marketing campaigns, cross-committee faculty meetings, a faculty survey, and the creation of our diverse CoursEval Team.
Crossing Barriers: Engaging Faculty, Staff, and Students Through Online Course Evaluations
Melinda Wright, California Lutheran University, mjwright@callutheran.edu
This presentation will focus on the history of the paper and pencil course evaluations at California Lutheran University and how transitioning to online course evaluations has initiated more staff involvement. We have learned it is important to build connections with our staff members on campus due to their interactions with both students and faculty. Evaluation quality was developed through information sessions, workshops, and a course evaluation website.

Session Title: Using Cost --> Procedure --> Process --> Outcome Analysis (CPPOA) Data to Improve Substance Abuse Prevention Programs and Portfolios
Panel Session 556 to be held in TRAVIS B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Michael Langer, State of Washington, langeme@dshs.wa.gov
Discussant(s):
Beverlie Fallik, United States Department of Health and Human Services, beverlie.fallik@samhsa.hhs.gov
CPPOA of a Fourth Grade Intervention for Preventing Use of Alcohol, Tobacco, and Other Drugs
Brian Yates, American University, brian.yates@mac.com
We found that a three-year program for preventing substance abuse in fourth-graders actually increased subsequent reports of willingness to use gateway drugs, and actual use of some of these drugs. Because we conducted a Cost → Procedure → Process → Outcome Analysis (CPPOA), rather than a simpler cost-effectiveness analysis, we could say with reasonable confidence both a) which procedures were responsible for these iatrogenic outcomes, and b) which clients were most harmed by the procedures. These findings led to specific suggestions about how to modify the program to remedy its iatrogenic effects. Data were collected from 64 youth participating in the prevention program and 64 control youth at the beginning of three consecutive school years. Iterative multiple regressions show that willingness to use both gateway drugs and alcohol, tobacco, and other drugs (ATODs), and actual use of ATODs, were negatively related to changes in social responsibility.
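As a rough illustration of one such regression, here is a minimal sketch assuming Python with statsmodels; the sample sizes echo the abstract, but the variables, coefficients, and simulated data are hypothetical.

```python
# Hypothetical sketch of one step in the kind of multiple regression a CPPOA
# uses: regressing reported willingness to use gateway drugs on program exposure
# and change in social responsibility. All variables here are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 128  # 64 program youth + 64 control youth
program = np.repeat([1, 0], 64)                 # prevention program vs. control
delta_responsibility = rng.normal(0, 1, n)      # change in social responsibility
willingness = 2.0 + 0.3 * program - 0.4 * delta_responsibility + rng.normal(0, 1, n)

X = sm.add_constant(np.column_stack([program, delta_responsibility]))
fit = sm.OLS(willingness, X).fit()
print(fit.params)  # negative coefficient on the responsibility-change term
```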
Secure CPPO Data Sharing Techniques that Efficiently Support Analysis of Substance Abuse Prevention Policies and Portfolios
Ron Visscher, Aquinas College, ron.visscher@gmail.com
Techniques to securely and flexibly network cost and other data among substance abuse prevention and treatment efforts are first described. It is shown how the techniques enable secure recombination and reuse of matrices of data among multiple organizations and individuals, thus enabling efficient multi-perspective Cost → Procedure → Process → Outcome Analysis (CPPOA). As a result of participation in such data sharing networks, each individual person, community, funder or other entity is able to better inform their own portfolio of policies and strategies. Both efficiency and security are enhanced as a result of the way the distribution of the data and the tasks are coordinated among the related constituent groups involved. The techniques can be used to support the analysis of “reach” and other aspects of program efficiency within target communities, while also informing the selection of the most effective complement of programs for coverage of substance abuse prevention and treatment needs.

Session Title: Communication: At What Level Does It Help or Hinder Evaluation Capacity?
Multipaper Session 557 to be held in TRAVIS C on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Eric Barela,  Partners in School Innovation, ebarela@partnersinschools.org
Discussant(s):
Susan Parker,  Clear Thinking Communications, susan@clearthinkingcommunications.com
Communicating Versus Real Work? The Communication Burden on Volunteer Board Members of Growing Evaluation Organizations
Presenter(s):
Benita Williams, Feedback Research & Analytics, bvanwyk@feedbackra.co.za
Abstract: Evaluation associations are potentially important hubs for delivering on evaluation capacity building. Most evaluation associations rely on volunteer board members to provide oversight and operational management of the organizations, at least initially (Segone & Ocampo, 2007). If evaluation associations are to grow successfully, it is imperative that association boards are effective in applying the available time of their volunteer board members (Herman & Renz, 1999). One strategy is to limit the number of projects that the organization engages in; another is to involve the organization’s membership in some of the projects through building strong committees. However, an often overlooked aspect of board participation is the burden of attending board meetings and reading and responding to board email communication (Weare, Loges & Oztas, 2007). This paper provides a quantitative overview of the email and meeting communication of two fledgling evaluation associations in Africa. It considers existing literature about optimising volunteer board effectiveness, and provides some recommendations.
When a Program Evaluation Goes to a Small Town: Unique Opportunities in Communication, Accessibility, and Decision-Making
Presenter(s):
Ann G Bessell, University of Miami, agbessell@miami.edu
Valentina I Kloosterman, University of Miami, vkloosterman@miami.edu
Sabrina Sembiante, University of Miami, s.sembiante@umiami.edu
Abstract: Supporting poor, ethnically diverse at-risk students when they first arrive in kindergarten, before they fail, may be the best possible investment for our society. Hence, this two-year mixed-method program evaluation of a Kindergarten Support (K-Support) program implemented in four charter elementary schools located in a small central Florida town is the focus of this session. Both the program and its evaluation have distinctive characteristics. The K-Support educational program is innovative and one-of-a-kind; an intensive two-year program serving at-risk students in language and literacy. The evaluation enjoyed direct communication with and accessibility to the private funder and the highest levels in the educational hierarchy, including the superintendent and the charter school board. This resulted in efficient and non-bureaucratic decision-making that facilitated the evaluation process. We will describe the evaluation design, main findings and implications, and discuss the uniqueness of the communication and dynamic between the evaluators and the key stakeholders.

Session Title: Is Quality Improvement In Healthcare Cost-Effective?
Expert Lecture Session 558 to be held in  TRAVIS D on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Mary Gutmann, EnCompass LLC, mgutmann@encompassworld.com
Presenter(s):
Edward Broughton, University Research Company LLC, ebroughton@urc-chs.com
Abstract: Many health interventions have been shown to be cost-effective when implemented to evidence-based quality standards. However, several studies show that health care provided for much of the world’s population fails to meet such standards. Quality improvement (QI) interventions can overcome common obstacles to providing high quality care, even in situations where resources are scarce and health systems are weak. Yet many decision makers are skeptical of such interventions. Therefore, it is crucially important that an economic case can be made for QI. Using examples from US and international health care settings, this lecture discusses methods of cost-effectiveness analysis for QI programs: why they are done, how they are performed, and how to interpret their results. This information is crucial to anyone interested in understanding and performing full evaluations of QI programs to make a business case for working towards improvements in the quality of health care.

Session Title: None of the Above: Expanding Binary Categorizations
Multipaper Session 559 to be held in INDEPENDENCE on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
Chair(s):
John T Daws,  University of Arizona, johndaws@email.arizona.edu
Discussant(s):
John T Daws,  University of Arizona, johndaws@email.arizona.edu
Beyond the Binary: Expanding Our Categories of Gender Identity and Sexual Orientation
Presenter(s):
Linda Drach, Oregon Public Health Division, linda.drach@state.or.us
Kari Greene, Oregon Public Health Division, kari.greene@state.or.us
Abstract: Often, the complex worlds of gender and sexuality are measured by a single item, with consequent assumptions made. We explore the expansion of these categories in Speak Out 2009, a survey of 843 sexual and/or gender minority individuals in the Portland, Oregon metropolitan area. Speak Out offered 7 choices for sexual orientation, including lesbian, gay, bisexual, and queer, and 7 choices for gender, including transgender, intersex, and genderqueer. The final Speak Out sample was notable because 6% identified as transgender, 7% identified as genderqueer, and, of those, all but one also identified as a sexual minority. The relatively large subsamples allowed us to examine the interaction of self-identified gender and sexual orientation across a number of health behaviors, health outcomes and related factors, as well as to explore multiple ways to categorize both gender and sexual orientation. Measurement issues and implications for evaluation practice will be discussed.
Queering/ Querying Evaluation: Moving Beyond Political Correctness and the Binary State of Mind
Presenter(s):
Denice Cassaro, Cornell University, dac11@cornell.edu
Abstract: How to use evaluation and the evaluation process as a way to open dialogue and educate around issues of sexuality, sex and gender will be explored. I will illustrate how incorporating key concepts from queer, feminist, and critical race theories can provide an educational component (and maybe even a little subversiveness) into the evaluation process allowing for understandings of identities that go beyond binaries. The hope is to further efforts towards social justice with evaluation practice serving as a powerful medium.

Session Title: Technology and Student Outcomes: Mathematics, Language Arts, and Big-District Diversity
Multipaper Session 560 to be held in PRESIDIO A on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Talbot Bielefeldt,  International Society for Technology in Education, talbot@iste.org
Classroom Network Technology as a Support for Systemic Mathematics Reform: Examining the Effects of Texas Instruments' MathForward Program on Student Achievement in a Large, Diverse District
Presenter(s):
Corinne Singleton, SRI International, corinne.singleton@sri.com
William R Penuel, SRI International, william.penuel@sri.com
Abstract: In this paper we present an evaluation of the Texas Instruments MathForward program in its third year of implementation at middle schools in Richardson, Texas. MathForward is a systemic reform initiative in which teachers integrate classroom network technologies into their mathematics instruction. The research uses a pre-post nonequivalent comparison group design to analyze effects of MathForward on student achievement; it also includes measures to address implementation fidelity. The findings reveal that the MathForward program was implemented with acceptable fidelity to the model, was strongly supported by district officials, and was associated with significant gains in student achievement. These results indicate the promise of the MathForward intervention for high-implementing classrooms and suggest the readiness of such interventions to be studied at scale, using random assignment and aligned assessments.
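As a rough illustration of the pre-post nonequivalent comparison group analysis described above, here is a minimal ANCOVA-style sketch assuming Python with statsmodels; the variable names, simulated scores, and effect sizes are hypothetical rather than the study's data.

```python
# Hypothetical sketch of a pre-post nonequivalent comparison group analysis:
# posttest achievement regressed on pretest and program participation
# (an ANCOVA-style adjustment). Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 600
df = pd.DataFrame({
    "program": rng.integers(0, 2, n),   # program vs. comparison classroom
    "pretest": rng.normal(50, 10, n),   # prior-year math scale score
})
df["posttest"] = (
    5 + 0.9 * df["pretest"] + 3.0 * df["program"] + rng.normal(0, 5, n)
)

# The program coefficient is the pretest-adjusted estimate of the program effect.
model = smf.ols("posttest ~ pretest + program", data=df).fit()
print(model.params)
```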
Effects of Fast ForWord Language Computer-based Training Program on Student Performance in a Large Central Florida School District
Presenter(s):
Yakup Bilgili, Polk County Public Schools, yakup.bilgili@polk-fl.net
Abstract: This study examines the effects of the Fast ForWord Language computer-based training program, developed by Scientific Learning Corporation, on student performance in one central Florida school district. The primary purpose of this evaluation is to assess whether, and to what extent, participation in the Fast ForWord Language computer-based intervention program produces a positive impact on targeted students’ reading achievement. Students in kindergarten through high school, most of whom were performing well below grade level, were scheduled into the supplemental Fast ForWord (FFW) program for reading instruction during the current school year (2009-10). Data will be gathered from more than 100 PreK-12 district schools. Records for all students participating in the FFW intervention program from August 2009 through May 2010 will be used in the analysis. The goal is to determine whether the progress made by students over the course of one school year produces a discernible value-added effect over the typical growth patterns of students at similar levels of performance who had not experienced the same program intervention.

Session Title: Herding Cats: Improving the Quality and Quantity of Decentralized Evaluation in a Global Organization Through Capacity Building
Demonstration Session 561 to be held in PRESIDIO B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Thea C Bruhn, United States Department of State, bruhntc@state.gov
Abstract: With 38 bureaus and over 240 embassies and missions around the world, the U.S. Department of State (State) does not lend itself to a one-size-fits-all approach to evaluation. At the same time, State faces an increasing demand for credible evidence of the impact of the U.S. Government’s foreign policy activities. In this session, participants will better understand the importance of an integrated model of capacity building to ensure that very disparate approaches to program evaluation on a global scale meet standards for quality and are responsive to the agency’s needs. Participants will see how such a model at State better enables evaluation to:
• Improve effectiveness in achieving U.S. foreign policy goals;
• Document project accomplishments and achieved outcomes;
• Integrate senior leadership priorities;
• Demonstrate “value for money;” and
• Help coordinate and focus strategic planning to ensure accountability and transparency.

Session Title: Alcohol, Drug Abuse and Mental Health TIG Business Meeting
Business Meeting Session 562 to be held in PRESIDIO C on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
TIG Leader(s):
Marge Cawley, National Development and Research Institutes (NDRI), cawley@ndri-nc.org
Diana Seybolt, University of Maryland, Baltimore, dseybolt@psych.maryland.edu

Roundtable: Assessing Board Performance: Challenges and Constraints
Roundtable Presentation 563 to be held in BONHAM A on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Business and Industry TIG
Presenter(s):
Zita Unger, Evaluation Solutions Pty Ltd, zitau@evaluationsolutions.com
Abstract: Evaluation of board performance has increased considerably in recent years. Since the collapse of high-profile corporations such as Enron, Tyco, and WorldCom, more rigorous forms of accountability and compliance have become standard for public company boards and more commonplace for boards in the public, private, non-profit, and for-profit sectors. Whilst the various sectors operate within their own regulatory systems and contexts, all effective boards demonstrate a balance of skills, behaviors, relationships, and diversity, as well as structures and processes. Evaluation plays an important role in contributing to their quality and continuous improvement. The intent of this roundtable is to discuss strategic issues for evaluation in the governance space. For example: what are the key questions for evaluation of board effectiveness? What are optimal conformance and performance measures? What are important human capital issues? Who should evaluate the board? Is diversity the melting pot of success?

Session Title: Low-cost, High-Quality Assessments for Nonprofit Adolescent Pregnancy Prevention Program Planning
Panel Session 564 to be held in BONHAM B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Shannon Flynn, South Carolina Campaign to Prevent Teen Pregnancy, sflynn@teenpregnancysc.org
Abstract: This panel will focus on the challenge of conducting evaluations in settings with limited resources, such as nonprofits. Solid assessment data are required to design effective interventions, but collecting data may seem too costly and cumbersome with shrinking evaluation budgets. Recently, the South Carolina Campaign to Prevent Teen Pregnancy (Campaign) conducted two evaluations assessing environmental structures that may impede or promote contraceptive use among 18- to 19-year-old youth; both yielded valuable results for program planning while requiring limited organizational resources: staff, time, and money. The first evaluation examines the availability of sexual health services on college campuses, and the second illustrates the experience of adolescents who purchase condoms. Successes and challenges with the methods will be described. The Campaign is a 15-year-old nonprofit that prevents teen pregnancy by building the capacity of organizations and communities through education, technical assistance, public awareness, advocacy, and research.
Bread, Milk, Condoms: Using Low-cost Strategies to Assess Youth Experiences with Condom Purchasing in Two Communities
Shannon Flynn, South Carolina Campaign to Prevent Teen Pregnancy, sflynn@teenpregnancysc.org
Sarah Kershner, South Carolina Campaign to Prevent Teen Pregnancy, skershner@teenpregnancysc.org
Dana Becker, South Carolina Campaign to Prevent Teen Pregnancy, 
Using a qualitative and quantitative survey instrument*, a semi-structured interview process, and a follow-up survey, the South Carolina Campaign to Prevent Teen Pregnancy (Campaign) assessed youth experiences of purchasing condoms at 92 retail stores using a cost-effective evaluation design. This presentation will describe the methods used, the low-cost strategies employed, and the strengths and weaknesses of the evaluation design and the data gathered. Community adolescents were partnered with Campaign staff and community volunteers to shop for condoms and participate in a survey covering the ease of finding condoms, perceived attitudes of store staff, the variety of condoms available, and other dimensions of the shopping experience. Community members and adolescents then reviewed and commented on the results. Findings will illustrate the possibility of gathering data to inform program development using cost-effective strategies, and challenges will be highlighted. *An instrument created by Philliber Research Associates served as the foundation for the survey tool used in this project.
Older Youth Pregnancy Prevention: Using Low-cost Methods to Assess Sexual Health Services in Institutions of Higher Learning and Identify Outreach Opportunities
Sarah Kershner, South Carolina Campaign to Prevent Teen Pregnancy, skershner@teenpregnancysc.org
Shannon Flynn, South Carolina Campaign to Prevent Teen Pregnancy, sflynn@teenpregnancysc.org
Mary Prince, South Carolina Campaign to Prevent Teen Pregnancy, mprince@teenpregnancysc.org
Despite the economic climate, high-quality evaluation is vital to assess resources and design effective strategies. As part of a larger project to understand risk and protective factors for pregnancy among 18-19 year olds, the SC Campaign to Prevent Teen Pregnancy (Campaign) surveyed colleges to assess the extent to which sexual health information and services were provided on campus. This presentation will focus on the cost-effective methods used to conduct this assessment, including online survey tools, relationship building, and successful follow-up techniques that yielded an 80% response rate and greatly increased the value of the data. In addition to the assessment of colleges, the Campaign used integrated administrative data from multiple sources (social service system, Medicaid billing records, and juvenile justice) to identify potential points of intervention for pregnancy prevention beyond colleges. Recommendations for getting the “best bang for your evaluation buck” will be discussed, as well as lessons learned.

Session Title: Is Working Together Worth It? The Process and Findings of a Longitudinal Evaluation of a Districtwide Professional Learning Community Initiative
Expert Lecture Session 565 to be held in  BONHAM C on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Rebecca Gajda Woodland, University of Massachusetts, Amherst, rebecca.gajda@educ.umass.edu
Presenter(s):
Rebecca Gajda Woodland, University of Massachusetts, Amherst, rebecca.gajda@educ.umass.edu
Discussant(s):
Mark Zito, East Hartford Public School District, zito.mf@easthartford.org
Abstract: In this session, Dr. Gajda, a former secondary school teacher and administrator, will present the Teacher Collaboration Improvement Framework (Gajda, 2008; Gajda & Koliba, 2007; Koliba & Gajda, 2009), a field-tested framework for systematically evaluating the quality and improving the performance of teacher collaboration in K-12 school districts. This framework has been used to formatively and summatively assess the attributes and achievements of a three-year professional learning community initiative in one New England school district. Evaluation methods included on-site observation of teacher teams, interviews with district administrators and teachers, and a comprehensive district-wide annual survey. Findings of the evaluation, including the relationship between the quality of teacher collaboration, improvements in instruction, and advances in student learning, will be showcased. In addition, participants will learn how district personnel have used the process and findings of the evaluation to make decisions about how to improve supervision of teacher collaboration and professional development.

Session Title: Evaluating Education Programs for English Language Learners
Multipaper Session 566 to be held in BONHAM D on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Courtney Brown,  Indiana University, coubrown@indiana.edu
Discussant(s):
Julie Sugarman,  Center for Applied Linguistics,  julie@cal.org
Multi-dimensional Evaluation of School-wide Intervention for English Language Learners
Presenter(s):
Ginger Gossman, Austin Independent School District, g.l.gossman@gmail.com
Abstract: Austin Independent School District partnered with WestEd to implement a program called Quality Teaching for English Learners as a school-wide intervention to improve ELL outcomes. An element of this work was the development of capacity to continue implementation after the collaboration with WestEd ended; select teachers were apprenticed in techniques reliant on scaffolding and student engagement. To ensure balanced results, the evaluation focused on both formative and summative outcomes and was multidimensional. Results from the target school were compared to district and control site data. Formative outcomes were measured using administrative records, self-report, and focus group data. Summative outcomes included student performance on the Texas Assessment of Knowledge and Skills (TAKS) test, attendance, and discipline data. To ensure specificity, these analyses were conducted by grade level and subject. Results were mixed overall. However, students who began at the target school as freshmen during year 1 of the program demonstrated improved academic outcomes.
Comparative Effectiveness of the Rosetta Stone Dynamic Immersion Program: A Report of a Randomized Experiment
Presenter(s):
Sara Atienza, Empirical Education Inc, satienza@empiricaleducation.com
Sandy Philipose, Empirical Education Inc, sphilipose@empiricaleducation.com
Xiaohui Zheng, Empirical Education Inc, xzheng@empiricaleducation.com
Denis Newman, Empirical Education Inc, dn@empiricaleducation.com
Abstract: This study tests the effectiveness of the Rosetta Stone Dynamic Immersion (RSDI) program, a web-based English Language Development (ELD) program for English Language Learners (ELLs). Interactive multimedia technology is combined with the voices of native speakers, written text, and real-life images to teach new words and grammar inductively. RSDI was placed in a school district with a substantial ELL student population and given to ELL students in grades 3-5. The experiment compares English Language Proficiency scores between students who used RSDI and students who continued to use the district’s existing instructional materials. Factors such as student mobility, technology issues, and insufficient instruction time limited the students’ exposure to the intervention; therefore, the Complier Average Causal Effect (CACE) statistical method is used to estimate the impact of the intervention.
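For readers unfamiliar with the estimator named above, a standard textbook formulation of the CACE rescales the intention-to-treat (ITT) effect by the compliance rate; the version below is a generic sketch under the usual assumptions of random assignment, one-sided noncompliance, and the exclusion restriction, not a detail reported in this abstract:

\[
\widehat{\mathrm{CACE}} \;=\; \frac{\widehat{\mathrm{ITT}}}{\hat{p}_{c}}
\;=\; \frac{\bar{Y}_{\text{assigned to RSDI}} - \bar{Y}_{\text{control}}}{\widehat{\Pr}(\text{received adequate exposure}\mid\text{assigned to RSDI})}
\]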

Session Title: From Research to Commercialization: Impact Evaluation of Portfolios of Research
Multipaper Session 567 to be held in BONHAM E on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Israel Lederhendler,  National Institutes of Health, lederhei@od.nih.gov
Tracing From Applied Research Programs to Downstream Applications: Approach and Findings
Presenter(s):
Rosalie Ruegg, TIA Consulting Inc, ruegg@ec.rr.com
Patrick Thomas, 1790 Analytics LLC, pthomas@1790analytics.com
Abstract: The use of historical tracing to investigate knowledge creation and dissemination has received recent attention from the U.S. Department of Energy in a set of five studies of renewable energy and energy efficiency research and development programs. The purpose of the studies was to assess the existence and strength of evidence connecting program knowledge outputs to downstream commercial outcomes both within and outside the industry of program focus. The approach starts with program strategies and activities, identifies the program's principal knowledge outputs, and documents paths of knowledge flow using multiple evaluation techniques: patent and publication citation analysis, publication co-author analysis, document and database review, a review of licensing, and interviews with experts, thus providing a fuller assessment of linkages than could be accomplished using a single technique. The program areas examined using this approach include wind energy, solar photovoltaic energy, geothermal energy, vehicle energy storage, and vehicle advanced combustion research.
Evaluating Ohio's Portfolio of Technology Programs
Presenter(s):
David Cheney, SRI International, david.cheney@sri.com
Jennifer Ozawa, SRI International, jennifer.ozawa@sri.com
Chris Ordowich, SRI International, christopher.ordowich@sri.com
Abstract: The Ohio Third Frontier Program, the Thomas Edison Program, and the Ohio Venture Capital Authority constitute a comprehensive set of state technology and financing programs that span the technology commercialization continuum, from research and idea creation through market entry to the competitiveness of mature companies through product innovation. In 2008, the Ohio Department of Development, which oversees the state’s technology programs, asked SRI to conduct a rigorous and credible assessment of the impacts of its key technology programs on Ohio’s current economy, as well as future indicators of impact. The quantitative and qualitative results of this study formed the empirical basis for strong bipartisan support to place an initiative to renew and expand the $1.4 billion, 10-year Third Frontier Program on the May 2010 ballot. This paper presents SRI’s methodology for determining the quantitative and qualitative impacts of the programs and discusses the strengths, weaknesses, and lessons learned from the approach.

Session Title: Evaluating Government Research and Technology Policies: Traditional and Emerging Methods
Multipaper Session 569 to be held in Texas D on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Cheryl Oros,  Oros Consulting LLC, cheryl.oros@gmail.com
Quality Evaluations of Government Policies in Research Science and Technology Sector
Presenter(s):
Yelena Thomas, Ministry of Research Science and Technology, yelena.thomas@morst.govt.nz
Abstract: Evaluation quality is a topic of interest for many. There are many approaches and arguments in favour of one method or another. This presentation describes the mixed-method approach and explains why it has been the most successful approach for evaluating government policies in New Zealand. The mixed-method approach uncovers the multifaceted interventions of any public policy and shows the impacts on different user groups. It also provides cost-effective and comprehensive impact evaluations. There are, of course, challenges with this approach. This presentation discusses the challenges the author has encountered when implementing the approach and the risk mitigation strategies employed. The presentation also describes how the New Zealand context compares to other countries and discusses whether the same approach would work elsewhere.
Applications of Agent-based Simulations in Evaluating Science and Technology Policies
Presenter(s):
Branco Ponomariov, University of Texas, San Antonio, branco.ponomariov@utsa.edu
Abstract: This paper reviews, and applies to the example of cross-sectoral research in nanotechnology, the use of agent-based simulation methods for evaluating S&T policy questions, such as the effect of different organizational forms and constraints on collaboration patterns. The paper uses findings from the literature on nanotechnology to program a simulation of the behaviors of scientists and institutions entering the field over time. The results from a variety of simulation scenarios will be compared with the empirically observed network structures. The emphasis of the paper is on showing how simulation techniques are a powerful complement to conventional approaches for estimating the effects of key variables and policy interventions on behavior. Using such findings to program the “decision rules” under which “agents” operate in a collaboration network allows robust predictions about the likely outcomes of policies aimed at influencing the pattern of S&T collaboration.
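To make concrete what an agent-based simulation of collaboration-network formation can look like, the following is a minimal illustrative sketch, not the authors' model: the entry rates, the preferential-attachment decision rule, and all parameter values are hypothetical assumptions introduced purely for illustration.

# Illustrative sketch only: a toy agent-based model of collaboration-network
# formation. The decision rule and parameters are hypothetical, not those
# derived from the nanotechnology literature in the paper.
import random
from collections import defaultdict

def run_simulation(n_steps=50, entrants_per_step=5, links_per_entrant=2, seed=42):
    rng = random.Random(seed)
    collaborations = defaultdict(set)   # scientist id -> set of collaborators
    scientists = []                     # ids of scientists already in the field

    for _ in range(n_steps):
        for _ in range(entrants_per_step):
            new_id = len(scientists)
            scientists.append(new_id)
            collaborations[new_id]      # register the entrant even if isolated
            incumbents = [s for s in scientists if s != new_id]
            if incumbents:
                # Assumed decision rule: entrants prefer well-connected incumbents,
                # a simple stand-in for empirically programmed agent rules.
                weights = [1 + len(collaborations[s]) for s in incumbents]
                k = min(links_per_entrant, len(incumbents))
                partners = set()
                while len(partners) < k:
                    partners.add(rng.choices(incumbents, weights=weights, k=1)[0])
                for p in partners:
                    collaborations[new_id].add(p)
                    collaborations[p].add(new_id)

    degrees = [len(v) for v in collaborations.values()]
    return {
        "scientists": len(scientists),
        "mean_degree": sum(degrees) / len(degrees),
        "max_degree": max(degrees),
    }

if __name__ == "__main__":
    # Summary statistics from the simulated network could then be compared
    # with empirically observed collaboration structures, as the paper proposes.
    print(run_simulation())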

Session Title: Joint Evaluation of Private Sector Development Projects: Benefits and Challenges
Panel Session 570 to be held in Texas E on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Evaluation Use TIG and the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Ade Freeman, World Bank, afreeman@ifc.org
Discussant(s):
Cheryl Gray, World Bank, cgray@worldbank.org
Abstract: Increasingly, in the global environment, development banks bring their collective expertise and financial resources together to support public or private sector components of development projects. These projects are often complex and are structured to meet the development objectives of the supporting institutions. Joint evaluation of these projects can bring many advantages, especially with respect to knowledge sharing. It can also reduce the overall cost of evaluation, leverage resources, and reduce the evaluation burden on the client. But many challenges must be addressed, including different evaluation frameworks, inconsistent institutional missions, timing issues, incompatible disclosure policies, and even operating styles and personalities. This panel will convey the presenters' good and bad experiences in conducting joint evaluations and, using cases, will suggest how to engage successfully in joint evaluations.
Joint Evaluation of Private Sector Development Projects: Methodological Issues
Chris Olson, European Bank for Reconstruction and Development, olsonc@ebrd.com
EBRD will present methodological issues related to joint evaluation, based on examples from private sector projects in Eastern Europe and Central Asia. Having a second pair of eyes may help minimize potential oversights and thereby strengthen projects. Different institutional formalities and viewpoints also add dimension and perspective to the evaluation, but different approaches to evaluation timing, methods, and focus present challenges in harmonizing the final report. This discussion will be relevant to participants who plan to evaluate projects and programs that involve different development partners. The presenter will answer questions on how to structure and conduct such joint evaluations and on practical methods that can be used to harmonize the final product of joint evaluations.
Joint Evaluation of Private Sector Development Projects: Practical Applications and Lessons
Stephen Pirozzi, World Bank, spirozzi@ifc.org
In the wake of the financial crisis, it is even more likely that development institutions will pool resources to support development projects, especially in the hardest hit areas. Such projects, which may include public sector, private sector, or public–private partnerships, present serious evaluation challenges. Joint evaluation of such projects can be more complex and time consuming, but it is also cost efficient, and the advantages may outweigh the disadvantages. IFC will present some of the most critical aspects of conducting a joint evaluation, from the methodological and human perspectives. In addition to the purely hypothetical, the presenter will walk participants through one or two real examples of joint evaluation, highlighting best practices and addressing such issues as cost sharing, client interactions, and resource utilization. The presenter will discuss the challenges of applying different evaluation methodologies and preparing reports that are relevant to and actionable by development institutions.

Session Title: Using R for Statistical Analysis
Demonstration Session 571 to be held in Texas F on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Kristen Cici, University of Minnesota, denz0018@umn.edu
Abstract: Statistical software packages such as SPSS and SAS have long dominated the evaluation field as the packages of choice for analyzing quantitative data. In recent years R, a syntax-based statistical program, has become increasingly common, yet many evaluators have yet to hear about it. One of the greatest benefits of R is that it is open source and available at no cost. This session will introduce attendees to R, compare R to other statistical software, and provide examples of how evaluators can use R in their evaluation work.

Session Title: The American Evaluation Association's Journal Editors Discuss Publishing in AEA's Journals
Panel Session 572 to be held in CROCKETT A on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the AEA Conference Committee
Chair(s):
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Abstract: This session is aimed at those interested in submitting manuscripts for publication in either of AEA's sponsored journals, the American Journal of Evaluation or New Directions for Evaluation. The journal editors will discuss the scope of each journal, the submission and review processes, and keys for publishing success.
Publishing in the American Journal of Evaluation
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Tom Schwandt is the Editor of the American Journal of Evaluation
Publishing in New Directions for Evaluation
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
Sandra Mathison is the Editor of New Directions for Evaluation.

Session Title: How to Use Evaluation to Achieve Human Resources (HR) System Alignment
Demonstration Session 573 to be held in CROCKETT B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Business and Industry TIG
Presenter(s):
Stephanie Fuentes, Inventivo Design LLC, stephanie@inventivodesign.com
Abstract: Evaluation plays a critical role in ensuring that HR systems are aligned to achieve the maximum benefits for organizations. Too frequently, organizations have mismatched practices regarding talent management, employee development, performance management, and rewards and recognition. In many cases, for-profit organizations are unfamiliar with the breadth and depth of evaluative capabilities they could use because their only experience involves evaluating training courses. By using evaluative inquiry throughout the system, alignment among the four areas can be managed over time to help the organization reach strategic goals. This session presents a model and complementary questions for aligning HR systems using an evaluative inquiry approach. Participants will be introduced step by step to each of the four areas and will learn how to ask questions and present evaluation information to decision-makers, what challenges to expect in the process and when they are likely to occur, and what conditions influence successful use of the tool.

Session Title: Translating Evaluation to Enhance Its Meaning and Use: Examples From Two Indigenous Communities - Urban United States of America and Rural Uganda
Multipaper Session 574 to be held in CROCKETT C on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Indigenous Peoples in Evaluation TIG
Chair(s):
Joan LaFrance,  Mekinak Consulting, lafrancejl@gmail.com
Discussant(s):
Joan LaFrance,  Mekinak Consulting, lafrancejl@gmail.com
Indigenous Approaches to Evaluation in Urban Settings: Theory Into Practice
Presenter(s):
Julie Nielsen, NorthPoint Health and Wellness Center Inc, niels048@umn.edu
Abstract: This paper describes how one indigenous evaluator (Anishinabe, White Earth) co-translated her research-based theory of indigenous approaches to evaluation in urban settings into practice with an urban-based Native nonprofit organization that was transforming itself from a “deficit-based social services” agency into an “assets/strengths-based healing community.” The organization was exercising self-determination in providing culturally-specific programming, but was frustrated by the constraints on extending such self-determination into its evaluations, which were largely prescribed by the organization’s funders. I will describe my study and the steps we took together - in the midst of substantial organizational turmoil not unfamiliar to those who work in nonprofit settings - to use the findings of the study to create and support the organization’s new self-determined model of evaluation. The paper will also explain how the values underlying the indigenous approach intersected with House’s (date) notions of “truth, beauty, and justice,” while addressing issues of evaluation quality.
The Place-Value of Indigenous Knowledge in Promoting the Use of Evaluation Findings: A Case Study on Using Qualitative Measures to Assess the (Potential) Impact of Public Works Programmes on the Lives of the Poor and the Vulnerable in a Post-conflict Environment
Presenter(s):
Simon Kisira, Evaluation Resource Group, simon_skw5@yahoo.co.uk
Abstract: Impact evaluations of public works programmes are usually conducted using quantitative methods. In the absence of counterfactuals and authentic baselines, and given budgetary constraints, this case study presents a planned longitudinal evaluation that relies largely on indigenous methods of measurement and assessment, taking into account before-and-after scenarios among sampled “have” and selected “have-not” households. To the extent possible, the assessments will assign relative weights based on factors such as the height of the insurgency in Northern Uganda, the resettlement of displaced persons, and the introduction of the traditional justice system. Peer reviews among local community “evaluators” and mutual accountability between the different actors on the project will be emphasized. The quality of an evaluation, in this case, will be measured by the extent to which the evaluation findings are deemed useful, acceptable, and ultimately used by local community members, project managers, and policy makers for learning and decision making.

Session Title: The United States Government Accountability Office's (GAO) New Yellow Book: What's in It for Evaluators?
Think Tank Session 575 to be held in CROCKETT D on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the AEA Conference Committee
Presenter(s):
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
Abstract: Mention the “GAO’s Yellow Book” or the “Government Auditing Standards of the U.S. Government Accountability Office” to 10 evaluators and you will likely receive 10 blank stares. Most evaluators don’t realize this document even exists, and those who do believe it relates only to auditing and accounting, certainly not to evaluation. But they would be wrong. Two of the document’s eight chapters and 50 of its 166 pages (30%) are devoted to “performance auditing”, which is extremely similar to program evaluation. In addition, this document is being revised this year, and the new Yellow Book may contain standards of special interest to evaluators. In this highly interactive Think Tank, two senior AEA members who serve on the US Comptroller General’s advisory council to the Yellow Book will introduce the new standards and lead a discussion of how they are applicable – and useful – to evaluators working both inside and outside government.

Session Title: Using Qualitative Methods in Evaluations With Limited Resources
Multipaper Session 576 to be held in SEGUIN B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Jennifer Jewiss,  University of Vermont, jennifer.jewiss@uvm.edu
Discussant(s):
Jennifer Jewiss,  University of Vermont, jennifer.jewiss@uvm.edu
Surveys: A Tool for Building a Case Study?
Presenter(s):
Natalya Gnedko, Chicago Public Schools, ngnedko@cps.k12.il.us
Denise Roseland, University of Minnesota, rose0613@umn.edu
Abstract: This paper presents the findings and experiences of an internal evaluation team that used a survey to develop a collective case study. The choice of a survey as a tool for building a case study came about when program planners asked for “stories” from in-school instructional coaches about their successful and unsuccessful experiences of working with teachers, but were unable to dedicate the resources required for interviews. In response to the program planners’ request, the evaluation team developed a survey consisting mostly of open-ended questions. The questions were designed to guide coaches through recounting their experiences in a way that would help them create a “story.” To analyze the responses, the evaluation team used a framework developed by external evaluators, thus beginning efforts to validate the framework. The survey and resulting case study were part of a larger evaluation of the district’s in-school instructional coaching program.
Are Surveys Enough? A Case Study in Employing User Tests and Focus Groups to Improve Website Evaluation
Presenter(s):
Michael Porter, College Center for Library Automation, mporter@cclaflorida.org
Dawn Aguero, College Center for Library Automation, daguero@cclaflorida.org
Aimee Reist, College Center for Library Automation, areist@cclaflorida.org
Abstract: The web-based Library Information Network for Community Colleges (LINCCWeb) is the library-resource search tool used by nearly 1,000,000 students, faculty, and staff at 80 libraries of Florida’s 28 community and state colleges. This resource is provided by the College Center for Library Automation (CCLA) in Tallahassee. Historically, CCLA has had trouble getting in-depth feedback from students and faculty. Annual data from these users has been gathered via surveys. These have yielded useful information, but not the in-depth, qualitative information desired. Targeting these users, CCLA conducted focus groups and user tests at seven campuses across five colleges throughout Florida. These evaluations were extremely valuable and gave CCLA a new, in-depth understanding of user needs. This session will provide lessons learned on marketing, recruiting, providing incentives, logistics of conducting sessions, analysis, and reporting of focus group and user test results. This session will be especially valuable for others in low-budget, non-profit evaluation.

Session Title: Peace Corps’ Volunteer Reporting Tool: Increasing the Capacity for Evidence-based Decision-Making at Multiple Levels of the Peace Corps
Demonstration Session 577 to be held in REPUBLIC A on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Eleanor Shirley, Peace Corps, eshirley@peacecorps.gov
Abstract: The Volunteer Reporting Tool (VRT) represents a significant step towards Peace Corps’ goal of rigorously demonstrating the results of our Volunteers’ and Partners’ diverse work worldwide. The Peace Corps’ size and unique structure have made standardized monitoring and evaluation a real challenge for the agency over the years. When the VRT was rolled out to over 65 Peace Corps posts worldwide in 2008-2009, it was the first time in the agency that every Post and every Volunteer used a standardized, yet customizable, data collection and data management system. This demonstration will show how the VRT works at each Peace Corps post, and will reveal successes, challenges and lessons learned in designing this system to align with Peace Corps work, and in training field staff and Volunteers to effectively use the system.

Session Title: Online Visual System for Strategic Planning and Performance Monitoring: iProgress v Check
Demonstration Session 578 to be held in REPUBLIC B on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Health Evaluation TIG
Presenter(s):
Jianglan White, Georgia Department of Community Health, jzwhite@dhr.state.ga.us
Alex Cowell, Georgia Department of Community Health, ajcowell@dhr.state.ga.us
Abstract: This paper introduces iProgress v Check, an online visual system for strategic planning and performance monitoring. iProgress v Check is an online data system designed by the Georgia Division of Public Health to develop program strategic plans and to monitor and track community-level health promotion and disease prevention programs funded by the organization. Based on social-ecological approaches and theory-of-change logic models, the system underpins a program strategic plan, with identified goals, objectives, strategies, and performance indicators, and tracks, monitors, and collates program activities and progress toward program objectives consistently across funded grantees. It promotes strategic planning and implementation. It actively guides performance-driven decision-making and resource allocation. It helps to identify best-practice strategies in the local community and to identify gaps in resources, policy, and technical assistance. It also strengthens collaboration and communication between state and local staff.
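As a rough illustration of how a theory-of-change plan hierarchy of the kind described above can be represented as data, the sketch below is purely hypothetical: the class names, fields, and example values are assumptions for illustration and do not reflect the actual iProgress v Check schema.

# Hypothetical sketch of a logic-model-style plan hierarchy with per-grantee
# progress tracking; not the actual iProgress v Check data model.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Indicator:
    name: str
    target: float
    reported: Dict[str, float] = field(default_factory=dict)  # grantee -> value

    def percent_of_target(self, grantee: str) -> float:
        return 100.0 * self.reported.get(grantee, 0.0) / self.target

@dataclass
class Strategy:
    description: str
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class Objective:
    description: str
    strategies: List[Strategy] = field(default_factory=list)

@dataclass
class Goal:
    description: str
    objectives: List[Objective] = field(default_factory=list)

# Example: one goal tracked consistently across two hypothetical grantees.
indicator = Indicator(name="Adults screened for hypertension", target=500)
indicator.reported.update({"County A": 420, "County B": 275})
plan = Goal(
    description="Reduce cardiovascular disease risk",
    objectives=[Objective(
        description="Increase community-based screening",
        strategies=[Strategy(description="Partner with local clinics",
                             indicators=[indicator])],
    )],
)

for grantee in ("County A", "County B"):
    print(grantee, f"{indicator.percent_of_target(grantee):.0f}% of target")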

Session Title: The Basis for Good Judgment in Evaluation
Multipaper Session 579 to be held in REPUBLIC C on Friday, Nov 12, 10:55 AM to 11:40 AM
Sponsored by the Theories of Evaluation TIG and the Research on Evaluation TIG
Chair(s):
Bianca Montrosse,  Western Carolina University, bianca.montrosse@gmail.com
Ensuring Quality in Evaluation by Generating Credible Judgment
Presenter(s):
Marthe Hurteau, University of Quebec at Montreal, hurteau.marthe@uqam.ca
Sylvain Houle, University of Quebec at Montreal, houle.sylvain@uqam.ca
Pascal NDinga, University of Quebec at Montreal, ndinga.pascal@uqam.ca
Michael Schleifer, University of Quebec at Montreal, schleifer.michael@uqam.ca
Véronique Lemieux, University of Quebec at Montreal, veronique.lemieux@gmail.com
Marie-Pier Marchand, University of Quebec at Montreal, mariepiermarchand@hotmail.com
Abstract: “Evaluation is fundamentally about judging the value of something” (Rog). While credible evidence is a required element of quality and is necessary to generate a credible judgment, it is still not enough (Schwandt). In an earlier study, Hurteau and Boissiroy (2009) established that “argumentation” (harmonizing information and developing rigorous reasoning in order to produce statements and judgments) is an essential element, but one poorly developed in the literature. The present research explored this concept by interviewing 25 professionals from various fields who generate judgments in their practice. They were asked to describe and compare two situations: a successful one and an unsuccessful one. The data analysis will allow a model to emerge for generating relevant argumentation to support a judgment. Focus groups with experienced program evaluators are establishing the transferability of this model to the specific context of program evaluation. The results will be presented.
The Goal Standard, and Knowing Enough for Quality Evaluation
Presenter(s):
James Griffith, Claremont Graduate University, james.griffith@cgu.edu
Abstract: This paper argues for an epistemological stance in evaluation that connects to current movements in contemporary philosophy. Contemporary philosophical discussions of such ancient questions as ‘When can we be certain?’, ‘When is knowledge secure?’, and ‘When do we have enough evidence?’ have obvious and meaningful application in contemporary evaluation practice. Gettier’s (1963) refutation of analyses of knowledge as justified true belief thrust philosophers into decades of attempts to rethink justification or to discover some additional element that, added to justified true belief, would yield knowledge. Some contemporary philosophers have turned in a new direction referred to variously as interest-relative, means-end, or practical interest epistemology. While this view is certainly not universally accepted in philosophy, this turn toward a practical orientation to knowledge in what is arguably the pure research discipline is informative for evaluation, where theorists have taken pains to distinguish evaluation from pure research, citing evaluation’s action orientation.
