2011

Session Title: Valuing our Methodological Diversity
Panel Session 951 to be held in Pacific A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Presidential Strand
Chair(s):
Jennifer C Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Discussant(s):
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Abstract: Evaluation has many countenances in society, and this diversity of evaluation purpose and stance has served us well for several decades. This diversity has supported the spread of evaluation into many sectors of society, swelled the ranks of evaluation associations around the globe, and attracted many different kinds of practitioners and scholars to the evaluation community. But we are once again arguing about method. The argument this time is infused with multiple strands from the political arena, notably neoliberalism's calls for accountability, for results, and for credible evidence upon which to base policy decisions. But we evaluators have also taken sides in this debate and have re-created rifts once healed. This session is envisioned to help begin to heal those rifts. Evaluators of all stripes are needed in today's complex and fast-paced world, and evaluators of all stripes are needed in our ongoing conversations and engagements, one with another.
Balancing Rigor, Relevance, and Reason: Fitting Methods to the Context
Debra Rog, Westat, debrarog@westat.com
Each evaluation situation is different. Each has its own set of questions and a distinctive multi-faceted context. Over the years, I have tried to be more cognizant of evaluation's context and to bring a mix of methods to addressing the questions. However, my training as an experimental social psychologist with special emphasis on ruling out alternative explanations has an undeniable influence on my work. In this presentation, I will address how I strive to conduct evaluations that are defensible (rigor), feasible within contextual limits and opportunities (reason), and attentive to multiple stakeholder concerns (relevance). I will further address how my evaluation practice often provides an opportunity - beyond the particular case - to learn more about a domain of intervention, a broader social problem, and/or a vulnerable population. I will describe how I have tried to maximize the evaluation to enhance this understanding and to provide findings of more general consequence.
Methods for Kaupapa Māori Evaluation
Fiona Cram, Katoa Ltd, fionac@katoa.net.nz
Māori (Indigenous) evaluators, including myself, often live in two worlds: one foot in communities where local Māori services strive to reduce disparities and facilitate wellness; the other foot in agencies that fund those services. Both worlds demand evaluation methods that credibly assess and understand success. Use of the 'Most Significant Change' method allows people from both worlds to tell their stories of change as well as contemplate what they value in terms of success. This creates opportunities for dialogue and shared understandings, and de-emphasizes my evaluator role as a translator between worlds. This method is compatible with Kaupapa Māori (by Māori, with Māori) evaluation as it privileges Māori cultural philosophies, including the importance of relationships, and takes for granted Māori self-determination. Through the use of this method evaluation seizes an opportunity to contribute to the creation of a more inclusive society that honors and protects the rights of Māori.
Valuing Different Ways of Knowing in Evaluating Programmes, Policy and Practice
Helen Simons, University of Southampton, h.simons@soton.ac.uk
My practice of evaluation has always been concerned with exploring the democratic function of evaluation to promote understanding of social and educational programmes. To this end I have used approaches, such as case study and narrative, which engage people in the generation of evaluation knowledge and have reported my work in accessible ways to those beyond the case. The aspiration has been to create opportunities for dialogue and development of policies and practice. Increasingly, I feel the need to broaden the countenance of my evaluation practice both to acknowledge evaluation as intrinsically a social, political practice and to extend and honour different ways of knowing. I will argue that using art forms in the evaluation process both broadens our perspective of how to 'value' programmes and policies and extends the democratic function of evaluation to create useful knowledge in fair and just ways.

Session Title: Valuing Diversity: How to Construct Consensus for Monitoring and Evaluation (M&E) Policy with Multiple Stakeholders - Experiences With the Global Environment Facility (GEF)
Panel Session 952 to be held in Pacific B on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Evaluation Policy TIG
Chair(s):
Baljit Wadhwa, Global Environment Facility Evaluation Office, bwadhwa@thegef.org
Abstract: The Global Environment Facility (GEF), a partnership of international agencies, has the mandate to finance improvements in the global environment. Agencies work in partnership and with local and international governments and organizations. Reforms at the GEF led its Council to request that the Evaluation Office and Secretariat work with the GEF partnership to revise the GEF Monitoring & Evaluation policy for its next programming period. The revised policy was presented to and adopted by the Council in November 2010. This session will provide a template for integrating views from diverse stakeholders based on GEF's experiences. The process of developing guidelines for results-based tracking, quality assurance, compliance and inclusive engagement has led to a policy aiming to meet the needs of management, implementing agencies, evaluators and beneficiaries. Perspectives will be shared with the audience as lessons for navigating organizations' unique systems to improve measurement and evaluation throughout the lifecycle of programs and projects.
Revising GEF's M&E Policy: Arriving at Minimum Standards in a Partnership With Diverse Sets of M&E Requirements
Carlo Carugi, Global Environment Facility Evaluation Office, ccarugi@thegef.org
The GEF evaluation policy takes a "minimum requirements" approach, but assumes that the full agency M&E requirements will be applied to GEF operations. This presentation will introduce the new policy, highlighting revisions since the original 2006 version. Carlo will then discuss the Office's experience of integrating diverse perspectives from across the partnership in the development of the policy. Carlo joined the GEF Evaluation Office in July 2009, after having worked for two years in the Evaluation Service of FAO in Rome. He is an Italian national with more than 21 years of experience in environment and development. Carlo holds an MSc in Agricultural Science from the University of Bologna in Italy and in 2000 he completed an MSc in Environment and Development at Imperial College at Wye, University of London, UK. He also trained in Environmental Impact Assessment at the University of Milano in Italy.
Delicate Balances in Nature and in Evaluation: The Coming Together of Rigorous Results Tracking Through Diverse and Flexible Mechanisms
Dima Reda, Global Environment Facility Secretariat, dreda@thegef.org
GEF projects are implemented through several multilateral agencies. The GEF also has six focal areas, each with its own unique objectives and methodologies for measuring outcomes and impacts. Dima will speak to the challenge of addressing the unique needs of this diverse array of stakeholders and specific focal areas or recipient countries during the design of the policy. Dima Reda coordinates Results Based Management for the GEF. She works closely with the focal areas (Biodiversity, Climate Change, Land Degradation, International Waters, and Chemicals) to report portfolio-level results from over 600 GEF projects currently under implementation. Prior to joining the GEF, Dima worked for the United Nations, UNDP & UNEP, to support the sustainable management and restoration of the Iraqi Marshlands as well as on renewable energy and energy efficiency issues. Dima holds a Master's in Environmental Science and an MBA from Yale University and a BA in biology from Brown University.
Expanding Evaluation Culture Through Development of the United Nations Development Program GEF Project Evaluation Guidance
Alan Fox, United Nations Development Program, alan.fox@undp.org
UNDP recently developed guidance for GEF project evaluation. Alan will speak to his experience of working with each of the UNDP regional service centers, conferring also with country offices, to develop the guidance and now to train staff on it to improve evaluation quality. UNDP has a decentralized management style and views the policy development as an opportunity to expand an 'evaluation culture' in many of the countries where it operates. As GEF-implemented projects carry the most rigorous project evaluation requirements within UNDP, Alan is in the interesting position of supporting both UNDP and GEF in strengthening evaluation capacity in many countries. Alan Fox serves as Evaluation Adviser in the United Nations Development Programme Evaluation Office. He leads country studies and thematic evaluations, specifically overseeing evaluation of programmes in the environment/energy sectors. Alan joined UNDP in 2009 after previous positions in the US government and as a consultant.
Valuing Perspectives From the Land: Experiences From Expanding the Role of GEF Focal Points in the Revised M&E Policy
Osvaldo Feinstein, Independent Consultant, ofeinstein@yahoo.com
Osvaldo Feinstein is a professor in the Master in Evaluation program at Complutense University, Madrid, and a senior consultant with the World Bank Independent Evaluation Group, UNDP's Evaluation Office and the Evaluation Office of the International Fund for Agricultural Development (IFAD). He has been an adviser to the Spanish Evaluation Agency and a member of the Panel on Monitoring and Evaluation of the CGIAR Science Council. He was an evaluation manager and adviser at the World Bank's Operations Evaluation Department, and a senior evaluator at the UN International Fund for Agricultural Development. Osvaldo has worldwide experience in evaluation and has published books and articles on evaluation and development. He has been a consultant with various international development agencies and a professor and lecturer at several universities and at the European Institute of Public Administration (EIPA)/Barcelona.

Session Title: Value-added Estimates as Part of Teacher Evaluations: From Data Management to Results Interpretation
Demonstration Session 953 to be held in Pacific C on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Quantitative Methods: Theory and Design TIG and the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Mei-kuang Chen, University of Arizona, kuang@email.arizona.edu
Nicole Kersting, University of Arizona, nickik@email.arizona.edu
Abstract: Value-added estimates have emerged as part of teacher evaluation and have even become an important component of performance-based pay. The purpose of this session is to demonstrate the process of carrying out value-added analysis, from data management in SPSS, to multi-level modeling in HLM, to the interpretation of value-added estimates for teachers. The technical part of the value-added estimation should benefit program evaluators who are interested in data analysis. Importantly, the interpretation of value-added estimates for teachers, and the implications of attempts to identify highly effective and ineffective teachers, will be discussed. Four cohorts of mathematics scores from standardized tests of 4th and 5th graders in one of the nation's largest school districts will be used for this demonstration. Although this workshop draws its background and data from value-added educational evaluation, the principles and methods are applicable to other fields in which performance estimation is important.
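To give a concrete flavor of the kind of analysis the session walks through, here is a minimal sketch of a value-added model in Python using statsmodels, standing in for the SPSS/HLM workflow the presenters describe. The file name, column names (post_score, prior_score, teacher_id), and the random-intercept specification are illustrative assumptions, not the presenters' actual data or model.

```python
# Minimal value-added sketch: students nested within teachers.
# A teacher's "value added" is the teacher-level random intercept
# after adjusting for students' prior achievement.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data, one row per student:
# columns post_score, prior_score, teacher_id.
df = pd.read_csv("scores.csv")

# Two-level random-intercept model, analogous to a basic HLM setup:
# post_score ~ prior_score + (1 | teacher_id)
model = smf.mixedlm("post_score ~ prior_score", data=df, groups=df["teacher_id"])
result = model.fit()
print(result.summary())

# The empirical Bayes estimates of the teacher random effects serve as
# value-added estimates; interpret rankings cautiously, since intervals
# around individual teachers are typically wide.
value_added = {tid: re["Group"] for tid, re in result.random_effects.items()}
print(sorted(value_added.items(), key=lambda kv: kv[1], reverse=True)[:10])
```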

Session Title: Evaluating Networks: The Evolving Practices, Their Promises and Perils
Panel Session 954 to be held in Pacific D on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Kim Ammann Howard, BTW Informing Change, kahoward@btw.informingchange.com
Abstract: In this session we will present three different network evaluation scenarios. We will draw on evaluation work we have done with networks to build social capital in communities, to transform fields of practice, and to achieve policy and population-level results. Because networks are complex, dynamic, and self-organizing systems, they present unique challenges for evaluators. In each scenario we will describe the evaluation questions that were being asked; the stakeholders involved and what they wanted to learn; and the variety of methods that were used to map connections, capture network effects, and assess network health and impact. We will also consider where value is created in a network learning process, how sponsors and facilitators can interfere with or create the conditions for learning in networks, and the role of the evaluator in the network learning process.
Evaluating the Strategies and Effects of Investing in a Network of Non-profit Leaders in Boston
Claire Reinelt, Leadership Learning Community, claire@leadershiplearning.org
Urban communities, like Boston, have hundreds of nonprofit organizations that work in different sectors and neighborhoods often with little communication, coordination or collaboration, and sometimes in competition with one another for recognition and resources. In 2005, the Barr Foundation created a fellowship program to honor the City's most talented executive directors and to create the conditions for them to form personal relationships, connect their resources, and work together to improve community well-being. The formation of the network is a process that has unfolded over time and taken shape based on the actions and interactions of its members. This presentation will explore the role of the evaluator in an emergent network process, describe how network mapping and storytelling have been used to document network effects and impacts, and share insights about how to effectively evaluate networks.
When Networks Don't Materialize: Challenges and Opportunities in Funder-Driven Network Initiatives
Melanie Moore, See Change, melanie@seechangeevaluation.com
Many network initiatives are led by foundations seeking to convene key actors in a particular sector, and set the stage for collective action toward a common goal. Rather than prescribe what the group should do, or how it should be done, or, in some cases, even what the common goal is, many funding organizations that convene groups of grantees and other stakeholders prefer to allow momentum toward collective action to emerge organically. The idea of an instrumental "network" is highly desirable, but there are few clear roadmaps for funders for how to turn a series of convenings into a sustainable network. An analysis of three different funder-driven network initiatives in which functioning, broad-based networks did not form according to the funders' expectations will offer lessons for network architects going forward.
Evaluating Approaches and Impacts of Networks to Promote Community Health
Kim Ammann Howard, BTW Informing Change, kahoward@btw.informingchange.com
Recognizing that traditional models of health care alone cannot create and sustain healthy communities, The California Endowment and Tides launched a four-year, $10 million effort in 2008 to build on community clinics' strengths and encourage them to form multi-sector networks to more effectively promote community health. In this session, we will describe our experience evaluating the variety of approaches undertaken by the program's 26 grantees to address a wide range of community health issues throughout California, and the resulting impacts. This will include a discussion of: (1) the different tools, processes and learning opportunities that benefited projects and, at the same time, provided important data and learning for funders and evaluators, and (2) opportunities and challenges of evaluating evolving networks that have different foci, project strategies, partners, geographic areas and starting points, and the implications for evaluation design and practices.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Evaluating the Civic Engagement of K-12 Students: A Framework for Representing Student 'Voice'
Roundtable Presentation 955 to be held in Conference Room 1 on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Michael Berson, University of South Florida, berson@usf.edu
Corina Owens, University of South Florida, cmowens@mail.usf.edu
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Abstract: Voice can be defined as describing the perspectives and actions of a group of people. Often student voices are not taken into consideration when developing educational programs; however, these programs often directly impact students' education and future. This evaluation incorporated a collaborative evaluation approach to access student voice during the development and implementation of a civic engagement education program in a southeastern school district in the United States. Civic engagement is a broad construct, as there are numerous ways to be involved and influence the world, and it is imperative to look beyond the most common forms of civic engagement. The purpose of this paper is to discuss the process evaluators used to translate the 19 core indicators of civic engagement (CIRCLE, 2006) into an instrument to gather evidence on student perspectives to inform the evaluation.
Roundtable Rotation II: Empowering Students to Self-evaluate Their Learning
Roundtable Presentation 955 to be held in Conference Room 1 on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Keith Proctor, Brigham Young University, keith.proctor@byu.edu
Abstract: This presentation will explore the merits of David Fetterman's empowerment evaluation for helping college students evaluate their learning experiences. Empowerment evaluation encourages the participants of a program being evaluated to take part in the evaluation themselves while being mentored by an evaluator. Multiple case studies are currently being used to explore how students evaluate their learning failures. Data from these studies suggest that students are not evaluating their own experiences, but are relying on instructors to evaluate their performance. A central question that arises from these data is whether students could be empowered to evaluate their own learning, including but not relying solely on instructor feedback. A literature search for this particular use of empowerment evaluation identified no studies regarding how to empower students to evaluate their learning experiences. This session will invite participants to explore ideas through a roundtable discussion about the value of adapting empowerment evaluation to enhance student self-evaluation.

Roundtable: Valuing Stakeholders and Integrating Multi-method Findings in Public Health Evaluations
Roundtable Presentation 956 to be held in Conference Room 12 on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Korinne Chiu, University of North Carolina, Greensboro, k_chiu@uncg.edu
Kelly Graves, University of North Carolina, Greensboro, kngrave3@uncg.edu
Abstract: This roundtable will provide a forum in which to discuss best practices in integrating multiple stakeholder perceptions of community need and the benefits and challenges of the use of a multi-method design. To frame this discussion and provide a concrete example, a recent mixed-methods community needs assessment was conducted in order to identify the current resources and barriers to sexual and reproductive health services for young women. This assessment included various stakeholders such as local health department staff, service providers, young women in the community who were pregnant/parenting and who were not currently pregnant/parenting, local college health centers, and local school district personnel. Challenges in analyzing mixed-method findings as well as strategies in reporting convergent and divergent findings across diverse stakeholders and methods will be discussed.

Session Title: Theories of Change and Participatory Methodologies in the Evaluation of Peacebuilding Projects
Panel Session 957 to be held in Conference Room 13 on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Nicholas Oatley, Search for Common Ground, noatley@sfcg.org
Discussant(s):
Nicholas Oatley, Search For Common Ground, noatley@sfcg.org
Abstract: Peacebuilding and conflict transformation is a relatively new field. As such, there is little by way of received wisdom from evidence-based practice about what works. The development of theories of change and the use of appropriate methodologies that capture the results of work in this area are paramount concerns for a discipline still in its infancy. This panel brings together contributions that reflect the experience of developing and applying theories of change in varying contexts and explores the application of different methodologies for measuring the results of peacebuilding work around the world. The importance of values in influencing the development of theories of change, and the plurality of values and interests reflected in different evaluation practices, will be highlighted in the presentations and discussions.
Theories of Change and the Effectiveness of Partners for Democratic Change International's Global Conflict Resolution Network
Julia Roig, Partners for Democratic Change, jroig@partnersglobal.org
This paper will discuss the challenges faced by Partners for Democratic Change's leadership team in designing and implementing an evaluation process using participatory tools and techniques with the Directors of the 20 autonomous local Partners Centers for Change and Conflict Management (called Partners for Democratic Change International) throughout the world. In particular, the discussion will focus on the institutional challenges and strategies for developing common Theories of Change within a global network of conflict resolution professionals who are working in very different environments and implementing diverse programs with a wide range of donors. Differing stakeholder perspectives will be considered, including the differing values that come into play in Partners' work, founded on the principles of a) sustainable impact through capacity-building of local institutions, b) a participatory and multistakeholder approach to addressing conflict management challenges, and c) social entrepreneurialism. http://www.partnersglobal.org/
A Well-Told Story: Capacity-Building, Theories of Change, and Peacebuilding Practice
Andrew Blum, United States Institute for Peace, ablum@usip.org
This paper addresses the central question of 'Why do organisations within the peacebuilding field find it difficult to articulate a coherent theory of change that underpins their work?' Drawing on evidence accumulated over many years of interactions with peacebuilding organizations in developing countries, the paper will assess competing arguments for why this is the case. In particular, a number of these arguments claim that theories of change are a "foreign" or a "western" concept, and that the linear logic of this approach is antithetical to the way local organizations conceptualize their work. Using proposals submitted to the United States Institute of Peace (USIP), the paper assesses these arguments against competing arguments in the literature. Since both "western" and "non-western" organizations submit proposals to USIP, the proposals provide an opportunity to assess if and how the theories of change approach is problematic for non-western organizations.
Developmental Evaluation in the Georgian-South Ossetian Point of View Process
Susan Allen Nan, George Mason University, snan@gmu.edu
This paper analyzes developmental evaluation in the Georgian-South Ossetian Point of View Process (2008-11). The paper focuses not on the conflict resolution process itself, but on the developmental evaluation that has served as feedback to the facilitation team and participants, input to the process planning, and also as a mode of conflict resolution itself. The process of engaging stakeholders, including parties to the Georgian-South Ossetian conflict, in developmental evaluation has provided a forum for clarifying their goals and theories of change, which have changed over time. This goal clarification serves a conflict resolution function as parties from opposite sides of the war find goals in common as well as goals that diverge. The evaluation results have reshaped the program substantially from the initial classic problem solving workshop design to a new model of catalytic workshops. The evaluation's utility is thus two-pronged: resolving conflict and improving the program design and implementation.
Evaluating the Influence of Non-governmental Organizations (NGOs) on International Policy Development: A Comparative Study of Methodological Approaches
Jerome Helfft, International Center for Transitional Justice, jhelfft@ictj.org
This paper presents a set of methodological approaches for evaluating the influence of NGOs on the development of international policies. It discusses the challenges faced in assessing the impact of efforts to influence policymakers, and describes and reviews several approaches, including case studies based on the sociology-of-organization approach (Crozier and Friedberg's work on actors' interests, games, and systems) and innovative methodologies for evaluating advocacy work at the institutional level. The paper examines the evaluation process of the International Center for Transitional Justice (ICTJ) in fostering and shaping the development of the Report of the Secretary-General, "The rule of law and transitional justice in conflict and post-conflict societies" (S/2004/616), issued in August 2004. This evaluation is currently being undertaken by a group of researchers from New York University. The relevance of the theories of change related to reforming the elite in order to achieve peacebuilding or Human Rights goals will be discussed.
Theories of Change, Youth and Conflict
Rebecca J Wolfe, Mercy Corps Conflict Management Group, rwolfe@nyc.mercycorps.org
The reasons youth participate in violence are multi-faceted. In many places around the world, youth are frustrated by a lack of economic opportunities, avenues for political participation, and meaningful community engagement. To design more effective programs to prevent young people from joining violent movements, we identified 11 "core" theories of change in three main arenas: Economic, Political and Community Engagement. We also designed related indicators to test these theories. In this paper, we will describe our efforts in elucidating these theories and indicators and our efforts to test the theories in a number of our youth programs, including programs in Yemen, Tajikistan, and Kenya. Based on our initial experience testing peacebuilding theories of change in the field, we will discuss the challenges and lessons learned in evaluating theories of change in cross-sectoral programs.

Session Title: Strengthening Use of Evaluation: Issues and Perspectives
Multipaper Session 958 to be held in Conference Room 14 on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Capacity Building: A Phenomenological Study of African Women's Perceptions and Experiences in Leadership Development Training
Presenter(s):
Jane Wakahiu, Marywood University, jwakahiu@m.marywood.edu
Abstract: The promotion of leadership skills among women leaders in developing nations is essential in order for change and progress to occur in these countries. This phenomenological evaluative study examines the perceptions of nine women in Kenya, Uganda and Ghana who were engaged in a three-year, Hilton Foundation-supported Sisters Leadership Development Initiative (SLDI) program. Using impact assessment strategies to assess the transformation in these women and their ministries, this study describes how, by subsequently practicing acquired leadership skills, these women brought about change in their educational, healthcare, social and pastoral ministries. Data were collected using in-depth interviews and observations of the changes in the participants and their ministries. The findings indicate that leadership development training enhanced their capacity for effective service delivery and allowed for the expansion of their ministries, thus improving life for their people. The study illustrates how the design of a leadership program can impart relevant leadership skills and support the creation of innovative practices for effective organizational management in developing nations. The study provides insight pertinent to increasing quality leadership skills among leaders responding to complex global organizational challenges.
Process Evaluation Methodologies for Estimating the Chances of Program Failure
Presenter(s):
Phillip Decker, University of Houston, Clear Lake, decker@uhcl.edu
Roger Durand, Durand Research and Marketing Associates, LLC, durand4321@gmail.com
Abstract: In this paper/presentation we seek to increase the likelihood of successfully implementing programs by proposing simple, cost-effective process evaluation methodologies for estimating the chances of program failure. The process evaluation methodologies we propose, methodologies that include 'voice of the crowd' and 'future search,' have been adopted widely in decision-making situations ranging from strategic planning to the mutual learning and acquisition of values. But despite their affinity with empowerment, collaborative, and participatory evaluation, these methods have not been widely utilized in either formative or process assessments. We demonstrate the application and value of these methodologies through a case study of a hospital's ambulatory care program.
A Participatory Approach to Improving Outcome Monitoring: The Development of a Multi-site Outcome Monitoring system for HIV-prevention Interventions
Presenter(s):
Jason Forney, Michigan State University, forneyja@msu.edu
Giannina Fehler-Cabral, Michigan State University, gcabral79@gmail.com
Abstract: Organizations implementing HIV-prevention interventions are often required by funders to collect outcome monitoring (OM) data for accountability purposes. However, OM can feel like an imposed practice where grantees with limited evaluation capacity have minimal ownership of the evaluation. This paper discusses how an OM system used by 9 grantee organizations, implementing 7 HIV-prevention programs, was improved using a participatory collaborative approach (Patton, 2002). To increase the validity and utility of OM, we worked collaboratively with grantees and program monitors to create new survey tools reflecting the evaluation goals unique to each organization, conducted capacity building trainings around data collection and data management, created a web-based data entry system allowing grantees to enter their own data and generate real-time reports, and facilitated meetings where organizations shared results and identified ways to improve their interventions. Lessons learned and qualitative findings describing the effectiveness of participatory strategies to improve OM will be presented.
Intervention Fidelity vs Creative Adaptation: Evaluating a Participatory Method for Market Development
Presenter(s):
Graham Thiele, International Potato Center, g.thiele@cgiar.org
Douglas Horton, Independent Consultant, d.horton@mac.com
Emma Rotondo, PREVAL, emma.rotondo@preval.org
Rodrigo Paz, Institute for Social and Economic Studies, rodrigopaz@supernet.com
Guy Hareau, International Potato Center, g.hareau@cgiar.org
Abstract: The Andean Change Alliance worked from 2006-2010 to test and evaluate a range of participatory R&D methods, including the 'Participatory Market Chain Approach' (PMCA), in four countries in the Andes. The PMCA aspires to improve small farmer livelihoods by involving them in innovation processes with other market chain actors (Horton et al., 2011). The 'Participatory Impact Pathways Approach' (Douthwaite et al., 2007) was used to frame the evaluation. Applications were designed as replications of a single intervention, using a protocol so that generalizations could be made across cases. Trade-offs in evaluation design were needed to gain local collaboration and allow space for actors to adapt the protocol to varying local conditions and priorities. This paper evaluates the theory of change underlying PMCA, examines the fidelity of the PMCA implementation process with respect to the protocol, assesses the contribution to pro-poor innovation and draws lessons for future use and evaluation of participatory methods.

Session Title: Value and Valuing: Importance to Program Success and Its Relationship to Cost
Multipaper Session 959 to be held in Avila A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Nadini Persaud, University of the West Indies, npersaud07@yahoo.com
The Value of Local Production in Revitalizing Communities
Presenter(s):
Ron Visscher, Western Michigan University, ron.visscher@aquinas.edu
Abstract: This economic evaluation of the production and consumption activity of every US county and city over a five-year period provides evidence of the significance of local production activities in contributing to local economic vitality, even in increasingly service-oriented economies. More evidence is needed to support and guide public policy designed to stimulate the business and economic base of local economies. By showing the extent to which people live and consume in geographic locations where they can be productive, the study demonstrates the value of local production and offers helpful guidance for effective public policy-making for community revitalization.
The Importance of Values and Valuing in Project Financing and Project Costing
Presenter(s):
Nadini Persaud, University of the West Indies, npersaud07@yahoo.com
Paul Morgan, Caribbean Development Bank, morganp@caribank.org
Abstract: The design and implementation of programs in developing countries require, inter alia, a clear understanding of the special needs and cultural values of the targeted beneficiaries, and of local laws and the lending policies of local financial institutions, for the programs to be successful. The paper will discuss a number of issues relevant to values and valuing in project financing and project costing and will share lessons learned from a rural development project implemented in a developing country, Belize, which was co-financed by the Caribbean Development Bank. At the conclusion of the presentation, an interactive discussion will take place so that the group can share personal experiences with development projects. This presentation is important to the evaluation profession because the identification of relevant values is central to project success and ultimately to the ex-post evaluation which will determine the merit, worth, or significance of the project.
Development of Logic Models as a Precursor to Conducting a Cost-Benefit Analysis: Successes and Lessons Learned
Presenter(s):
Melissa Chapman Haynes, Professional Data Analysts Inc, melissa.chapman.haynes@gmail.com
Lija Greenseid, Professional Data Analysts Inc, lija@pdastats.com
Julie Rainey, Professional Data Analysts Inc, julie@pdastats.com
Abstract: Logic models are one key tool evaluators have to engage stakeholders and project leaders in determining values, and how those values relate to project activities and intended results. We implemented a logic model process in collaboration with project leaders to identify key processes and results of a community-based health organization, with a specific focus on tobacco cessation-related activities. The purpose of this logic modeling process was two-fold. First, it helped the project leaders to communicate their work via a visual display. Second, it allowed our evaluation team to understand the broader context and scope of the project's tobacco-cessation work, including in-person cessation classes, referrals to the tobacco Quitline, community partnerships, and training of health professionals related to tobacco cessation. Key factors identified during the logic model process were incorporated into a cost-benefit analysis. We will discuss how logic models were used as one factor in designing a cost-benefit analysis.

Session Title: Using Data Dashboards in Formative Evaluations
Panel Session 960 to be held in Avila B on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Data Visualization and Reporting TIG
Chair(s):
Meridith Polin, Public/Private Ventures, mpolin@ppv.org
Abstract: An ongoing challenge for evaluators is finding ways to be responsive to stakeholders making day-to-day decisions about program operations. Data dashboards are one tool available to establish a "utilization-focused" approach to working with program stakeholders, while simultaneously providing evaluators an opportunity to check program data quality and review intermediate outcomes. Successful data dashboard use, however, depends upon both good design principles and feedback processes that work within the context of the programs being implemented. Participants will leave this session with 1) a solid grounding in data dashboard design principles, including evolving examples of dashboards refined over the course of several months; 2) specific examples and comparisons of dashboards used in three different sites (cities) implementing the national Elev8 initiative, including printed documents and an interactive web-based design; and 3) an understanding of specific challenges to using data dashboards with program administrators and ways to overcome those challenges.
Developing Dashboards That Have Local and National Relevance for a Multi-city Initiative
Meridith Polin, Public/Private Ventures, mpolin@ppv.org
Public/Private Ventures coordinates the national evaluation of Elev8. As part of this work, P/PV is responsible for creating monthly reports that describe utilization across the primary pillars of Elev8: school-based health centers, extended-day learning programs, and family support programs. P/PV developed local dashboards to inform the work at the site level as well as a national dashboard for additional stakeholders including policymakers and funders. The dashboards are also used to assess quality and completion of data. As P/PV's work with the sites winds down, we will work with them to sustain data collection and the utilization of the Elev8 dashboards. This presentation will focus on how the dashboards were developed with the varying needs of stakeholders in mind, how P/PV developed both local and national indicators, and what the sustainability plans for sites are.
Using School-level Dashboards for Continuous Improvement in an Integrated Services Initiative
Nathan Hess, Chapin Hall at the University of Chicago, nhess@chapinhall.org
In an integrated service delivery model, programs work together to serve the needs of a particular population. In the case of Elev8, three to four primary partners work within a school building to support student success. Dashboards hold particular value for these staff to understand where their work intersects and what the impacts of their collective efforts are. In this presentation, Chapin Hall, the local evaluator for Elev8 Chicago, will describe how we use school-level dashboards to ignite conversations about service delivery, continuous improvement efforts, and impacts. We will discuss the Elev8 'feedback loop', the opportunities and challenges of our approach, and how the dashboards have evolved over time.
Data Visualization on Demand: Using an Online Dashboard to Provide Access and Information to Stakeholders
Taj Carson, Carson Research Consulting Inc, taj@carsonresearch.com
Carson Research Consulting is the local evaluation team for Elev8, a project designed to help middle school students make a successful transition to high school. As part of our evaluation, we collect information using an online database where program participation data is stored, and health partners and the Baltimore City Public Schools provide us with data on health services received, school attendance, and academic performance. In order to make the most of these data, we used Incite, a powerful dashboard and data visualization tool. With Incite, stakeholders always have access to the dashboard and can see the patterns and relationships that they need to know about day to day. Incite allows the evaluation team, program staff and stakeholders to slice and dice the information available, and facilitates more informed conversations about the meaning of the data we see month to month.

Roundtable: Evaluating Students' Career Decisions in Science, Technology, Engineering, and Mathematics (STEM) Fields
Roundtable Presentation 961 to be held in Balboa A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Karen Yanowitz, Arkansas State University, kyanowitz@yahoo.com
Carolyn Cohen, Cohen Research & Evaluation LLC, cohenevaluation@seanet.com
Davis Patterson, Independent Consultant, davispatterson@comcast.net
Rachel Becker-Klein, PEER Associates, rachel@peerassociates.net
Michael Duffin, PEER Associates, michael@peerassociates.net
Abstract: Selecting a career in a science, technology, engineering, and mathematics (STEM) field relies not only on the student's interest and attitudes, but also on the influence of parents, teachers, and other significant people in the student's life. Thus, evaluations of programs designed to ultimately impact children's career decisions (as specified by the ITEST goals) may need to assess not only the students themselves, but also the impact of the program on other stakeholders, such as parents or teachers. In this session, evaluators from several programs will discuss the challenges they have faced in ascertaining how different types of STEM programs, designed for students and/or teachers, may influence these participants, as well as other interested parties, such as parents, in ways that may ultimately impact students' interest in and awareness of STEM career opportunities.

Session Title: Friend or Foe? Collaboration Among Evaluation and Program Staff Within Foundations
Panel Session 962 to be held in Balboa C on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Phillip Chung, Colorado Trust, phillip@coloradotrust.org
Abstract: For many foundations, effective collaboration across evaluation and program departments remains a considerable barrier to developing, implementing, assessing and learning from a grant investment. Indeed, while there is frequent talk among foundations of being "interdisciplinary", what does this actually mean and look like? This session brings together program officers and evaluation officers from two different foundations for an insider look into their varying models of integrating evaluation and evaluative learning into the work of a foundation. In particular, panelists will discuss the challenges and opportunities evaluation and program officers face in meaningfully collaborating across the grantmaking and managing spectrum to achieve a common goal; practical strategies for making evaluation findings useful and used; and the impact that effective or ineffective evaluation-program officer partnerships may have on grantees. We will also engage in discussion on how implementing an interdisciplinary approach can affect a foundation's vision of becoming a "strategic grantmaker".
Program-Evaluation Collaboration in a Strategic Learning Environment: The Evaluation Perspective
Phillip Chung, Colorado Trust, phillip@coloradotrust.org
In 2007, The Colorado Trust evolved the focus of its evaluation department to encompass a more strategic emphasis on systematically integrating research, evaluation and learning into foundation practice. This vision, embarked on earlier by several foundations across the country, sought to foster a culture and process of using evaluation findings, data and research as cornerstones in informing how grants are conceptualized, developed and implemented. Moreover, it was envisioned as a platform to promote constant reflection and inquiry by foundation staff at all levels, but especially among program and evaluation staff. Over the years, we have encountered many fits and starts throughout this process. One of those "fits and starts" reflects how The Trust's shift in evaluation has impacted the way in which program officers and evaluation staff effectively collaborate. This presentation will discuss the evaluation perspective for how this issue has manifested.
Program-Evaluation Collaboration in a Strategic Learning Environment: The Program Perspective
Chris Armijo, Colorado Trust, chris@coloradotrust.org
Program officers play an integral role in identifying, understanding and using evaluation findings to regularly inform their work with a grantee organization or in assessing the impact of a grant strategy. In order to realize that vision, however, program officers must be more actively involved in thinking about evaluation, proactively involve evaluation staff at key times, and be open to difficult conversations about evaluation findings. Indeed, a critical question that The Colorado Trust continually addresses is how to make the program-evaluation collaboration a more intentional and systematic process that occurs on an ongoing basis. Moreover, what can program officers do individually to foster greater internal partnership, and how does this translate into a more cross-departmental practice, especially as we seek to integrate evaluation and evidence-based practice throughout our grantmaking?
Evaluation in a Foundation Setting: The View From Evaluation Staff
Marisa Allen, Colorado Health Foundation, mallen@coloradohealth.org
Over the past three years, the Colorado Health Foundation (TCHF) designed and implemented an innovative evaluation model that tracks progress in achieving its mission of making Colorado the healthiest state in the nation. In 2008 TCHF began developing meaningful evaluation strategies to support the Foundation in achieving its mission, to provide a valuable tool for foundation staff and board members to determine grantmaking successes, and to inform future strategies for high-impact, cost-effective grantmaking. One of the challenges of building a new evaluation model has been integrating evaluative thinking into the existing grantmaking process. From this session, funders and interested others will learn what worked for TCHF, the challenges we faced, and some strategies for integrating evaluation into Foundation strategy and practices.
Evaluation in a Foundation Setting: The View From Program Staff
Brenda Sears, Colorado Health Foundation, bsears@coloradohealth.org
The Colorado Health Foundation's (TCHF) implementation of a new evaluation model changed the grant review process and the day-to-day work of program officers. From the perspective of program staff, integrating evaluation activities into grant recommendations has required a focus on developing a skill set not previously required of program officers. Though a steep learning curve for some, the shift in mindset across staff at all levels towards more evaluative thinking has created several benefits. Evaluation is not an afterthought, but often a first step in the grant review process. Staff throughout the organization actively participates in evaluation planning, supports data collection, and discusses key findings. Session participants will leave with practical strategies for engaging program staff in evaluation and providing the education necessary for thoughtful contributions.

Session Title: Mindfulness-based Facilitation in the Development of Evaluation Capacity Building for Non-Governmental Organizations (NGOs)
Demonstration Session 963 to be held in Capistrano A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Shelly Sharon, Independent Consultant, savioshe@gmail.com
Abstract: The literature on ECB processes and programs mainly emphasizes aspects such as methods, design, and mechanisms in evaluation (e.g. Preskill & Boyle, 2008). Skolits, Morrow & Burr (2009) highlight the diverse roles that evaluators hold in the evaluation process, such as critical friend, manager, and educator. However, ECB is not just about fostering evaluative thinking or learning to foster sound evaluation practices; it is also about influencing the organizational culture and changing attitudes and perceptions about evaluation. Mindfulness is a process of illuminating the evaluator's own values in a social process and creating an evaluation culture. Mindfulness-based facilitation emphasizes the role and qualities an evaluator manifests in his/her facilitation during the ECB process. Hence, ECB facilitation raises the client's awareness by containing difficult emotions as these arise in the process of change. The essence of mindfulness-based facilitation is incorporating values of compassion, integrity, professionalism, care, modesty, and active listening (among others) into all evaluative communication and practices.

Session Title: Hearing the Family Voice in Evaluation
Panel Session 964 to be held in Capistrano B on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Social Work TIG
Chair(s):
Madeleine Kimmich, Human Services Research Institute, mkimmich@hsri.org
Abstract: Social services evaluations typically assess service impact through specifically-defined measures of recipient outcomes, often a mixture of objective and subjective information (for example, whether a problem behavior has ceased as well as professional judgments about how likely the behavior is to reoccur). Research indicates that family success depends not just on the objective merits of a service but also on how the intervention is seen by family members and how actively they engage in the process. It is essential to understand the family's direct experience of services, especially how they view their progress related to the intervention. Panelists present three distinct approaches (interview, survey, and focus group) to learning the perspective of family members. Each panelist describes the intervention being studied, key questions to be answered, how family voice was captured, and the results obtained. Discussion addresses the benefits, challenges, and value of hearing the family voice.
Using Qualitative Interviews to Increase Understanding of the Kinship Caregiving Experience
Kimberly Firth, Human Services Research Institute, kfirth@hsri.org
HSRI conducted conversational, qualitative interviews by telephone with over 60 kinship caregivers (relatives or non-relatives caring for a child or children who would otherwise be in foster care), as part of the Ohio Title IV-E waiver demonstration evaluation. These interviews shed light on the impact of waiver-funded kinship supports from the kinship caregiver perspective. Interviewees shared their experiences, including interactions with child welfare agency staff and navigating court and service systems. Integrating this information with other study findings is especially revealing; for example, some kinship caregivers were confused by the legal custody process and its implications, an important consideration in light of waiver efforts to increase child permanency rates by giving legal custody to kinship caregivers. Kimberly Firth conducted many of the kinship caregiver interviews and analyzed the resulting data; she will share interview findings and discuss the challenges of integrating family voice into other study efforts.
Focus Groups with Families Who Participate in Family Team Meetings
Madeleine Kimmich, Human Services Research Institute, mkimmich@hsri.org
Family Team Meetings (FTM) is a specific form of case planning that is the centerpiece of Ohio's Title IV-E waiver demonstration. Under the rubric of FTM, a wide range of people known to the family come together on a regular basis to plan for and review progress the family is making to assure the safety and stability of children. The evaluation examines the implementation of FTM across 18 demonstration sites, with special attention to fidelity to the steps in the FTM process. However, researchers have had less success measuring the 'black box', to learn about the nature of the interactions which occur and how participant outcomes are affected. Focus groups conducted with participating families revealed crucial limitations in the FTM process, which enriched the understanding of the intervention's impact and led to modifications to subsequent evaluation design. Madeleine Kimmich, Principal Investigator, has 35 years of experience in social services evaluation.
Family Voice Through Mail Surveys: An Example From a Child Welfare Differential Response Program
Linda Newton-Curtis, Human Services Research Institute, lnewton@hsri.org
Reports of child abuse are typically met by a child welfare investigation, and, if allegations are substantiated, follow-up services will be ordered by the agency or the court - a top-down approach. Differential Response represents a shift in intervention philosophy and practice with some low-risk families, replacing the investigation with a more supportive approach. Workers partner with families to identify services and supports that best fit the needs and characteristics of the family, specifically valuing the family perspective. To understand the difference in family outcomes arising from the traditional and the alternative social work approaches, a randomized control trial methodology was employed. Results from exit surveys conducted among families served in both the traditional and the alternative tracks show marked differences. Linda Newton-Curtis, lead survey researcher for HSRI's evaluation of Differential Response in Ohio, discusses the pros and cons of survey methodology as a way to present 'family voice'.

Session Title: Using Outcomes for Quality Improvement
Multipaper Session 965 to be held in Carmel on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Laura Anderson, University of Maryland, Baltimore, landerso@psych.umaryland.edu
Effects of Routine Feedback to Clinicians on Youth Mental Health Outcomes: A Randomized Cluster Design
Presenter(s):
Leonard Bickman, Vanderbilt University, leonard.bickman@vanderbilt.edu
Susan D Kelley, Vanderbilt University, susan.d.kelley@vanderbilt.edu
Carolyn Breda, Vanderbilt University, carolyn.s.breda@vanderbilt.edu
Ana Regina Andrade, Vanderbilt University, ana.regina.andrade@vanderbilt.edu
Manuel Riemer, Wilfrid Laurier University, manuel.riemer@gmail.com
Abstract: We tested the hypothesis that youths receiving treatment as usual (TAU) in a community setting would improve faster when clinicians received frequent feedback on their clients' progress. A cluster randomized experiment was conducted with 28 sites delivering home-based treatment in 10 U.S. states. Data were collected at the end of each session from the youths, caregivers and clinicians. A total of 356 youths, 432 caregivers and 167 clinicians participated. Intent-to-treat analyses using hierarchical linear modeling of data provided by clinicians, caregivers, and youths all showed that clients of clinicians in the sites that could receive feedback improved faster than those in sites that could not. A dose-response analysis showed that clinicians who viewed more feedback reports showed even stronger effects. Routine measurement and feedback can be used to improve outcomes for youths who receive TAU.
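As a rough illustration of this kind of analysis, the sketch below fits a growth model in Python with statsmodels: session-by-session symptom scores with a random intercept and slope per youth, testing whether improvement is faster in the feedback arm. The file and column names and the model form are hypothetical simplifications (a full analysis would also model the site level, since randomization was by site), not the authors' actual specification.

```python
# Hypothetical growth-model sketch for a feedback experiment: youths are
# measured at every session; 'feedback' marks sites randomized to receive
# clinician feedback. A significant session:feedback interaction would
# indicate a different rate of symptom change under feedback.
import pandas as pd
import statsmodels.formula.api as smf

# One row per youth per session: symptoms, session, feedback (0/1), youth_id.
df = pd.read_csv("sessions.csv")

model = smf.mixedlm(
    "symptoms ~ session * feedback",   # fixed effects: slope differs by arm
    data=df,
    groups=df["youth_id"],             # random effects grouped by youth
    re_formula="~session",             # random intercept and slope per youth
)
result = model.fit()
print(result.summary())  # inspect the session:feedback coefficient
```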
Evaluation of an Organizational Innovation: Use Of A Clinical Quality Tool To Improve Substance Abuse Treatment
Presenter(s):
Kakoli Banerjee, Santa Clara County Alcohol & Drug Services, kakoli.banerjee@hhs.sccgov.org
Laurie Drabble, San Jose State University, laurie.drabble@sjsu.edu
Michael Hutchinson, Santa Clara County, michael.hutchinson@hhs.sccgov.org
Abstract: In health care organizations, innovation is driven by a need to improve service delivery, service quality, or client outcomes. Because pragmatic considerations drive the adoption of innovative practices, these practices are frequently adopted in circumstances where traditional program evaluation models cannot be readily used. In this paper, we use a theory-of-change approach to evaluate an innovative clinical quality tool in a large county substance abuse treatment system. The county implemented a patient-reported outcome measurement system based on Miller and Duncan's model as a clinical tool. In this study, we propose to link client-derived measures (clinical measures gathered during treatment) with organizational measures (treatment context) and substance abuse treatment outcomes at discharge. The study affords an opportunity to examine the interaction between an intervention and different contexts, using data gathered for program monitoring purposes, to see how client impact may differ across individual treatment providers.
Evaluating Mental Health Providers' Perceptions of Using Routine Outcome Surveys to Measure Clients' Progress in Therapy: A Mixed Methods Approach Using Ajzen's Theory of Planned Behavior
Presenter(s):
Jennifer D Lockman, Centerstone Research Institute, jennifer.lockman@centerstone.org
Randall S Reiserer, Centerstone Research Institute, randall.reiserer@centerstone.org
Casey C Bennett, Centerstone Research Institute, casey.bennett@centerstone.org
Christina Van Regenmorter, Centerstone Research Institute, christina.vanregenmorter@centerstone.org
April Bragg, Centerstone Research Institute, april.bragg@centerstone.org
Rebecca Selove, Centerstone Research Institute, rebecca.selove@centerstone.org
Kathryn Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
Tom Doub, Centerstone Research Institute, tom.doub@centerstone.org
Abstract: Behavioral health policy makers and insurance providers have emphasized using standardized outcome assessments to track clients' progress in therapy. However, behavioral health providers rarely measure patient-reported outcomes (PROs). Here we discuss a mixed methods evaluation of a large-scale pilot study that addressed clinician perceptions of the Client Directed Outcomes Informed (CDOI) instrument in a real-world community behavioral health setting. The purpose of this study was twofold: 1) to use Ajzen's Theory of Planned Behavior (TPB) to identify factors that support effective implementation, and 2) to examine the degree to which collecting routine outcomes using CDOI affects client-reported therapeutic outcomes. Our results suggest that the CDOI instrument has the capacity to predict client progress in therapy. Therapeutic progress varied with respect to different domains of Ajzen's TPB, clinician age, and years of clinical experience. We discuss values-based implications for creating an organizational climate conducive to using outcome measures to track therapeutic progress.

Session Title: When Was the Last Time You Played With Tinker Toys? Teaching Evaluation Can Be Fun!
Demonstration Session 966 to be held in Coronado on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Shelly Mahon, University of Wisconsin, Madison, mdmahon@wisc.edu
Abstract: The primary purpose of this demonstration is to illustrate a variety of ways to use hands-on, interactive activities to teach evaluation concepts. These can be used in a formal classroom or to build capacity within the organization where you work. Everything from understanding the difference between research and evaluation to collecting and analyzing data can be taught using fun and energizing activities that keep people engaged. Following a brief discussion of the benefits of activity-based learning, the presenter will facilitate the group in a variety of activities. Participants will learn how to introduce, facilitate, and debrief activities in a way that ties the learner's experience to the concepts, theories, and practices being taught. Finally, participants will use tools like climbing ropes, tinker toys, and tent poles to play with, create, and operationalize important evaluation concepts. Participants will leave with new strategies and fresh ideas for teaching evaluation and building evaluation capacity.

Session Title: The Tennis Ball Game: Evaluating and Improving Processes Through PDSA (Plan-Do-Study-Act) Cycles
Skill-Building Workshop 967 to be held in El Capitan A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Liza Kasmara, Harlem United, lkasmara@harlemunited.org
Tamika Howell, Harlem United, thowell@harlemunited.org
Abstract: During this session, participants will learn the Plan-Do-Study-Act (PDSA) method often employed by hospitals and other health organizations to continuously evaluate and improve processes in delivering patient care. Participants will be engaged in an interactive tennis ball game (1) designed to help them understand the PDSA concept and how to conduct PDSA cycles. In addition, findings will be presented from a quality improvement project that used the PDSA method to increase access to care. The QI project was conducted at Harlem United, a community health organization in New York City that provides services to medically underserved individuals in the Central and East Harlem neighborhoods. By the end of the workshop, participants will know how to design changes to a process, test changes and build on them, and set goals for making improvements. Reference: (1) New York Department of Health AIDS Institute (2010). The Game Guide: Interactive exercises for trainers to teach quality improvement in HIV care.

Session Title: Evaluating the Paris Declaration: A Joint Cross National Evaluation
Panel Session 968 to be held in El Capitan B on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Niels Dabelstein, Danish Institute for International Studies, nda@diis.dk
Abstract: The Paris Declaration, endorsed in March 2005 by over one hundred Ministers and Heads of Agencies, lays down a roadmap to improve the quality of aid and its impact on development. An independent, multi-phased, cross-country evaluation of the Paris Declaration was initiated in 2007. The first phase comprised 20 separate but coordinated evaluations in donor and developing countries; a synthesis of these evaluations was published in June 2008. The second phase comprised 28 evaluations of donors and developing countries; it was conducted during 2010-11, and the report was published in June 2011. The focus of phase 2 is on the effects of the Paris Declaration on aid effectiveness and development results. This is one of the largest joint evaluations undertaken to date, applying a unique decentralized approach that employed more than 150 evaluators from more than 30 countries. The panel presents the results of the evaluation and lessons learned from the approach.
Organizing the Evaluation of the Paris Declaration
Niels Dabelstein, Danish Institute for International Studies, nda@diis.dk
This first presentation will describe the overall architecture of the evaluation and how the evaluation was organized and designed to ensure stakeholder ownership. Niels Dabelstein is Head of the PD Evaluation Secretariat and coordinated the overall evaluation.
Evaluating the Paris Declaration: A Developing Country Perspective
Jaime Garron Bozo, Ministry of Development Planning, Bolivia, jaime.garron@vipfe.gob.bo
This presentation will describe and discuss the conduct of the country evaluation in Bolivia. Focus will be on evaluation outcomes and methodological challenges. Mr. Bozo is a member of the International Reference Group for the Evaluation and was responsible for the conduct of the evaluation in Bolivia.
Evaluating the Paris Declaration: A Donor Country Perspective
Ted Kliest, Netherlands Ministry of Foreign Affairs, tj.kliest@minbuza.nl
This presentation will describe and discuss the conduct of the donor country evaluation in the Netherlands. Focus will be on organizational issues. Mr. Kliest is a member of the International Reference Group and chairs the Management Group for the Evaluation. He was responsible for the conduct of the evaluation in the Netherlands.
Evaluating the Paris Declaration: Synthesizing 33 Evaluations and Studies
Bernard Wood, Bernard Wood & Associates Limited, bwood@gmail.com
This presentation will present the main findings and conclusions of the Synthesis report and discuss the methodological challenges of synthesizing 28 diverse evaluations and 5 Thematic Studies. Mr. Wood is an independent evaluator who led the team that produced the overall Synthesis report of the Evaluation of the Paris Declaration.

Session Title: Advanced Geographic Information System (GIS): Using Geostatistics in Evaluation
Demonstration Session 970 to be held in Huntington A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
David Robinson, University of Redlands, davidrgis@gmail.com
Abstract: This demonstration illustrates the power of GIS and how it can be used in evaluation practice. Participants in this session will be presented with examples of how GIS can be used to analyze program implementation and impact data. This will be a step-by-step demonstration of some of the main features of GIS and their applicability to program evaluation. Examples will be drawn from programs to showcase the types of evaluation questions that GIS can answer through its geostatistical capabilities and visual mapping abilities. The session connects the theoretical application of GIS to its practical application, with the hope that practitioners will be able to implement some of the suggested approaches in their own evaluation practice. Participants will also be given a list of relevant GIS resources to help with their professional development.
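As a concrete, if simplified, taste of what such analyses involve (all regions, sites, coordinates, and outcome values below are invented): one basic GIS building block is joining program-site points to region polygons and aggregating outcomes by region. The geostatistical and mapping capabilities discussed in the session build on data organized in this way.

import geopandas as gpd
from shapely.geometry import Point, Polygon

# Two hypothetical service regions as polygons.
regions = gpd.GeoDataFrame(
    {"region": ["north", "south"]},
    geometry=[Polygon([(0, 1), (2, 1), (2, 2), (0, 2)]),
              Polygon([(0, 0), (2, 0), (2, 1), (0, 1)])],
    crs="EPSG:4326")

# Hypothetical program sites, each with an outcome rate measured on site.
sites = gpd.GeoDataFrame(
    {"site": ["A", "B", "C"], "outcome": [0.62, 0.48, 0.71]},
    geometry=[Point(0.5, 1.5), Point(1.5, 0.5), Point(1.2, 1.8)],
    crs="EPSG:4326")

# Spatial join: attach each site to the region that contains it,
# then aggregate the outcome by region.
joined = gpd.sjoin(sites, regions, predicate="within")
print(joined.groupby("region")["outcome"].mean())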

Session Title: Government Evaluation: International Case Studies
Multipaper Session 972 to be held in Huntington C on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Government Evaluation TIG and the International and Cross-cultural Evaluation TIG
Chair(s):
Jim Rugh, RealWorld Evaluation, jimrugh@mindspring.com
Outcomes-based Performance Evaluation in the South African Public Sector
Presenter(s):
Fanie Cloete, University of Johannesburg, fcloete@uj.ac.za
Babette Rabie, University of Stellenbosch, South Africa, babette.rabie@spl.sun.ac.za
Abstract: The South African government has adopted an ambitious government-wide monitoring and evaluation system (GWM&ES), headed since 2009 by a newly established Ministry for Monitoring and Evaluation in the Presidency. The system is supposed to systematically monitor and evaluate the outcomes of government programme performance at all governmental levels. However, South Africa has serious backlogs in infrastructure and services, and the system has its own weaknesses. This paper deals with one such weakness: the absence of policy outcome indicators to measure progress towards a better life for all. The paper assesses the Outcome Service Agreements for basic education, enhanced employment, and sustainable human settlements that have been concluded between the President and his Ministers. It concludes that the move from an output to an outcome evaluation focus is still largely conceptual and rhetorical: appropriate measurable statements (outcomes) and measuring instruments (indicators), in line with the good practices of outcome-driven evaluation and indicator development, are not yet in place.
Evaluating the Local Government Anti-Corruption Initiative: A South African Experience
Presenter(s):
Peter Vaz, Research Triangle Institute, pvaz@rti.org
Mary Cole, Development-Evaluation.com, mw.mjcole@mweb.co.za
Moses Rangata, Cooperative Governance and Traditional Affairs, mosesr@cogta.gov.za
Abstract: The goal of the Local Government Anti-Corruption Initiative (LGACI), under the USAID-funded Local Governance Support Program (LGSP) in South Africa, was for municipalities to function with increased transparency and accountability. The evaluation of the LGACI asked whether awareness of corruption had been created and risks assessed; whether anti-corruption strategies were developed, adopted, and implemented; and whether institutional capacity and competence for anti-corruption in municipalities had increased. The evaluation applied a conventional logframe methodology with innovative corruption-prevention indicators and evidence. It established that the LGACI had been influential in stimulating interest, awareness, knowledge, action, and municipal pride in anti-corruption. Democracy and good governance, particularly anti-corruption, pose a new challenge for evaluators and evaluation, and the sharing of experiences is needed. It is too early to cite the evaluation's use in evidence-based policy, but related developments are taking place in South Africa now.
Evidence-based Policymaking: Lessons From 13 Years of Evaluating Oportunidades in Mexico
Presenter(s):
Adolfo Martinez-Valle, Coordinacion Nacional Programa Oportunidades, adolfomartinezvalle@gmail.com
Rogelio Grados, Coordinacion Nacional Programa Oportunidades, rogelio.grados@oportunidades.gob.mx
Ana Nunez, Coordinacion Nacional Programa Oportunidades, ana.nunez@oportunidades.gob.mx
Abstract: Objective: The purpose is to examine why evaluation has been successful for Oportunidades, a thirteen-year-old conditional cash transfer program in Mexico that currently covers more than 6 million poor families. Methods: A systematic review of the program's most important evaluations from 1997 to 2010 is conducted, analyzing both impact and process. Results: Three key factors explain why evaluation has been successful for Oportunidades. First, evaluation findings have proven the impact of the program on its target population. Second, evaluation has produced timely results to improve the design and implementation of the program. Finally, and no less important, there has been a permanent dialogue among policymakers, international development banks, staff, and evaluators to meet all stakeholders' interests. Conclusions: Important lessons are drawn for other countries on how and why evaluations can make an important contribution to the policymaking process of a large-scale capacity development program such as Oportunidades.

Session Title: An Examination of Evaluation Challenges with the Nebraska Strategic Prevention Framework State Incentive Grant (SPF-SIG)
Panel Session 973 to be held in La Jolla on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Phillip Graham, RTI International, pgraham@rti.org
Abstract: In 2005, the Center for Substance Abuse Prevention (CSAP) made awards to a first cohort of states implementing the agency's flagship substance abuse prevention initiative, the Strategic Prevention Framework-State Incentive Grant (SPF-SIG) program, a five-step public health approach to reducing substance use. CSAP has funded approximately 77 states, territories, and tribal nations since 2005, but evaluating SPF-SIG's effectiveness poses methodological and practical challenges. Presenters will highlight the evaluation challenges and share solutions devised in the evaluation of the Nebraska SPF-SIG, in which the Nebraska Department of Public Health funded 16 local coalitions to implement evidence-based prevention strategies using the SPF model. The local coalitions targeted three outcomes: underage drinking, binge drinking, and impaired driving. The first paper examines the use of archival data; the second examines challenges with the implementation fidelity of environmental strategies; and the third examines challenges and solutions associated with measures of effectiveness.
Developing a Rigorous Evaluation Design with Archival Data
Monique Clinton-Sherrod, RTI International, mclinton@rti.org
Phillip Graham, RTI International, pgraham@rti.org
Lori Palen, RTI International, lpalen@rti.org
Jason Williams, RTI International, jawilliams@rti.org
Mindy Anderson-Knott, University of Nebraska-Lincoln, mandersonknott2@unl.edu
Dave Palm, State of Nebraska, david.palm@nebraska.gov
A comprehensive yet feasible evaluation design is essential in capturing the impact of Nebraska's community-level efforts to address substance use through its SPF-SIG. For Nebraska, the outcome evaluation assesses the effectiveness of SPF activities in modifying targeted behaviors, root causes, and contributing factors at the state level through archival data. In addition, outcome data are combined with process data to answer questions about how and why SPF-SIG efforts achieved (or failed to achieve) the state's goals. However, we faced unique challenges in developing a rigorous design that was viable given Nebraska's geographical characteristics. While our design incorporates process and outcome evaluation at the state and local levels, the challenges in developing and implementing it stemmed largely from the rural nature of the state and from the limited pool of comparison sites, given the large intervention service areas. This presentation describes these and other challenges and the steps taken to address them.
Measuring Implementation Fidelity for Environmental Strategies
Lori Palen, RTI International, lpalen@rti.org
Phillip Graham, RTI International, pgraham@rti.org
Monique Clinton-Sherrod, RTI International, mclinton@rti.org
Mindy Anderson-Knott, University of Nebraska-Lincoln, mandersonknott2@unl.edu
Dave Palm, State of Nebraska, david.palm@nebraska.gov
As part of Nebraska's SPF SIG evaluation, local evaluators worked with program coordination/implementation staff to complete implementation fidelity rubrics. These rubrics assessed adherence to best practices for specific alcohol-related environmental interventions (e.g., social marketing, compliance checks). There were two challenges in analyzing fidelity rubric data. First, the rubrics confounded strategy progress with fidelity, such that an in-progress strategy might receive the same score as a completed strategy with poor fidelity. Second, there was a ceiling effect that complicated comparisons across communities, strategies, and time points: of the environmental rubrics completed in the first year of implementation, average fidelity was 88% of the maximum possible score, and 56% of rubrics received a perfect score. There was also an unanticipated benefit of administering fidelity rubrics: for implementers who were less knowledgeable about strategies, the rubric items served as a checklist of actions that would need to be taken to ensure fidelity moving forward.
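For readers who want to check their own rubric data for this pattern, here is a minimal sketch (with made-up rubric totals and a hypothetical maximum score; the 88% and 56% figures above come from the Nebraska evaluation, not from this code) of the two ceiling-effect diagnostics cited: mean fidelity as a share of the maximum possible score, and the proportion of perfect scores.

import pandas as pd

MAX_SCORE = 20  # hypothetical rubric maximum
scores = pd.Series([20, 20, 18, 20, 17, 20, 19, 16, 20, 18])  # made-up totals

# Scores bunched near MAX_SCORE on both diagnostics suggest a ceiling effect.
print(f"mean fidelity: {scores.mean() / MAX_SCORE:.0%} of maximum possible score")
print(f"perfect scores: {(scores == MAX_SCORE).mean():.0%} of rubrics")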
Measuring Effectiveness in SPF-SIG Evaluation: Challenges and Solutions
Jason Williams, RTI International, jawilliams@rti.org
Phillip Graham, RTI International, pgraham@rti.org
Lori Palen, RTI International, lpalen@rti.org
Monique Clinton-Sherrod, RTI International, mclinton@rti.org
Dave Palm, State of Nebraska, david.palm@nebraska.gov
Evaluation of SPF-SIG efforts in Nebraska faces challenges across multiple points of implementation. Relevant data originate from multiple levels, including state, county, and individual respondents. Models of interest may rely on data from more than one source, which has implications both for the method of aggregating disparate data sources and for which level of analysis is most appropriate (e.g., individual vs. local community). Geographic and population characteristics of Nebraska pose challenges to finding appropriate comparison communities not exposed to SPF-SIG programming. Within SPF-SIG communities there is no uniformity in the primary outcomes, root causes, and contributing factors targeted by coalitions, which further complicates the selection of relevant samples and comparisons for evaluation models of the initiative. This presentation describes these challenges and offers proposed solutions and methods to ameliorate their impact on drawing valid inferences about the effects of the Nebraska SPF-SIG.

Session Title: Researching Ethical Dilemmas
Multipaper Session 974 to be held in Laguna A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Michael Szanyi, Claremont Graduate University, szanyi.michael@gmail.com
Ethical Sensitivity in Evaluation and How Evaluators Identify Ethical Dilemmas
Presenter(s):
Steve Jacob, Laval University, steve.jacob@pol.ulaval.ca
Geoffroy Desautels, Laval University, geoffroy.desautels.1@ulaval.ca
Abstract: Evaluation occurs in a context where ethical dilemmas are frequent and unavoidable. Evaluators must deal daily with various ethical tensions that could potentially influence the quality of their work. Using an analytical model that categorizes evaluators as either the corporatist type (sensitive to the reputation of the profession and to market pressures) or the altruistic type (sensitive to the social impacts of evaluation), we sought to discover whether evaluators demonstrate the same sensitivities when faced with ethical dilemmas. Inspired by a research methodology put forward by Morris and Jacobs, we met with and interviewed Canadian evaluators of public policies. Our research led us to conclude that altruistic-type evaluators show high ethical sensitivity, while corporatist-type evaluators show more moderate ethical sensitivity. We also observed that other factors, such as the evaluation environment (internal or external) and experience, can influence ethical sensitivity.
Anytime, Anywhere, Evaluation Ethics DO Matter!
Presenter(s):
Wayne MacDonald, Social Sciences and Humanities Research Council of Canada, wayne.macdonald@sshrc-crsh.gc.ca
Heather Buchanan, Jua Management Consulting Services, hbuchanan@jua.ca
Abstract: At these meetings, we are invited to examine professional practice within a context of 'values and valuing'. Custom and norms are shaped by what we value. This paper presents an international comparison of ethical challenges identified by the evaluation communities in Canada, the United States, and Australia. While the dialogue has been quieter in Canada than in other countries, the presentation will highlight results from a 2010 national survey of Canadian evaluators. These will be analyzed against a backdrop of findings from national surveys of members of the American Evaluation Association (1993; 2009) and the Australasian Evaluation Society (2003). Based on the findings, we argue for a more proactive agenda for the Canadian Evaluation Society to support the needs and challenges of its members in dealing with ethical challenges. For AEA, should ethics be a standing item at annual meetings, with proposals actively solicited and considered by a TIG/committee?

Session Title: Representing Stakeholder Values Through Effective Communication of Findings
Skill-Building Workshop 975 to be held in Laguna B on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Data Visualization and Reporting TIG
Presenter(s):
Danielle Martin, Army Institute of Public Health, danielle.n.martin2.ctr@us.army.mil
Jennifer Piver-Renna, Army Institute of Public Health, jennifer.piverrenna@us.army.mil
Abstract: Stakeholder involvement increases evaluation use and has become accepted practice within the evaluation profession: in a 2006 survey, 98% of American Evaluation Association members agreed that evaluators should take responsibility for involving stakeholders in the evaluation process. Stakeholder values and involvement differ across organizations and causes, as does the use of findings in decision-making processes. To fulfill the mission of a program evaluation, an evaluator must communicate findings effectively by identifying and understanding stakeholders' values. Attendees of this skill-building workshop will gain the hands-on experience necessary to articulate stakeholders' values and to frame, document, and disseminate findings based on those values. Attendees will also work through scenarios of program evaluations and be directed through the process of communicating findings to specific stakeholder audiences. Additionally, attendees will receive a list of resources for further practice and learning.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Developing Useful and Effective Monitoring and Evaluation Systems: A Discussion on Experiences Determining Which Data Counts
Roundtable Presentation 976 to be held in Lido A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Meredith Blair, Humanity United, mblair@humanityunited.org
Kristen Zimmerman, Mennonite Central Committee, kristenzimmerman@mcc.org
Abstract: Decisions about which data are collected are laden with value judgments, and despite our best intentions to develop monitoring systems and evaluation methodologies that incorporate beneficiary feedback, time and resource constraints limit our use of truly participatory approaches. Spanning international locations introduces further constraints of connectivity, language, and travel. As more emphasis is placed on demonstrating contribution, the practice of collecting more data on indicators and outcomes continues to grow, and so do the incentives for donors and headquarters offices to set extensive data collection requirements for grantees and implementing partners. The presentation, facilitated by Humanity United, a small grantmaking organization, and the Mennonite Central Committee, an international nongovernmental organization, aims to draw out recommendations, practices, and lessons learned on how to incorporate beneficiary voice into program design, data collection, and evaluation while dealing with program and project resource and time constraints.
Roundtable Rotation II: Building Participatory Monitoring and Evaluation (M&E) Capacity for an International Development Organization: The Hunger Project, a Case Study
Roundtable Presentation 976 to be held in Lido A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Catherine Borgman-Arboleda, Independent Evaluation Consultant, cborgman.arboleda@gmail.com
Dana H Taplin, ActKnowledge Inc, dtaplin@gc.cuny.edu
Francis Oseh-Mensah, The Hunger Project, fran6_omensah@yahoo.com
Carolyn Ramsdell, The Hunger Project, cmr@thp.org
Abstract: This session will draw on the evaluators' experience building a participatory M&E system for The Hunger Project, an international NGO focused on ending poverty. We will share challenges and effective practices in building a monitoring and evaluation system that meets dual needs: demonstrating the impact of complex social change work (especially to institutional funders) and developing an internal knowledge strategy that feeds and informs work happening on the ground. We will look at the methods and models used, with a specific focus on working with limited budgets and on using online tools (such as Google Docs) and guides to facilitate collective evaluation design, implementation, and analysis. We will discuss ways in which a Theory of Change model has been helpful to evaluation design and can support ongoing evaluative thinking. We anticipate having at least one staff member from THP present. (The M&E staff person from the Ghana Hunger Project office, Francis Oseh-Mensah, is applying for international travel funding.)

Session Title: Implementing Evidence-based Programs to Prevent Adolescent Pregnancy: Real World Challenges for Evaluators and Practitioners
Multipaper Session 977 to be held in Lido C on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Jane Powers, Cornell University, jlp5@cornell.edu
Discussant(s):
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Abstract: In recent years, there has been substantial interest in the use of evidence-based programs (EBPs) in the field of teen pregnancy prevention. This reflects the increased quantity and quality of evaluation findings that demonstrate that certain programs are effective in reducing risky sexual behavior and promoting sexual health. Although EBPs have yielded positive results under research conditions, there is mixed evidence regarding their ability to achieve similar outcomes in "real world" settings. Each of the presenters in this session is actively engaged in supporting the implementation of evidence-based teenage pregnancy prevention programs in different community settings. They will describe strategies used to address the challenge of implementing EBPs with fidelity, including how they 1) engage communities as partners in implementation research, 2) monitor and measure the fidelity of program implementation, and 3) use evaluation data to identify challenges and supports and to build capacity to implement EBPs.
Finding Common Values: Recruiting Schools for a Randomized Controlled Trial of a Teen Pregnancy Prevention Program
Shannon Flynn, South Carolina Campaign to Prevent Teen Pregnancy, sflynn@teenpregnancysc.org
Sarah Kershner, South Carolina Campaign to Prevent Teen Pregnancy, skershner@teenpregnancysc.org
Christopher Rollison, South Carolina Campaign to Prevent Teen Pregnancy, crollison@teenpregnancysc.org
Mary Prince, South Carolina Campaign to Prevent Teen Pregnancy, mprince@teenpregnancysc.org
This presentation will describe the SC Campaign's process of developing relationships with schools and securing their involvement in a federally funded randomized controlled trial. The values of the schools and the larger community were important factors in their willingness to initiate an evidence-based comprehensive teen pregnancy prevention program for middle school youth and to engage in a five-year randomized controlled trial. As the lead agency on this project, the SC Campaign and each school had to find common ground around their respective values on teen pregnancy prevention, evaluation, and partnerships. The recruitment process will be described, including challenges, strategies to overcome them, and successes. Shannon Flynn, MSW, is the Research and Evaluation Director at the SC Campaign to Prevent Teen Pregnancy, a statewide nonprofit organization.
Enhancing Fidelity and Quality of Implementation in Diverse Community Settings
Jennifer Duffy, South Carolina Campaign to Prevent Teen Pregnancy, jduffy@teenpregnancysc.org
Amy Mattison Faye, South Carolina Campaign to Prevent Teen Pregnancy, afaye@teenpregnancysc.org
Erin Johnson, South Carolina Campaign to Prevent Teen Pregnancy, ejohnson@teenpregnancysc.org
Forrest Alton, South Carolina Campaign to Prevent Teen Pregnancy, falton@teenpregnancysc.org
The SC Campaign recently began working intensively in two communities with the goal of reducing the teen birth rate in each community by 10% within five years. To meet this goal, the SC Campaign is working with a wide range of youth-serving organizations in each community to support implementation of different evidence-based programs. Collecting information on the extent to which these programs are implemented with fidelity, and using those data for quality assurance and improvement, are essential to the success of the overall project. Challenges and strategies for engaging program facilitators in the collection of fidelity data will be described in this presentation. In addition, tools for collecting information on fidelity and adaptations will be covered. Finally, plans for using these data to identify potential implementation problems, and strategies to improve the quality of implementation, will be discussed.
Practitioner Versus Evaluator Perspectives on the Value of Evidence in Teen Pregnancy Prevention
Jane Powers, Cornell University, jlp5@cornell.edu
Marilyn Ray, Finger Lakes Law & Social Policy Center, mlr17@cornell.edu
Amanda Purington, Cornell University, ald17@cornell.edu
The NYS Department of Health has recently launched a new initiative aimed at preventing adolescent pregnancy and promoting adolescent sexual health through a comprehensive and coordinated community approach. For the first time, the state is mandating that its 50 grantees use evidence-based sexuality education programming as a condition of the award. Grantees had to select from a list of 28 programs with proven effectiveness, as identified by the US Department of Health and Human Services. In this presentation, we will describe how the ACT for Youth Center of Excellence, an intermediary organization, is supporting the communities' efforts to implement the EBPs. We will describe the evaluation plan for monitoring implementation, present tools developed to measure adherence to program fidelity, share preliminary evaluation findings, and discuss challenges faced by both evaluators and practitioners.

Session Title: Long-term Impacts: Evaluating Long-term Research Impacts and Valorising Evaluation Long-term
Multipaper Session 978 to be held in Malibu on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Neville Reeve, European Commission, neville.reeve@ec.europa.eu
Abstract: Traditionally, evaluation and monitoring of the EU Framework Programmes have been aligned with policy and decision-making processes, including, for example, the recent interim evaluation of FP7. One effect has been a tendency to treat evaluation results solely as inputs to these processes, with inadequate attention to securing wider and longer-term impacts. At the same time, evaluation has itself often been short term in nature, carried out with the sole intention of validating current activities. The effect has been to ignore long-term impacts. This session will examine separate recent initiatives intended to address these deficits.
Long-term Impacts of the Framework Programs
Neville Reeve, European Commission, neville.reeve@ec.europa.eu
The traditional focus of EU evaluation work has been the next legislative requirement, in effect the preparation of a future FP. While laudable from the perspective of ensuring evidence-based policy, the concomitant weakness has been that less attention is given to the analysis of longer-term impacts. With a view to addressing this deficit, a project was launched in 2010 to examine the impacts of the 4th, 5th, and 6th FPs on a selection of key research areas. The project used innovative quantitative techniques both to select and to analyse different fields. Based on bibliometrics and co-word analysis, these techniques sought to identify breakthroughs in different fields and map them to the research carried out under the FPs. At the same time, case studies supported more in-depth analyses. This paper will present the set-up and findings of this work, including methodological lessons and pitfalls.
Long-term Impacts of FP7 Evaluation Results
Peter Fisch, European Commission, peter.fisch@ec.europa.eu
The interim evaluation of FP7 was carried out by independent experts on the basis of an extensive set of studies, analyses, and statistics. The resulting evaluation report was presented to the European Commission in November 2010. Since then there have been extensive efforts to build on the results of this evaluation through, among other things: a specific and formal exercise by the Commission to reply to the results of the evaluation; dissemination activities; formal political processes, including independent replies fashioned by the European Parliament and the European Council; a major conference to discuss findings and present new ideas; and a range of further consultations intended to lead towards a future funding scheme. These efforts aim to valorise the results of the evaluation and monitoring and to secure a lasting basis for policy making and further evaluation work. This session will present these activities critically, examining weaknesses and strengths.
Indicators for Measuring Outcomes and Impacts
Carlos Oliveira, European Commission, carlos.oliveira@ec.europa.eu
Policy instruments providing the EU's support to research and innovation in ICT often address a multitude of goals and objectives at various levels, ranging from the broad goals of growth and competitiveness, and contributions to societal challenges such as ageing, climate change, and social inclusion, to very specific objectives in particular ICT technology fields (robotics, microelectronics, etc.). The result is complex and heterogeneous portfolios of projects, combined with the need to answer the expectations of various stakeholders (industry, policy-makers at national level, parliamentarians (EP), social groups, national governments). The presentation covers the process of identifying suitable measures (or proxies) to support the monitoring and ex-post evaluation of these programmes in terms of their effectiveness, efficiency, relevance, coherence, sustainability, utility, and added value. The overall coherence of the selected indicators, in terms of relevance, understandability, succinctness, contextualisation, and data-gathering requirements, receives special attention.

Session Title: Evaluation of a Nano Science and Technology Centers Program: Mixed Methods Approach to Assessing its Realization of Policy Objectives
Multipaper Session 979 to be held in Manhattan on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Juan Rogers, Georgia Institute of Technology, jdrogers@gatech.edu
Abstract: The main instrument of recent US nanotechnology policy is a set of programs to fund multiple research centers with a variety of foci depending on the funding agency's portfolio. This session presents the design and results of an evaluation research project to assess the outcomes and impacts of one of these programs. The papers will present the overall design of the project and the results along three dimensions: a) the performance of research as reflected in the published record of papers, patents, and other products, and the evidence of use, examined mainly through bibliometric approaches and networks of collaboration; b) the links with the commercial sector, paying special attention to the collective picture that emerged from the combined activity of the centers; and c) the societal dimensions of the program, which were specifically built into the design of the solicitation in response to one of the priorities of the policy.
Program Level Assessment of Outcomes and Impacts of Research Performance of Centers
Juan Rogers, Georgia Institute of Technology, jdrogers@gatech.edu
The project used a mixed methods explanatory sequential research design, which begins by assembling and analyzing quantitative data and then formulates questions to be followed up in depth with qualitative case studies. The quantitative component was developed from all the center reports, Web of Science publication records, and patent records. Both bibliometric and network analyses were conducted on these data to find patterns at the program level. These patterns were then compared to field-level data for nanoscience and nanotechnology that are already part of our team's research infrastructure. For productivity, citations, collaboration, interdisciplinarity, and international linkages, among other features of research performance, we were able to establish field-level benchmarks and program-level patterns in order to situate the program in the field and get a sense of its role and impact.
Aggregate Patterns of Linkage of Nanotechnology Centers With Industry: Program Outcomes
Luciano Kay, Georgia Institute of Technology, luciano.kay@gatech.edu
The commercial potential of research on nanoscience and nanotechnology is one of the main justifications for its public support. This paper uses the results of both the quantitative and qualitative analyses to establish program-level patterns of linkage between centers and industry. The analysis did not focus merely on one-to-one instances of center-industry relations; it developed the emergent patterns of interaction among all centers and companies to map the network of relations that characterizes much of the nanoscience and nanotechnology R&D environment today. First, a typology of linkages is presented according to the substance and form of relations. Second, a network of interactions shows the complementarities of linkages in the field between the centers in the program and a large subset of the companies in this industry. The results show that the nanotechnology industry has become interdependent with the research infrastructure constituted by these centers taken together.
Societal Dimensions of the Nano Science and Technology Center Program
Jan Youtie, Georgia Institute of Technology, jan.youtie@innovate.gatech.edu
The societal implications of nanotechnology are specifically included among the concerns that research centers must address under US policy for the field. These include, and go beyond, the development and composition of the R&D workforce. They also include exploring the potential environmental and health consequences of introducing nanotechnology-based products into mass markets, as well as pre-figuring the new social experiences these products might bring about. These issues represent difficult interdisciplinary problems that require interfaces not only between the natural and health sciences but also with the social sciences and humanities. We found that the interdisciplinary formulation of these problems has met with mixed results: much progress has been made on the educational and public diffusion side, but it has proven more difficult in areas where the concepts and theoretical frameworks remain at considerable cognitive distance from each other.

Session Title: Qualitative Inquiry in International Health
Multipaper Session 981 to be held in Oceanside on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Qualitative Methods TIG and the Health Evaluation TIG
Chair(s):
Norma Martinez-Rubin, Evaluation Focused Consulting, norma@evaluationfocused.com
Discussant(s):
Norma Martinez-Rubin, Evaluation Focused Consulting, norma@evaluationfocused.com
The Life Story of Drug-user Women in Tehran, Iran
Presenter(s):
Jila Mirlashari, Tehran University of Medical Sciences, jmirlashari@yahoo.com
Apo Demirkol, University of Sydney, demirkolster@gmail.com
Mahvash Salsali, Tehran University of Medical Sciences, m_salsali@hotmail.com
Hassan Rafiey, University of Social Welfare and Rehabilitation Science, hassan441015@gmail.com
Jahanfar Jahanbani, Tehran Islamic Azad University, jjahanbani@yahoo.com
Abstract: There is limited information on drug dependency among Iranian women, and drug use among women in Iran is most often hidden. The aim of this qualitative study is to explore the components that might play a role in the initiation of drug use among young Iranian women. Fourteen in-depth interviews were conducted with young drug-using women and their family members. Based on our data analysis, traumatic events during childhood; significant disconnection between these individuals, their families, and society; an aimless way of living; misinformation about drugs and addiction; continuous feelings of grief, loneliness, and helplessness; and having a drug-using husband or boyfriend were identified as important determinants of substance use. The results of this research suggest that dealing with a major problem such as drug dependency among women requires early intervention and comprehensive assessment of the context in which they live and use substances.
Effects of Social Injustice on Abnormal Mammography Follow-up among Low-income Women
Presenter(s):
Shelly-Ann Bowen, University of South Carolina, bowensk2@mailbox.sc.edu
Michael Byrd, University of South Carolina, mdbyrd01@mailbox.sc.edu
Chayah Stoneberg-Cooper, University of South Carolina, chayah.cooper@nyu.edu
Abstract: Objectives. A social justice perspective can help address inequitable health outcomes for women who receive abnormal breast screening results. This study explored the role of social injustice in disparities in the follow-up of abnormal mammography. Methods. A cross-sectional qualitative telephone study, designed to explore factors influencing the cognitive processing of an abnormal breast screening result, was conducted among low-income women participating in a free breast cancer screening program. Interviews were transcribed, and analysis was performed using a grounded theory approach to elicit psychosocial themes related to follow-up. Results. During the study period we interviewed 72% (16) of the women referred for case management. Findings revealed the impact of psychosocial context and of structural and cultural barriers on the cognitive representation of breast cancer and on the health-seeking behaviors of the women. Conclusions. Factors such as low SES, stage at diagnosis, delayed treatment, and structural and cultural barriers need to be addressed efficiently to eliminate disparities in breast cancer mortality.
Using Most Significant Change Methodology to Evaluate Impact of Scaling up of a Health Innovation in Four Countries
Presenter(s):
Susan Igras, Georgetown University, smi6@georgetown.edu
Elizabeth Salazar, Georgetown University, es336@georgetown.edu
Rebecka Lundgren, Georgetown University, lundgrer@georgetown.edu
Marie Mukabatsinda, Georgetown University, awarenessrda@rwanda1.net
Sekou Traore, Georgetown University, irhmali_straore@yahoo.fr
Donald Cruz, Georgetown University, irhguatemala@yahoo.com
Abstract: In a multi-year, multi-organizational process to scale up a new family planning (FP) method, M&E indicators measure the extent of the method's integration into norms, reporting, training, and other health systems elements. To learn about values, meanings, and unanticipated effects in stakeholder groups, and how those involved are affected, inductive methodologies play important roles. Most Significant Change (MSC), a participatory, story-based methodology, was adapted for use in evaluating the scale-up of integrating the Standard Days Method (SDM) into FP programs in Guatemala, Mali, Rwanda, and India. Organizations involved in introducing SDM were trained in MSC and then collected, analyzed, and selected top MSC stories at the user/client, provider, and program manager/policy maker levels. Over 100 stories from male and female scale-up participants were collected, triaged, analyzed, and later shared with FP stakeholders, generating new understandings of scale-up processes and effects and leading to adjustments in scale-up strategies.

Session Title: Planning and Conducting Needs Assessments in Health
Multipaper Session 982 to be held in Palisades on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Needs Assessment TIG and the Health Evaluation TIG
Chair(s):
Nicola Dawkins, ICF Macro, ndawkins@icfi.com
Discussant(s):
Sue Hamann, National Institutes of Health, sue.hamann@nih.gov
Needs Assessment for Program Planning
Presenter(s):
Ann Del Vecchio, Alpha Assessment Associates LLC, ann.delvecchio@gmail.com
Nadine Tafoya, Nadine Tafoya and Associates, nayanet2426@gmail.com
Abstract: A completed needs assessment usually comprises the first section of funding proposals (private, state, federal). Instructions and/or requirements for this section are usually minimal and do not reflect the amount of work necessary to conduct a community-wide needs assessment. Additionally, needs assessment data can form the foundation of an evaluation plan when a comparison to the baseline or pre-intervention state is needed to document outcomes. This paper describes the steps taken in seven local communities, as well as a statewide comparison assessment, to document the need for substance abuse prevention programs. The process focused on collecting data from a variety of sources, including epidemiological data from the state department of health, key informant interviews, crime and driving-while-intoxicated statistics, emergency room admissions, and youth focus groups. Prioritization of the needs used one of the methods described in Altschuld and White (2010).
A Needs Assessment of End-of-Life and Palliative Care Science: The Value and Challenges of a Multi-Method Approach
Presenter(s):
Amanda Greene, National Institutes of Health, amanda.greene@nih.gov
Jeri Miller, National Institutes of Health, jmiller@mail.nih.gov
Lisbeth Jarama, NOVA Research Company, ljarama@novaresearch.com
Abstract: This session will discuss the development of a multi-method government needs assessment and the challenges of implementing it. Various complementary methods are used to identify the scope of, and provide a national portrait of, the state of end-of-life and palliative care research and research funding trends, in order to enhance a coordinated, broad-based, and diversified approach to this research field among public and private institutions and the research community. To increase opportunities for various stakeholders to use the findings from this needs assessment, a multi-method approach is being employed, including analysis of federal and philanthropic databases, an analytic review of the literature, three online surveys, and key informant interviews with end-of-life and palliative care researchers and funders. Finally, a scoping exercise is being conducted in which experts review and map the collected data to identify gaps and to envision and prioritize innovative approaches to meeting needs.

Session Title: Social Network Analysis in Education: Evaluation of Impact, Growth and Collaboration
Multipaper Session 983 to be held in Palos Verdes A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Social Network Analysis TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Ashley T Brenner, IDA Science & Technology Policy Institute, a.t.brenner@gmail.com
Using Network Analysis to Inform a Case Study Approach: Assessing Influence and Impact in Federal Education Awards
Presenter(s):
Ashley T Brenner, IDA Science & Technology Policy Institute, a.t.brenner@gmail.com
Jason Gallo, IDA Science & Technology Policy Institute, jgallo@ida.org
Asha Balakrishnan, IDA Science & Technology Policy Institute, abalakri@ida.org
Gina Walejko, IDA Science & Technology Policy Institute, gwalejko@ida.org
Mario Nunez, IDA Science & Technology Policy Institute, mnunez@ida.org
Vanessa Pena, IDA Science & Technology Policy Institute, vpena@ida.org
Stephanie Shipp, IDA Science & Technology Policy Institute, sshipp@ida.org
Abstract: The IDA Science and Technology Policy Institute (STPI) investigated the collaborative networks of two National Science Foundation (NSF) award programs, Research and Evaluation on Education in Science and Engineering (REESE) and Discovery Research K-12 (DR K-12). We analyzed awardee collaboration networks within each program and between the two programs to understand both the influences on these networks and how these networks affect the creation, diffusion, and uptake of intellectual products created with program funding. We used a mixed methods approach, combining complementary qualitative and quantitative analyses, including bipartite social network analyses, descriptive statistical analyses, case studies, bibliometric analyses, and interviews. The goal was to examine how the networks of awardees funded by the REESE and DR K-12 programs affected the outputs of the awards and their adoption in the educational system. This presentation will focus on the methodology used to complete the project's case studies.
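For readers unfamiliar with bipartite network analysis, here is a minimal sketch with invented names (the investigator and award IDs below are hypothetical, not STPI data): a two-mode awardee-award network is projected onto the awardee side, so that two awardees are linked when they share an award, the kind of structure such analyses examine at much larger scale.

import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
awardees = ["pi_1", "pi_2", "pi_3"]   # hypothetical investigators
awards = ["REESE-001", "DRK12-042"]   # hypothetical award IDs
B.add_nodes_from(awardees, bipartite=0)
B.add_nodes_from(awards, bipartite=1)
B.add_edges_from([
    ("pi_1", "REESE-001"), ("pi_2", "REESE-001"),
    ("pi_2", "DRK12-042"), ("pi_3", "DRK12-042"),
])

# Project onto the awardee side: two awardees are tied when they share
# an award; edge weights count the number of shared awards.
G = bipartite.weighted_projected_graph(B, awardees)
print(list(G.edges(data=True)))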
The Effects of New Technology on the Growth of a Teaching and Learning Network
Presenter(s):
Roberta Spalter-Roth, American Sociological Association, spalter-roth@asanet.org
Jean Shin, American Sociological Association, shin@asanet.org
Olga Mayorova, American Sociological Association, mayorova@asanet.org
Abstract: This paper describes the findings from a summative evaluation of an innovative, interactive, digital teaching resources library in sociology (TRAILS). The purpose of TRAILS is to increase the usage and diffusion of cutting-edge teaching and learning materials to a more diverse audience, as well as to increase the network activities of TRAILS users. The evaluation tests theoretical models of the diffusion of innovations and of unanticipated gains from network participation by measuring the effects of TRAILS through a series of hypotheses about changes in the scope and diversity of TRAILS users and in the density and diversity of a network of producers and consumers of teaching and learning materials. The evaluation uses a quasi-experimental design to examine the changing demographic and institutional characteristics of TRAILS users and the increases or decreases in users' network activities. It provides a generalizable model for evaluating the effects of teaching and learning innovations.
Evaluating and Improving Teacher Collaboration With Social Network Analysis
Presenter(s):
Rebecca Woodland, University of Massachusetts, Amherst, rebecca.woodland@educ.umass.edu
Shannon Barry, University of Massachusetts, Amherst, skbarry17@gmail.com
Katrina Crotts, University of Massachusetts, Amherst, kmcrotts@gmail.com
Abstract: Social Network Analysis (SNA) is a statistical method that generates visual maps and enables the empirical examination of relationships between individuals and groups. Largely underutilized in educational evaluation, SNA is a powerful research methodology that can be used to improve communication and knowledge creation among teachers and to scale instructional innovation throughout a school. In this presentation, we will report how SNA has been used in the evaluation of teacher collaboration, specifically as a key component in the implementation of the Teacher Collaboration Improvement Framework (Gajda & Koliba, 2008). Session participants will learn how evaluators can use SNA to generate information in creative formats that school principals and K-12 superintendents (primary evaluation stakeholders) can use to: 1) inform decisions about how best to structure, strengthen, and support school- and district-level teacher collaboration, and 2) empirically examine the relationship between teacher collaboration, quality of instruction, and student achievement.
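To make the method concrete, a minimal sketch follows (the roster and ties are invented, not data from the study): it computes two measures commonly reported in collaboration analyses, network density and degree centrality, and draws the visual map SNA is known for.

import networkx as nx
import matplotlib.pyplot as plt

# Invented roster: an edge means two teachers report collaborating.
G = nx.Graph([("Ada", "Ben"), ("Ada", "Cal"), ("Ben", "Cal"),
              ("Cal", "Dee"), ("Dee", "Eli")])

print("density:", nx.density(G))                  # share of possible ties present
print("degree centrality:", nx.degree_centrality(G))

# The visual map of who collaborates with whom.
nx.draw_networkx(G)
plt.savefig("collaboration_map.png")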

Session Title: Issues of Race in Evaluation
Multipaper Session 984 to be held in Palos Verdes B on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Erika Taylor, Prince George's University, etaylorcre@aol.com
Evaluating Whether or Not Stereotype Threat Could Be Occurring in California's Child Welfare Common Core Training Program
Presenter(s):
Cynthia Parry, Cynthia F Parry Associates, cfparry@msn.com
James Coloma, San Diego State University School, jcoloma@projects.sdsu.edu
Leslie Zeitler, University of California, Berkeley, lzeitler@berkeley.edu
Abstract: In California's Child Welfare Common Core Training Program, Caucasian trainees have often scored higher on posttests, and occasionally have increased their scores more from pretest to posttest, than trainees in one or more other racial/ethnic groups. However, previous analyses in which items showing differential functioning by race were identified and excluded did not always show a reduction or elimination of racial/ethnic differences. Thus, stereotype threat is being investigated as a possible phenomenon that might explain these differences. This presentation will describe the efforts that have been made to eliminate bias in California's Child Welfare Common Core knowledge tests, the purpose of this study, the type of analysis conducted, the research findings, and the application of the results.
Art and Racial Identity: Evaluating a Culturally Responsive Arts Education Program in an Urban School District
Presenter(s):
Helga Stokes, Duquesne University, stokesh@duq.edu
Rodney Hopson, Duquesne University, hopson@duq.edu
Gretchen Generett, Duquesne University, generettg@duq.edu
Angela Allie, University of Pittsburgh, adallie1@gmail.com
Kaleigh Bantum, kaleigh.bantum@gmail.com
Tyra Good, Duquesne University, goodt@duq.edu
Abstract: Students in three inner-city public elementary schools in a northeastern US city participate in a Culturally Responsive Arts Education Project (CRAE), in which African and African Diaspora art is integrated across the curriculum. The expectations are that the project raises student engagement in school, strengthens a positive racial identity among the largely African-American student body, increases cultural responsiveness among the predominantly Caucasian teaching staff, and consequently contributes to closing the achievement gap. Local artists teach their respective disciplines and collaborate with art teachers and classroom teachers to integrate the arts into the curriculum. The evaluation analyzes academic and arts learning, behavior, engagement, relationships, racial identity, and instructional climate. One key question involves racial and ethnic identity formation. In this paper, the presenters explore the evaluation of racial, ethnic, and cultural identity formation and how youth in the program perceived this identity as a result of their artistic involvement.
Race or Racism? Value Implications and Practical Solutions
Presenter(s):
Kelly Robertson, Western Michigan University, kelly.robertson@wmich.edu
Diane Rogers, Western Michigan University, diane.rogers@wmich.edu
Abstract: While data on race are frequently collected as part of evaluation, what is it that we are really trying to measure? Many evaluations appear to use data on race as an unspoken indicator of racism, yet failing to make this explicit, and using race as the only indicator of racism, has value implications that can promote racial stereotypes and lower the quality of evaluative findings. We will discuss the value implications associated with different conceptions of racism, along with practical strategies, indicators, and tools to measure it. We will also present the value implications of using race versus the social construct of race when trying to assess the impact of racism, and practical solutions for measuring the latter.

Session Title: Evaluating Professional Development and Program Improvement by Using Mixed Methods
Multipaper Session 985 to be held in Redondo on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Mixed Methods Evaluation TIG
Chair(s):
Chad Green, Loudoun County Public Schools, chad.green@loudoun.k12.va.us
The Best Laid Plans... Often Go Astray: Conducting a Mixed-methods Evaluation of a Changing Project
Presenter(s):
Kimberly Cowley, Edvantia, kim.cowley@edvantia.org
Kimberly Good, Edvantia, kimberly.good@edvantia.org
Nicole Finch, Edvantia, nicole.finch@edvantia.org
Abstract: The Appalachia Regional Comprehensive Center at Edvantia began facilitating a Teacher/Leader Effectiveness Community of Practice in 2010 for its state education agency clients across the five-state region. A mixed-methods evaluation plan was developed for determining outcomes and the value that regional stakeholders perceived in their participation. Data collection methods were to include meeting observations, a survey to establish members' expectations for impact and their pre/post level of confidence in achieving that impact, informal interviews conducted during the meetings, and semi-structured interviews with staff and members. However, midway through the project year, clients' needs and interests changed, necessitating a programmatic shift from regional-level to state-specific efforts. This presentation highlights how even the best-laid evaluation plans can change simply as a result of programmatic adjustments, and how we realigned our evaluation design to the extent possible in response to those changes (along with lessons learned along the way).
Evaluation of the Colorado Clinical and Translational Science Institute's (CCTSI) Leadership in Innovative Team Science (LITeS) Program: A Mixed-Methods Approach
Presenter(s):
Marc Brodersen, University of Colorado, Denver, marc.brodersen@ucdenver.edu
Anne Libby, University of Colorado, Denver, anne.libby@ucdenver.edu
Abstract: This paper details the mixed-methods approach used to evaluate the CCTSI's Leadership in Innovative Team Science (LITeS) program. The program was designed to provide leadership, teamwork, and mentoring training to principal investigators and program directors of federally funded T32 and K12 training programs, as well as relevant deans of the university. The training included eight full-day workshops scheduled in two-day blocks that spanned the academic calendar from September to May. The evaluation consisted of surveys administered to all participants at the end of each training block, as well as a one-year follow-up. These surveys were designed to assess knowledge gained and utilized in reference to domains relevant to NIH's Roadmap to Translational Research. Structured interviews were then conducted with participating deans to determine the effectiveness of the program in developing skills in these domains, and the value of the program in training leaders in medical research at the university.
Assessing Post-training Behaviors of Suicide Prevention Training Attendees Using Mixed Methods Over Time
Presenter(s):
Brandee B Hicks, ICF Macro, bbrewer@icfi.com
Adrienne G Pica, ICF Macro, gpica@icfi.com
Christine M Walrath, ICF Macro, cwalrath@icfi.com
Richard McKeon, Substance Abuse and Mental Health Services Administration, richard.mckeon@samhsa.hhs.gov
Abstract: Since 2004, the Garrett Lee Smith (GLS) Suicide Prevention Initiative has supported prevention activities in 65 different States, Tribes, and Territories and on 78 different college campuses across the United States. The implementation of training activities has been a key component of the GLS initiative since its inception, with over 350,000 individuals trained. As part of this SAMHSA-funded initiative, a multi-method evaluation was developed to learn about prevention strategies used across grantees. This paper will describe three methods that were developed to assess trainee outcomes as the evaluation evolved, and the resources used to inform this process. The measures include: 1) a post-test specific to the training type to assess knowledge, self-efficacy, and intention to use the training; 2) a qualitative follow-up interview to learn about the trainee experience and utilization of knowledge and skills gained; and 3) a quantitative follow-up survey that examines trainees' retention and utilization of training material.
Performance Evaluation on National Policies and Strategies for Child Development
Presenter(s):
Ujsara Prasertsin, Chulalongkorn University, ubib_p@hotmail.com
Chirdsak Khovasint, Srinakharinwirot University, chird91@gmail.com
Somjet Viyakarn, Silpakorn University
Abstract: The purpose of this research was to evaluate the effectiveness, efficiency, and results of management and performance under national policies and strategies for child development, using mixed methods evaluation. The evaluation results showed that: context: provincial officers ran the operation as general work, not as a mission emerging from the strategies; input: the plan had standardized target groups, and its budget was adequately allocated; process: the plans were appropriate and effective, and monitoring and quality assessment were carried out; effectiveness: the percentage of target-group children did not decrease as planned; impact: plan officers concentrated on target-group children's behavior, rather than on skill or proficiency improvement; sustainability: society needed the planned operations and offered help and support for cooperation; transportability: the central national strategies and plans could not be operated effectively, so each province was expected to run its own plans and strategies with support from the center.

Session Title: Innovation Through Institutional Integration: Multiple Stakeholders and Multiple Methods of Evaluation
Panel Session 986 to be held in Salinas on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
M David Miller, University of Florida, dmiller@coe.ufl.edu
Abstract: This panel examines the issues and problems in evaluating a program in a university setting that is intended to make innovations within the institution that will be sustained after grant support ends. The Innovation through Institutional Integration (I-Cubed) program at the University of Florida is a five-year project funded by the National Science Foundation (NSF) to foster integration of all student-based research and training programs in the STEM (Science, Technology, Engineering, Mathematics), including SBE (Social, Behavioral & Economics), disciplines. The papers in this symposium examine the complexities of evaluating this type of program. The papers focus on (a) defining the stakeholders and their roles, (b) survey research to examine needs, process, and impact, and (c) case studies that allow an in-depth examination of the mechanisms that lead to change in the institution.
Innovation Through Institutional Integration: Identifying Stakeholders and Defining Their Roles
Nicola Kernaghan, University of Florida, nikkik@ufic.ufl.edu
In the first paper, we discuss the identification of the multiple stakeholders in the program and their roles. The program includes a Graduate Student Advisory Council (GSAC), which bears the primary responsibility for guiding the program in defining the goals and needs of the graduate students in the STEM and SBE disciplines (the program was designed to be bottom-up); an Internal Advisory Board (IAB) comprised of faculty with NSF training grants and faculty with knowledge to help in planning and implementing the program; and an External Advisory Board (EAB) comprised of business and political leaders who are involved in the STEM disciplines. Other stakeholders include the graduate students affected by the program, undergraduates and postdocs, and faculty and staff. The grant also includes internal and external evaluators.
Innovation Through Institutional Integration: Evaluating Needs, Process and Impact
Lidong Zhang, University of Florida, zhld02@gmail.com
Ou Zhang, University of Florida, zhangou888@gmail.com
In this second paper, we describe the traditional survey methods that are being used to assess the needs and the impact of the program. Using these methods, two years of annual data have been collected to assess the needs of the graduate students funded on NSF grants (approximately 300 students surveyed per year), the faculty PIs and co-PIs on NSF grants (approximately 400 faculty surveyed per year), and the graduate coordinators in STEM and SBE disciplines (approximately 125 per year). These data have been used to inform the GSAC, IAB, and EAB about the needs of the stakeholders and their participation in the program. In addition, many of the events sponsored by the project include separate survey-based evaluations to monitor program process and impact. This paper will also discuss the results of these evaluations and the issues in conducting them.
Innovation Through Institutional Integration: Case Studies
M David Miller, University of Florida, dmiller@coe.ufl.edu
In the third and final paper, we discuss the limitations of survey methods in assessing the integration of programs into the institution. As a result, we have spent the last year conducting case studies to better understand the mechanisms that lead to institutional changes that will be sustained. Three case studies have been conducted, and the issues in assessing institutional integration are discussed. The three case studies focus on (a) the integration of more grant-writing training in STEM disciplines within the current university library structure, (b) issues and problems in sustaining the GSAC and its role in institutional change, and (c) the institutionalization of collaborative programs in STEM disciplines. Each of the case studies included multiple methods of data collection, both qualitative and quantitative.

Session Title: Investing in Learning, Investing in Change: Measuring the Impact of Advocacy
Panel Session 987 to be held in San Clemente on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Nancy Csuti, The Colorado Trust, nancy@coloradotrust.org
Abstract: While funders have increasingly added advocacy funding to their grantmaking portfolios, how to evaluate the impact of this funding continues to be debated. This session will discuss concrete examples of options for learning about, evaluating, and documenting the impact of advocacy funding in more than 10 states. The session will cover what information is collected and how. It will also discuss the multiple audiences for this information - grantees, funders, advocacy groups - and how they might best use the knowledge to inform their work and achieve bigger impacts. At the end of this session, attendees will better understand why funders invest in evaluation of advocacy, as well as methods for doing so in a way that is meaningful to both funders and advocacy organizations.
The Investment: Advocacy Evaluation Framework
Tanya Beer, Center for Evaluation Innovation, tbeer@evaluationinnovation.org
Tanya Beer will frame the panel discussion with an overview of the current "state of the field" in advocacy evaluation, offering a set of guiding principles for the design, approach, and use of real-time advocacy evaluation. Summarizing insights that have emerged in recent years from the literature and from a burgeoning community of practice of advocacy evaluators, her presentation will highlight common methodological, political, and capacity challenges that evaluators face. Finally, as a preface to the concrete examples provided by the other panelists, she'll highlight the primary obstacles that advocates and funders encounter when trying to incorporate real-time advocacy evaluation findings into strategic decision-making.
Advocacy Evaluation: An Investment in Learning
Ehren Reed, Innovation Network, ereed@innonet.org
Ehren Reed will share two examples of how an advocacy evaluation can be used to promote and inform learning, both within foundations and across grantees. Drawing from his experiences leading a state-level effort to expand health coverage and improve health care, as well as a federal-level effort to promote immigration reform, Ehren will share insights on how an evaluation can be structured to promote learning for all stakeholders. Topics discussed in this session will include: What has worked well, and what are the challenges? What kinds of evaluation questions, approaches, and methods are being used? What reporting mechanisms help promote learning and evaluation use? How can an evaluation effectively balance competing expectations to promote learning for foundations AND grantees?
A Good Investment? A Funder's Perspective on Advocacy Evaluation
Nancy Csuti, The Colorado Trust, nancy@coloradotrust.org
As a representative of a foundation that funds advocacy, Nancy Csuti will discuss the foundation's reasons for investing in evaluating advocacy grants, what the board and staff expected to achieve through this investment, and what the ultimate outcome was. Issues of capacity on the part of grantees, evaluators, and foundation staff will be discussed, as well as lessons learned by the funder about what is required to be an informed consumer of such evaluations. How lessons from the multi-year investment continue to inform future work will also be highlighted.
Looking Back: Return on Investments in Advocacy
Lisa Ranghelli, National Committee for Responsive Philanthropy, lranghelli@ncrp.org
Lisa Ranghelli will share lessons about evaluating advocacy impact from NCRP's Grantmaking for Community Impact Project. For the project, Lisa developed a retrospective methodology to measure the impacts of foundation-funded advocacy, organizing, and civic engagement, and implemented it with clusters of nonprofits in 7 regions covering 13 states. She will discuss the opportunities and challenges of this type of assessment approach, which used both quantitative and qualitative methods. She will also reflect on the usefulness of the tool, process, and findings for funders and advocacy nonprofits.

Session Title: Collaborative Development and Implementation of a Cross-site Evaluation of a National Systems and Policy Change Initiative Across Diverse Sites and Approaches
Panel Session 988 to be held in San Simeon A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Laurie Carpenter, University of Michigan, lauriemc@umich.edu
Discussant(s):
Martha Quinn, University of Michigan, marthaq@umich.edu
Abstract: In 2007, the W.K. Kellogg Foundation funded nine collaboratives in diverse communities around the country - now part of the Food & Community program - to develop action plans for systems and policy change that would increase access to healthy, locally grown food and to safe and inviting options for physical activity for vulnerable children and their families. The evaluators and leaders from the nine collaboratives worked together with evaluators from the University of Michigan to develop a cross-site evaluation to measure the impact of the initiative on children and families living in each community, and also across all communities. The panelists will provide participants with an intimate look into the work undertaken by the evaluators to integrate community members and other stakeholders into the implementation of the evaluation, using the collaboratively developed cross-site evaluation tools, and to build local capacity for the evaluation of systems and policy change efforts.
Using Evaluation Findings to Prioritize and Further Community-based Systems and Policy Change Work
Rickie Brawer, Thomas Jefferson University and Hospital, rickie.brawer@jeffersonhospital.org
Evaluators working with the Philadelphia Urban Food & Fitness Alliance (PUFFA) took an active role in working with PUFFA leadership to move the work of the collaborative forward in changing systems and policies in the food and active living environments of targeted neighborhoods in Philadelphia. This presentation will describe the various methods the evaluators used to help the leadership prioritize and move the planning process forward, including a number of diverse data collection methods and the presentation of evaluation findings to collaborative committees and the community.
An Overview of the Development of the Food and Fitness Cross-site Evaluation
Laurie Carpenter, University of Michigan, lauriemc@umich.edu
Laurie Lachance, University of Michigan, lauriel@umich.edu
Martha Quinn, University of Michigan, marthaq@umich.edu
Maggie Wilkin, University of Michigan, mwilkin@umich.edu
Noreen Clark, University of Michigan, nmclark@umich.edu
The collaborative process used to develop the cross-site evaluation of the Food & Fitness initiative involved the evaluators and leaders from the nine collaboratives and evaluators from the Center for Managing Chronic Disease at the University of Michigan. During the planning phase of the initiative, evaluators, guided by an evaluation advisory group, decided upon the most important elements to consider in the evaluation, and worked to integrate those elements into relevant and useful tools to utilize across the nine sites. The main components of the cross-site evaluation include information related to collaborative partners, resources leveraged, and details regarding the major systems and policy change efforts of the collaboratives. Once the initial drafts of tools were developed and pilot tested by the local evaluation teams in their communities, additional refinements were made to arrive at the final set of cross-site tools, with which the evaluators collected data in 2009 and 2010.
Cross-site Evaluation as a Way to Create Reflection Opportunities for and Build the Capacity and Sustainability of Initiatives
Mary Emery, South Dakota State University, mary.emery@sdstate.edu
Evaluators working in a cross-site evaluation context are challenged to meet the data collection requirements of the cross-site effort while at the same time attending to the capacity-building aspects inherent in formative or developmental evaluation approaches. Success in broadening and deepening the initiative's work necessarily means additional time and effort in data collection. At this stage in community initiative work, questions about the sustainability of the effort, and the hoped-for increase in capacity to sustain the effort into the future, also require time and effort. This presentation will discuss our experience with successful and not-so-successful strategies for assisting the Northeast Iowa Food & Fitness Initiative to engage in reflection leading to improvements in project operation. The presentation will also identify key challenges encountered in tackling evaluation issues related to capacity building and sustainability within a cross-site context.
Participatory Assessment and Evaluation with Community Youth: Understanding the Food and Active Living Environments in New York City
Heewon Lee, Columbia University, hl2001@tc.columbia.edu
Pam Koch, Columbia University, pak14@tc.columbia.edu
Isobel Contento, Columbia University, irc6@tc.columbia.edu
Youth have been active members of the New York City Food & Fitness Partnership evaluation team, involved in both assessment and evaluation activities. For our assessment of foods available in retail outlets in Central Brooklyn, youth interviewed store managers and completed a modified version of the Nutrition Environment Measures Survey (NEMS). The youth learned how food availability and pricing varied across neighborhoods, and the findings were presented to a city council member interested in improving food availability in the community. Another group of youth advocates assessed the activities available at block parties in their community, and these data helped the youth make recommendations for turning the closed streets into useful spaces for active recreation. Youth involvement expanded our ability to collect data for both assessment and evaluation and created a valuable learning experience for the youth involved.
Using Cross-Site Evaluation Tools as an Entry Point for Participatory Evaluation
Mia Luluquisen, Alameda County Public Health Department, mia.luluquisen@acgov.org
Anaa Reese, Alameda County Public Health Department, anaa.reese@acgov.org
Lauren Pettis, Alameda County Public Health Department, lauren.pettis@acgov.org
Oakland's HOPE (Health for Oakland's People and Environment) Collaborative used a participatory evaluation approach for piloting and utilizing the cross-site evaluation Collaborative Partners Form. During the initial 2-year planning grant, the HOPE Collaborative formed an evaluation team that included members of four Community Action Teams that worked closely with the project evaluators to develop the data collection process, data analysis and interpretation of results. The evaluation team conducted the data collection and came together to analyze the data and provide preliminary results. The team formulated a summary and interpretation of the data that they presented to the full HOPE Collaborative. The findings from the Collaborative Partners tool helped to 1) inform the implementation strategies of the HOPE Collaborative, 2) shape the final versions of the policy and systems change evaluation instruments and 3) model a participatory evaluation process for other W.K. Kellogg Food & Fitness sites.
Integrating Cross-site and Site-specific Evaluations to Build Capacity Toward the Evaluation of Systems and Policy Change Efforts
Chris Navin, Navin Associates, navin@navinassociates.com
Marian Milbauer, Navin Associates, milbauer@navinassociates.com
The Evaluation Committee of the Boston Collaborative for Food & Fitness (BCFF) draws its members from each of the other content area subcommittees, and is overseen by a team of evaluators working with the collaborative as independent consultants. The Evaluation Committee is tasked with overseeing the implementation of both the cross-site and site-specific evaluations, and as the initiative has moved from planning to implementation, the role of the committee has evolved. This presentation will describe the nature of BCFF and the role of its Evaluation Committee, design processes for the evaluation of the work of the collaborative, resource constraints under which the committee must operate to implement its responsibilities, and the successes of and challenges to implementation of the evaluation.
Using Cross-site Evaluation Tools With Collaborative Leadership and Community as a Means of Furthering the Work of the Collaborative
Catherine Sands, Partnership in Practice, chsands@fertilegroundschools.org
This presentation describes the participatory methods of using cross-site evaluation tools with collaborative leadership and community as a means of furthering the work of the Holyoke Food & Fitness Policy Council. Through the cross-site and process evaluation, we have created an evaluation learning environment in which an intergenerational, racially diverse collaborative membership shares different perspectives in dialogue and thinks together as a group. Holyoke's cross-site implementation is enhanced through multiple, participatory, flexible approaches designed to address real issues. The cross-site evaluation is used to focus the development of leadership, community knowledge, and evaluation skills, and to provide guidance in developing strategy and readiness to bring to bear when windows of opportunity for systems and policy change arise.

Session Title: Designing Professional Development Evaluation to Assess Impact
Multipaper Session 989 to be held in San Simeon B on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Sheila Robinson Kohn, Greece Central School District, sbkohn@rochester.rr.com
Discussant(s):
James Sass, Rio Hondo College, jimsass@earthlink.net
Lesson Study Initiatives: The Importance of Participatory Evaluation Processes for Science MSP Professional Development and Their Impact on Instruction and Student Outcomes
Presenter(s):
Kathy Gullie, State University at Albany, kp9854@albany.edu
Abstract: There are multiple considerations for evaluators focusing on grant-funded program evaluation. These include maintaining a clear focus on the desired outcomes; recognizing the critical importance of specific methodologies within the evaluation process; and determining whether the methods used as part of initiatives make a difference in expected and actual outcomes. This paper investigates these considerations by looking at a Mathematics Science Partnership grant initiative focused on the teaching of science using collaborative reflection methods, in particular Lesson Study. The research examines real examples of the use of the Lesson Study method for improving science instruction, particularly at the primary level. It supports the educative value of this participatory evaluation process through evidence of teacher science content acquisition, teacher interactions and self-evaluation during science lessons, and classroom observations conducted after involvement in professional development and lesson study discussions, and it relates these to elementary students' science outcomes.
Evaluating American History Teachers' Transfer of Learning via Classroom Observations
Presenter(s):
Karen Kortecamp, The George Washington University, karenkor@gwu.edu
Abstract: This presentation focuses on the utilization of a collaborative process in an education setting to engage evaluation stakeholders in 1) designing methods to measure the impact of teacher professional development on teacher practices, 2) developing an observation protocol to measure teachers' transfer of pedagogical content knowledge, and 3) reflecting on the findings. The observation protocol will be shared and participants will be invited to explore how it can be adapted to accommodate a variety of subject specific content and pedagogy.
Integrating Evaluation and Research Activities Across Partner Organizations: Collaboration in Discovery Research K-12
Presenter(s):
Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
Sheila A Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
Joseph Taylor, Biological Sciences Curriculum Study, jtaylor@bscs.org
Chris Wilson, Biological Sciences Curriculum Study, cwilson@bscs.org
Nancy Landes, Biological Sciences Curriculum Study, nlandes@bscs.org
Abstract: McREL is working with BSCS to provide external evaluation of a scale-up Discovery Research K-12 project. Through a discussion of our approach to this work, the presentation of preliminary findings, and reflection on different facets of evaluation utilization, participants will learn about the project and our successful approach for weaving together development, research, and evaluation. Topics include: the roles of the evaluators and researchers on a DR K-12 project; how BSCS and McREL worked together to integrate the roles of research and evaluation in STeLLA; the research design, and how aspects of the design became opportunities for collaboration and feedback between the partners; the evaluation design, including evaluation questions and sub-questions and instrument development; research and evaluation findings from the pilot year; how the pilot evaluation findings were used by BSCS developers and researchers; and the role of the partnership in supporting utilization of findings.
Assessing the Effects of a Coaching Model of Professional Development on Teaching Practices: An Instrument Validity Study
Presenter(s):
Celina Lee Chao, University of California, Los Angeles, celina.lee@gmail.com
Lisa Dillman, University of California, Los Angeles, ldillman@ucla.edu
Nicole Gerardi, University of California, Los Angeles, gerardi_nicole@yahoo.com
Abstract: The purpose of this study was to validate and assess the reliability of a survey instrument that measures constructs representing the anticipated outcomes of a coaching-based model of teacher professional development. These constructs are hypothesized to provide intermediary predictive information about teacher effectiveness and, eventually, student achievement. They are: instructional efficacy, leadership, collegiality, collaboration, and commitment to ongoing learning. The data collected and reported in this paper are from 230 K-12 teachers across a large urban school district in Southern California. The measure was developed by a panel of experts over several years, and piloted both informally and formally with the population of interest. Results from a principal components analysis and reliability investigation are reported.
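For illustration, the reliability side of such an instrument study can be sketched in Python from an (n respondents x k items) response matrix. This is a generic sketch of Cronbach's alpha and a principal components solution, not the authors' analysis; the data shape is assumed.

    import numpy as np

    def cronbach_alpha(items):
        # items: (n_respondents, k_items) array of scale responses
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
        return k / (k - 1) * (1 - item_var / total_var)

    def principal_components(items):
        # Eigendecomposition of the item correlation matrix; components with
        # eigenvalues > 1 are conventionally retained (Kaiser criterion).
        R = np.corrcoef(items, rowvar=False)
        evals, evecs = np.linalg.eigh(R)
        order = np.argsort(evals)[::-1]
        return evals[order], evecs[:, order]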

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: New Venues for Practical Experience in Evaluation: Cultivation of Evaluator Competencies Within a Graduate School of Medicine
Roundtable Presentation 990 to be held in Santa Barbara on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Patrick Barlow, University of Tennessee, Knoxville, pbarlow1@utk.edu
Eric Heidel, University of Tennessee, Knoxville, rheidel@utk.edu
Alison Lockett, University of Tennessee, alison@tennessee.edu
Tiffany Smith, University of Tennessee, Knoxville, tsmith92@utk.edu
Abstract: In the proposed roundtable, three graduate students working towards a doctoral degree in evaluation, statistics, and measurement will facilitate a discussion on the development of specific evaluator competencies within the context of working as research consultants at the University of Tennessee Graduate School of Medicine. The presenters will share their experiences in these assistantship positions in addition to some of the unique challenges presented by working at a large teaching hospital. Moreover, the presenters will actively engage attendees in discussing methods for further integrating graduate student evaluators within medical fields in a way that will provide greater benefits to evaluators' graduate education and subsequent research agendas within hospitals. Lastly, the presenters will discuss current and potential evaluation opportunities in existing graduate schools of medicine.
Roundtable Rotation II: Student Perspectives on Balancing Practical and Conceptual Aspects of Evaluation-focused Graduate Education
Roundtable Presentation 990 to be held in Santa Barbara on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Graduate Student and New Evaluator TIG
Presenter(s):
Susanne Kaesbauer, University of Tennessee, Knoxville, skaesbau@utk.edu
Jason Black, University of Tennessee, Knoxville, jbalck21@utk.edu
Ann Cisney-Booth, University of Tennessee, acisneybooth@utk.edu
Abstract: While many graduate students struggle to find balance among the various aspects of their lives and careers, graduate students in evaluation-focused programs face additional challenges. Students must assimilate diverse theoretical perspectives and apply wide-ranging skill sets across a multitude of situations and roles. These unique features of evaluation can make it a particularly exciting and especially challenging area of study. This roundtable will detail the struggles and successes of three graduate students with time management, planning, coursework/scheduling demands, stress, and comfort with uncertainty, as well as their respective creative solutions for finding their own equilibrium. We anticipate that discussion with attendees will highlight additional challenges and solutions in these and other areas. Sharing struggles and solutions can be of value to other graduate students and faculty members, as solution strategies can be adopted and adapted to other settings.

Session Title: Weighting and Measurement in Evaluation Quality for Educational Arenas
Multipaper Session 991 to be held in Santa Monica on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Karen Larwin, Youngstown State University, khlarwin@ysu.com
Evaluating Measurement Invariance in Educational Settings
Presenter(s):
Lihua Xu, University of Central Florida, lihua.xu@ucf.edu
Abstract: In the world of education, tests and mental measurements are used extensively to assess particular student attributes (e.g., students' degree of achievement motivation or level of anxiety). Educational researchers often compare the effects of group differences in these attributes or constructs on academic achievement in an attempt to understand the possible causes of achievement gaps in various subject areas. However, one important concern raised within the last few years is whether the groups compared (e.g., gender, ethnicity, age) interpret the measurement the same way. If inconsistent interpretation of a measurement exists between groups, the results of group comparisons at the level of observed-variable means are misleading. Therefore, this paper will discuss the important concept of measurement invariance and the steps used to evaluate invariance, ending with a step-by-step illustration using cross-cultural data.
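Measurement invariance is typically evaluated by fitting a series of increasingly constrained multi-group confirmatory factor models (configural, then metric, then scalar) and comparing nested fits. A minimal Python sketch of the nested-model comparison follows; the chi-square values and degrees of freedom are assumed to come from CFA software fitted elsewhere, and the numbers shown are hypothetical.

    from scipy.stats import chi2

    def chi2_difference(chi2_constrained, df_constrained, chi2_free, df_free):
        # Chi-square difference (likelihood-ratio) test between nested models,
        # e.g., metric invariance (equal loadings) vs. the configural model.
        d_chi2 = chi2_constrained - chi2_free
        d_df = df_constrained - df_free
        return d_chi2, d_df, chi2.sf(d_chi2, d_df)

    # Hypothetical fit statistics: a significant p-value means the equality
    # constraints worsen fit, i.e., invariance fails at that step.
    print(chi2_difference(312.4, 170, 298.7, 164))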
It's all Relative: Applying Relative Weight Analysis to Understand and Improve Programs, Practices and Data Collection Methods
Presenter(s):
Emily Hoole, Center for Creative Leadership, hoolee@ccl.org
Abstract: This session will focus on how evaluators can use Relative Weight Analysis (RWA) to go beyond multiple regression in understanding the relative importance of predictors in explaining an outcome, especially when the predictors are correlated. Using several examples of how this method can be used to understand social phenomena, improve programs, and improve survey design, this introduction will add another tool to the evaluator's quantitative toolbox. At the Center for Creative Leadership, relative weight analysis has been used to understand and improve the overall participant experience, identify the most critical questions on an end-of-program survey, explore patient satisfaction in a public health clinic, and understand which elements of feedback coaching are most effective. These results provide deeper insights and actionable data for evaluators and stakeholders. Opportunities and challenges in using the method will be explored, as well as practical aspects of applying RWA to a dataset.
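One widely used formulation is Johnson's (2000) relative weights, which replaces the correlated predictors with their closest orthogonal counterparts and then partitions R-squared among the original predictors. The NumPy sketch below illustrates that method generically; it is not the presenter's implementation, and the data are assumed to be supplied by the caller.

    import numpy as np

    def relative_weights(X, y):
        """Johnson's (2000) relative weights for correlated predictors.
        X: (n, p) predictor matrix; y: (n,) outcome. Returns raw weights
        (which sum to the model R-squared) and rescaled percentages."""
        Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
        ys = (y - y.mean()) / y.std(ddof=1)
        n = len(ys)
        R = np.corrcoef(Xs, rowvar=False)               # predictor intercorrelations
        evals, evecs = np.linalg.eigh(R)
        evals = np.clip(evals, 0, None)                 # guard tiny negatives
        lam = evecs @ np.diag(np.sqrt(evals)) @ evecs.T # loadings of X on orthogonal Z
        rxy = Xs.T @ ys / (n - 1)                       # predictor-outcome correlations
        beta = np.linalg.solve(lam, rxy)                # regression of y on Z
        raw = (lam ** 2) @ (beta ** 2)
        return raw, 100 * raw / raw.sum()

    # Usage: raw, pct = relative_weights(X, y); pct gives each predictor's
    # share of explained variance even when predictors are correlated.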
A Posteriori Evaluation Of Assessments: Comparison of Hierarchical Cluster Analysis and Confirmatory Factor Analysis
Presenter(s):
Vasanthi Rao, University of South Carolina, raov@mailbox.sc.edu
Min Zhu, University of South Carolina, helen970114@gmail.com
Robert Johnson, University of South Carolina, rjohnson@mailbox.sc.edu
Kristina Ayers Paul, University of South Carolina, paulka@mailbox.sc.edu
Abstract: Assessments used in large-scale testing programs are carefully designed to measure a construct. Efforts in the developmental stages of an assessment involve expert review through statistical, curriculum, administrative, and political lenses. Once the test is administered, post-hoc evaluations of assessment quality focus on construct-related issues such as item difficulty, discrimination, etc. In this paper, we propose that the examination also include rigorous scrutiny of ancillary claims made by the assessment, such as the cognitive levels of test items, to confirm whether they have been met. Using hierarchical cluster analysis (HCA) and confirmatory factor analysis (CFA), we examine a visual arts assessment (N = 4,386) that was designed to contain an allotted percentage of test items at different levels of cognitive skill according to the original Bloom's taxonomy (Bloom, 1956). We compare and contrast HCA and CFA as an example of post-hoc analysis of assessments to validate this claim.
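As a rough sketch of the HCA side of such a post-hoc analysis, items can be clustered on one minus their intercorrelations to check whether the empirical groupings match the intended cognitive levels. The Python below is a generic illustration, not the authors' code, and assumes an (n examinees x k items) score matrix.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def cluster_items(responses, n_clusters):
        # Items targeting the same cognitive level should correlate, so we
        # cluster items using 1 - r as a distance.
        R = np.corrcoef(responses, rowvar=False)    # item-by-item correlations
        dist = 1 - R[np.triu_indices_from(R, k=1)]  # condensed distance vector
        Z = linkage(dist, method="average")
        # Cut the dendrogram into the number of intended cognitive levels and
        # compare the resulting item groupings with the test blueprint.
        return fcluster(Z, t=n_clusters, criterion="maxclust")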

Session Title: Measuring Quality in Child Care and Family Support Programs: Tobacco Tax Funds at Work
Multipaper Session 992 to be held in Sunset on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Margaret Polinsky, Parents Anonymous Inc, ppolinsky@parentsanonymous.org
Discussant(s):
Louis Thiessen Love, Uhlich Children's Advantage Network, lovel@ucanchicago.org
Lessons Learned in Evaluating Evidence-based Practices in Kansas
Presenter(s):
Karin Chang-Rios, University of Kansas, kcr@ku.edu
Annie McKay, University of Kansas, amckay@ku.edu
Abstract: The identification and implementation of evidence-based practices (EBP) have become priority issues for funders and stakeholders. Evaluators are increasingly being asked to assess the quality and strength of evidence of effectiveness as part of their work with programs. This paper examines the development and use of an evidence-based practice rating system in Kansas. For the past five years, this rating system has been used to assess programs funded through the Children's Initiative Fund. The Children's Cabinet and Trust Fund uses the EBP ratings, in conjunction with service delivery and outcome data, to inform their accountability process and guide decision-making regarding future funding. The process by which the rating system was developed, its functionality within the context of the Children's Cabinet's accountability framework and challenges associated with its use in evaluating programs at the state and local level will be discussed in the paper.
Narrowing the Gap: Examining Changes in the Social And Economic Gaps Among a Random Sample of Participants in Programs Funded to Provide Direct Services to Families
Presenter(s):
Erika Takada, Harder+Company Community Research, etakada@harderco.com
Raul Martinez, Harder+Company Community Research, rmartinez@harderco.com
David Dobrowski, First 5 Monterey County, david@first5monterey.org
Abstract: First 5 Monterey County implemented a longitudinal parent interview evaluation study that allowed evaluators to examine how the relationship between social and economic family characteristics and program and child outcomes changed over one year. The study involved the administration of a parent interview with 172 randomly selected newly enrolled families. Baseline data were collected in 2008-09, and the same parents were interviewed again one year later. There were three main findings: 1) intensive interventions were accessed at a significantly (p<.01) higher rate than at baseline by Latino families with less than a high school education, families whose primary language is Spanish, and families with annual household incomes below $30,000; 2) families with those same characteristics were reading and engaging with their children significantly more (p<.001); and 3) the social and economic disparities gap seen at baseline across program indicators appeared to narrow after one year.
Measuring Quality in Child Care Settings: Parent and Provider Perspectives
Presenter(s):
Michel Lahti, University of Southern Maine, mlahti@usm.maine.edu
Allyson Dean, University of Southern Maine, adean@usm.maine.edu
Sarah Rawlings, University of Southern Maine, srawlings@usm.maine.edu
Abstract: More than half the states in the United States are implementing or piloting a Quality Rating and Improvement System for their child care programming. The federal Office of Child Care has recently released proposed Benchmarks intended to promote the establishment by all states of a common set of Quality Improvement components. Since early 2008, the state of Maine's Division of Early Care and Education, DHHS, has implemented Quality for ME, which identifies a set of standards for various program types. This paper will present findings on how parents and providers perceive the quality of child care settings, and whether perceptions of quality differ depending on the quality Step rating assigned to the program. This is a multi-year, statewide evaluation of the program. The authors implemented a random assignment design that includes observations of at least 320 child care programs. The presentation will describe the study design, review the most current literature on measuring family engagement and parental perceptions of quality, and describe how early care and education staff view the quality of the programs.
