Session Title: Design Thinking: The Art of Evaluation
Panel Session 702 to be held in Lone Star A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Presidential Strand and the Evaluating the Arts and Culture TIG
Chair(s):
Ching Ching Yap, Savannah College of Art and Design, cyap@scad.edu
Discussant(s):
Ching Ching Yap, Savannah College of Art and Design, cyap@scad.edu
Abstract: Design thinking describes both a process and an approach to problem solving that has been developed by designers for the arts over many decades. Popularized by books such as Daniel Pink’s A Whole New Mind, Tim Brown’s Change by Design, and The Design of Business by Roger Martin, design thinking is now attracting the attention of practitioners in fields outside of the arts. The diffusion of design thinking into other fields presents an opportunity for multidisciplinary teams to innovatively engage in problem solving. In this panel, practicing design professors from the Savannah College of Art and Design share aspects of the design thinking process, introduce related theory, and provide practical examples of how this innovative approach could be applied across many fields, including evaluation. Using hands-on design exercises, the audience will experience the design thinking process and begin to see the art of evaluation through the lens of design thinking.
Make the Familiar Strange, and Make the Strange Familiar
Christine Miller, Savannah College of Art and Design, cmiller@scad.edu
From the perspective of a researcher of innovation in product development, Dr. Miller will explain the fundamental concepts and essential components of design thinking and its application process. She will also provide practical examples of how design teams engage in design thinking by challenging heuristics to determine the best solutions for current design problems. Christine Miller is a professor of Design Management at the Savannah College of Art and Design (SCAD). She earned a Ph.D. in Anthropology and Management from Wayne State University, where she conducted ethnographic research on the relationship between innovation and formalization at a Tier One automotive supplier. Her research interests include how sociality and culture influence the design of new products, processes, and technologies. She also studies technology-mediated communication within groups, teams, and networks and the emergence of technology-enabled collaborative innovation networks (COINs).
The Breakthrough of Praxis
Robert Fee, Savannah College of Art and Design, rfee@scad.edu
The application of design thinking allows design teams to engage in abductive reasoning and reframe problems, elevating the design process beyond praxis to achieve breakthrough solutions. Reframing problems through abductive reasoning often involves constructing a new point of view that reveals a new logical paradigm. In this presentation, Professor Fee will draw from his extensive experience as an industrial designer and educator as he describes how organizations or groups discover the “breakthrough” during the design thinking process. Robert Fee is a professor and the graduate program director for Design Management at SCAD. Joining the faculty in 1998, Fee played a significant role in formulating the direction of the Industrial Design Department. This year, Fee was recognized as one of the Most Admired Educators of 2010 by DesignIntelligence, the bi-monthly report published through the Design Futures Council.
Taking the First Step Towards Design Thinking
David Ringholz, Savannah College of Art and Design, dringhol@scad.edu
Because individuals and organizations have differing degrees of tolerance for risk and innovation when collaboratively designing solutions, engaging all parties in a conversation is crucial to setting appropriate expectations and managing the creative process. One design tool for facilitating this conversation is the Gains Map, which allows participants to plot the position of concepts along the continuum of innovation from Tactical to Breakthrough. Ringholz will lead the audience in using the Gains Map to experience the inspiration and ideation phases of the design thinking process and to take the first step towards design thinking. David Ringholz is a faculty member and the acting chair of SCAD’s industrial design program. Throughout his award-winning career, he has worked on projects with Coca-Cola, Fossil, Sony-Ericsson, Newell-Rubbermaid, Philips, Sunbeam, Target, Home Depot, and others. In 2006, the Design Futures Council recognized Ringholz as one of the top 40 Design Educators in the US.

Session Title: Theory of Change Online: Implications for Participatory Planning and Evaluation Using Technology
Panel Session 703 to be held in Lone Star B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Helene Clark, ActKnowledge, hclark@actknowledge.org
Discussant(s):
Helene Clark, ActKnowledge, hclark@actknowledge.org
Abstract: The panel will consist of participants in a pilot of an online application for developing Theories of Change, which can be stored, shared, and commented on from anywhere. Because building a theory of change is an inherently participatory process, how do the benefits and challenges of using an online application affect (either enhance or reduce) participatory processes, and what is the effect on the overall theory? What issues did the panelists encounter in a) creating their TOCs and b) using an online application for the first time, and c) what are the implications for future uses? Three panelists from different types of organizations, topic areas, and geographic contexts will discuss their theories and the uses of online capabilities.
Using Technology to Enable Constituency Feedback
David Bonbright, Keystone Accountability, david@keystoneaccountability.org
This presentation will share our experiences in using technology to enable constituency feedback and our early experiences with Theory of Change Online as a way to enhance participation of key constituents in planning. We come to this as a partner to social investors as they build learning and improvement systems with their grantees. The context for this is the entire value chain of social investor, implementer, and primary constituents/beneficiaries. Technology enables us to create a heretofore very hard-to-see whole-system view.
Differences in Theory Development Before and After Using an Online Theory of Change Application
Abra McAndrew, University of Arizona, mcandrew@email.arizona.edu
This presentation will discuss the process we used to develop our Theory of Change without the online drawing program, and how that process might be experienced differently in the TOCO environment. To build our model pre-TOCO, we had to use other tools such as Post-It notes and sketches on a white board. This allowed the members of the group to participate in a kinesthetic way, which has the advantage of the participants’ physical and immediate connection to developing the model. However, the disadvantages were that the system was a bit messy and it was sometimes difficult for all members to clearly see what was going on (i.e., sloppy writing, abbreviations, etc.). Also, at the end of the day I, as the facilitator, needed to reproduce the drawing suggested by the group in a PowerPoint or Word document (we tried both, and both were too clunky to use during the actual discussion process). This took the product out of the group’s hands as I took back the ideas and shaped the drawing. The advantage of the TOCO environment is that the Outcomes and Connectors could be easily projected and manipulated by participants during discussion, giving them a more direct hand in the final product.
Using Theory of Change Online as Part of an Advocacy Approach to the Needs of Children
Jodie Langs, West Coast Children's Clinic, jlangs@westcoastcc.org
This presentation will discuss using Theory of Change Online in the context of an organization that uses Theories of Change as an advocacy tool to promote a better understanding of the mental health needs of children in the child welfare system and to identify the most effective practices and public policies to meet these needs. Within WestCoast we have facilitated the Theory of Change process with organizational staff as a program design and evaluation tool. The presentation will also discuss the agency’s experience in using Theory of Change Online in the context of designing, evaluating, and revising program interventions in the field of children’s mental health.

Session Title: Training in Evaluation: Formal and Informal Capacity Building
Multipaper Session 704 to be held in Lone Star C on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Stewart Donaldson,  Claremont Graduate University, stewart.donaldson@cgu.edu
Innovative Approaches for Evaluation Education and Training: Examples From the Claremont Graduate University (CGU) Evaluation Programs
Presenter(s):
John LaVelle, Claremont Graduate University, john.lavelle@cgu.edu
Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
Abstract: In recent years the practice of evaluation has grown substantially, as evidenced by a rise in the number of professional evaluation organizations, AEA membership, the demand for evaluation services, and the number of universities educating the next generation of evaluators (LaVelle & Donaldson, 2010). This presentation will highlight the innovative approaches to learning being used by Claremont Graduate University (CGU), a long-standing program that has consistently offered specific training in evaluation to diverse groups of learners. The CGU program will be discussed in relation to its evaluation curriculum, distance education model for the certificate program, online workshops, annual professional development workshops, and evaluation debates. The presenters will discuss some of the key insights that have been gleaned from evaluation data collected in efforts to improve each of these approaches to providing evaluation education, training, and capacity building.

Session Title: Foundation Effectiveness: Aligning Impact Strategy and Evaluation
Demonstration Session 705 to be held in Lone Star D on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Dawn Martz, Foellinger Foundation Inc, dawn@foellinger.org
Mike Stone, Impact Strategies Inc, mike@help-nonprofits.com
Abstract: The session presents a case study of how the Foellinger Foundation developed a comprehensive evaluation program. The program is based on the following three key lessons from the field:
• A foundation needs to be clear about the type of impact it hopes to create, and then should act accordingly.
• Foundation effectiveness takes into account both grantee and foundation performance.
• All evaluation activity should be aimed at the learning needs of the foundation and its grantees.
The session illustrates how to develop a foundation evaluation program that takes into account:
• How a foundation uses its resources to create the type of impact it desires (i.e., the grantmaking strategy)
• How a foundation allocates and aligns its internal processes to support the grantmaking strategy (i.e., the organizational strategy)
The key framework used in the process draws from Peter Frumkin’s Strategic Giving (2006).

Session Title: Applications of Multiple Regression for Evaluators
Demonstration Session 706 to be held in Lone Star E on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Dale Berger, Claremont Graduate University, dale.berger@cgu.edu
Abstract: Multiple regression is a powerful tool that has many applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships, including interactions. This demonstration will begin with a review of correlation, assumptions for statistical tests, diagnostics, and remedies for problems. A review of simple regression will be followed by examples of multiple regression models that may be useful in evaluation research, including models with categorical predictors and interactions. A handout with examples will include SPSS/PASW instructions and output.
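The handout pairs these models with SPSS/PASW syntax; purely as a hedged illustration of the same kind of model in open-source tools, the sketch below fits a multiple regression with a categorical predictor and an interaction using Python's statsmodels. The dataset and variable names are hypothetical, not taken from the session materials.

```python
# Minimal sketch of a multiple regression with a categorical predictor and an
# interaction term, using statsmodels in place of the SPSS/PASW syntax the
# session handout covers. The data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
data = pd.DataFrame({
    "hours_of_service": rng.uniform(0, 40, n),    # continuous predictor
    "site": rng.choice(["urban", "rural"], n),    # categorical predictor
})
# Simulated outcome with a site-by-hours interaction plus noise
data["outcome"] = (
    5
    + 0.4 * data["hours_of_service"]
    + np.where(data["site"] == "urban", 2.0, 0.0)
    + np.where(data["site"] == "urban", 0.3, 0.0) * data["hours_of_service"]
    + rng.normal(0, 2, n)
)

# C() marks a categorical predictor; '*' expands to main effects plus interaction
model = smf.ols("outcome ~ hours_of_service * C(site)", data=data).fit()
print(model.summary())   # coefficients, t-tests, R-squared, diagnostics
print(model.params)      # point estimates only
```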

Session Title: The Process of Evaluating Supporting Partnerships to Assure Ready Kids (SPARK) and Ready Kids Follow-Up (RKF): Embracing and Informing Truth, Beauty, and Justice
Panel Session 707 to be held in Lone Star F on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Susan York, University of Hawaii, Manoa, yorks@hawaii.edu
Discussant(s):
Huilan Krenn, W K Kellogg Foundation, huilan.krenn@wkkf.org
Abstract: The purpose of the proposed panel is to discuss the evaluation process from the Supporting Partnerships to Assure Ready Kids (SPARK) and Ready Kids Follow-Up (RKF) studies. SPARK was a nation-wide school readiness initiative funded by the W.K. Kellogg Foundation, while RKF examines school success for students who benefited from SPARK programs. Panel members include the Initiative Level Evaluator (ILE) and local evaluators from Hawaii, New Mexico, and Ohio. The panel will discuss how the evaluation process evolved from individual designs using different measures for SPARK to adopting common measures across sites for RKF. Our focus is to discuss the opportunities gained/lost in both designs, and to generate further discussion with regard to evaluating programs with divergent designs in culturally complex communities.
Methodological Challenges to Initiative-level Evaluation
Patrick Curtis, Walter R McDonald and Associates Inc, pcurtis@wrma.com
The presenter led the Initiative Level Evaluation (ILE) Team for SPARK and is now Principal Investigator for the Ready Kids Follow-up. The presentation is an overview of the methodological challenges and how those challenges were addressed in a multi-site evaluation that began in 2003. The role of the ILE Team progressed from a laissez-faire relationship with the eight SPARK grantees to major responsibility for shepherding the evaluation effort in the last two years of SPARK. Originally, the ILE Team was not clear about its role in the project, but was later challenged to provide intellectual leadership. The final evaluation report remains the only written documentation of SPARK at the initiative level.
A So-called “Improved” Evaluation Method Viewed Through an Indigenous Lens
Morris Lai, University of Hawaii, Manoa, lai@hawaii.edu
In an effort to improve the methods used in the earlier evaluations of SPARK, the sites agreed upon a common set of data-collection instruments and methods of administration for the RKF study. While such an increase in consistency can be viewed as a methodological improvement, from an indigenous viewpoint such “improvements” could result in a lessening of evaluation quality. At the Hawai‘i site, the original SPARK evaluation honored oral interview input from participants as primary evaluation data, as opposed to treating it mainly as data useful for corroborating primary, often quantitative or written, data. In the “improved” evaluation approach, short responses on written instruments are now primary sources of data. I will discuss how some aspects of the “improved” approach, when viewed through an indigenous lens, are indeed improvements, whereas other aspects could indicate a lowering of methodological quality.
The Tension Between Difference and Commonality in Multi-site Initiative: What are the Challenges to Quality?
Marah Moore, i2i Institute Inc, marah@i2i-institute.com
In New Mexico, one of several states participating in the 5-year WKKF SPARK project, the SPARK evaluation straddled the divide between the commonalities of an initiative shared across multiple sites and the unique demands of an individual site with substantial differences in context and project implementation. This tension was mirrored at our individual project level, as we funded multiple communities to participate in our statewide SPARK project. Finding the balance between common measures and learning through differences is not easily resolved. We will speak to the following questions: What challenges to evaluation quality arose in the various approaches that were taken to find that balance? What did we learn about fostering unique responses while measuring collective change? Site vs. initiative: whose success matters? How has this experience shaped our current thinking about evaluating multi-site efforts?
Including Evaluation From the Start: How Ohio SPARK Used Evaluation for Program Planning and Program Expansion
Peter Leahy, University of Akron, leahy@uakron.edu
This paper discusses how evaluation became integrated into the SPARK Ohio program from the initial planning grant stage. The role of the evaluation team in program planning and program development in the formative years of SPARK Ohio is considered with illustrations of how process evaluation led to continuous program improvement. The outcome evaluation design used in Ohio SPARK is also discussed, along with kindergarten entry and longitudinal program results since 2005. Replications of the SPARK Ohio program are spreading throughout Ohio and the role evaluation plays in the replication process, and the challenges that it has faced, are also discussed.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Measuring Success: Applying Short-Term Indicators to Measure Long-Term Success in Human Capital Investment Programs
Roundtable Presentation 708 to be held in MISSION A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Margaret McKenna, ConTEXT, mmckenna3@earthlink.net
Abstract: Various programs that invest in human capital, such as employment and education services for homeless individuals, typically refer to goals of “breaking the poverty cycle” and moving families to economic independence. These desirable goals are measured by assessing participants’ income increases or financial assets, which occur in the long term, yet funders require short-term evaluations. The roundtable discussion raises several questions about how to measure program success at the individual and family level. What are indicators of program success at the local neighborhood or community level that are linked to the ultimate goal that new workers enter the labor market? The results from the evaluations of two programs will be highlighted – a career coaching model and a refugee farming economic independence project. Participants will be asked to share their challenges and accomplishments in assessing short- and long-term changes at multiple levels in human capital investment programs.
Roundtable Rotation II: How to Maintain Evaluation Quality in a Changing Environment: The Importance of Mixed Methods
Roundtable Presentation 708 to be held in MISSION A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Michael Smith, Louisiana Public Health Institute, msmith@lphi.org
Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
Elmore Rigamer, Catholic Charities of New Orleans, erigamer@archdiocese-no.org
Abstract: Ciara Community Services (CCS) in New Orleans offers a continuum of housing and support services to the chronically mentally ill who are homeless or at risk for homelessness. Catholic Charities Archdiocese of New Orleans (CCANO) has been operating Ciara House, the oldest of the CCS facilities, for twenty-five years, but the program has never been formally evaluated. A 12-month longitudinal mixed-methods study of Ciara House residents was designed, with data collection planned every 3 months. However, 3 months into the study, the funding for the program was cut and the program could only keep residents for 90 days instead of a year. Attempts were made to follow individuals once they left the program, but many were lost to follow-up. Qualitative case studies of longer-term residents before they were forced to leave have proven to be an important source of data for understanding the effectiveness of the program.

Session Title: From Deliverables to Strategies: Experience of Implementing a National Evaluation Policy via Statewide Asthma Programs
Panel Session 709 to be held in MISSION B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the
Chair(s):
Paul Garbe, Centers for Disease Control and Prevention, plg2@cdc.gov
Abstract: Discussions about evaluation policy stress the importance of developing a better understanding of existing evaluation policies as well as creating an evidence base about the effectiveness of such policies. In 2009, the Air Pollution and Respiratory Health Branch (APRHB) at the Centers for Disease Control and Prevention issued a new Funding Opportunity Announcement that shifted the focus of 36 state asthma programs from implementing evaluations in the first year of funding to systematically identifying high-priority evaluations to conduct throughout the funding lifecycle. To facilitate the successful implementation of this new evaluation policy, APRHB published a guide describing the strategic evaluation planning process and created internal policies to promote the use of consistent messages and procedures across a newly formed group of evaluation technical advisors. Lessons learned in implementing and supporting efforts associated with this new evaluation policy will be presented, with reflections on the strengths and weaknesses of this approach.
Shaking It Up: A New Evaluation Policy for Strategic Evaluation Planning
Leslie Fierro, SciMetrika, let6@cdc.gov
In 2009, a team of evaluators within the Air Pollution and Respiratory Health Branch (APRHB) at the Centers for Disease Control and Prevention established a new policy for conducting evaluations within state asthma programs. Prior to implementing evaluations, states are now required to spend their first year of funding collaborating with a small team to create a strategic evaluation plan. This plan outlines the high-priority evaluations the state program intends to conduct over the next four years. To support states in developing strategic evaluation plans, the APRHB developed Learning and Growing through Evaluation, a document that details the envisioned process for creating a strategic evaluation plan. The background and rationale for this policy shift as well as the details of creating a strategic evaluation plan will be discussed.
Reality Check: Benefits and Limitations of Strategic Evaluation Planning in Practice
Eric Armbrecht, Open Health LLC, eric@openhealth.us
Peggy Gaddy, Missouri Asthma Prevention and Control Program, peggy.gaddy@dhss.mo.gov
Sherri Homan, Missouri Asthma Prevention and Control Program, sherri.homan@dhss.mo.gov
This presentation will focus on our state’s experience with implementing guidance and recommendations from the CDC’s Learning and Growing through Evaluation: State Asthma Program Evaluation Guide. It will address benefits and limitations of the six-step method and describe how the method was adapted in practice. We will review how our state uses a strategic evaluation planning process with diverse statewide stakeholders to improve partner engagement and dissemination as well as program effectiveness.
Supporting Internal Evaluation Policy: Development of Evaluation Technical Assistance Sub-team and Policy Implementation Materials
Sarah Gill, Cazador, sgill@cdc.gov
Amanda Savage Brown, Cazador, abrown2@cdc.gov
In support of its internal evaluation policy, CDC’s Air Pollution and Respiratory Health Branch established a sub-team of Evaluation Technical Advisors (ETA) to provide evaluation-related technical assistance to funded state asthma programs. ETAs provide direct support to assigned state evaluators and coordinate with other Branch staff to ensure a programmatically sound, data-driven approach to evaluation. ETAs meet weekly to provide cross-team support and to ensure consistent information is provided to all states. To support ETAs in this new role, staff created an ETA Guide that defines roles and responsibilities and provides guidance on internal and external communication and coordination as well as other routine technical assistance tasks and processes. It also includes supporting resources and tools (e.g., contact log, evaluation plan assessment tool, site-visit checklist, etc.). Use of the Guide and accompanying tools during the first implementation year will be discussed.
Providing Evaluation Technical Assistance: Are We Doing What We Expected and What Exactly Are We Doing?
Sheri Disler, Centers for Disease Control and Prevention, sdisler@cdc.gov
Amanda Savage Brown, Cazador, abrown2@cdc.gov
The role of the CDC Asthma Program Evaluation Technical Advisor (ETA) is to provide direct support to their assigned states’ evaluator, and to coordinate with other ETAs and Branch staff to ensure a programmatically-sound, data-driven approach to evaluation. During the first implementation year of “Addressing Asthma from a Public Health Perspective” (a 5-year funding cycle), grantees were tasked with developing a strategic evaluation plan. State-specific, tailored technical assistance (TA) in support of developing this plan was provided through conference calls, emails, listserv postings, and site visits. State evaluators had a wide range of experience and diverse TA needs. This presentation describes the TA needs identified across States, the content of TA delivered to State Evaluators, and the development of resources to assist the ETA team. Comparisons will be made between activity types anticipated for ETAs at funding outset and the actual provision of TA throughout the first implementation year.

Session Title: Framework for Orchestrating Systems Transformation: Theory and Practice for Promoting Dynamic Systems Change
Multipaper Session 710 to be held in BOWIE A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Pennie Foster-Fishman,  Michigan State University, fosterfi@msu.edu
An Overview of the Framework for Orchestrating Systems Transformation
Presenter(s):
Pennie Foster-Fishman, Michigan State University, fosterfi@msu.edu
Erin Droege, Michigan State University, droegeer@msu.edu
Abstract: While theories of change are recognized as useful tools to guide community change initiative implementation and evaluation, many efforts have failed to bring about their transformative change goals because they do not simultaneously attend to the underlying change processes that support the stated theory. Our presentation attempts to bridge this gap by describing a new framework that includes a collection of essential system change processes that we argue must be developed and maintained throughout the course of a change endeavor to produce transformative outcomes. Called the FROST approach (Framework for Orchestrating Systems Transformation), the approach promotes simultaneous focus on: a) the programmatic theory of change; b) the underlying process of systems change; and c) the dynamic nature of change over time. Potential applications of the framework to community and systems change implementation and evaluation, as well as benefits and limitations of the approach, will be discussed.

Session Title: Where Theory Meets Practice in Assessing Advocacy and Policy Change
Multipaper Session 711 to be held in BOWIE C on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Ehren Reed,  Innovation Network, ereed@innonet.org
How Do You Know You Are Making a Difference? A Collaborative Study With Social Justice Advocacy Organizations in Canada
Presenter(s):
Bessa Whitmore, Carleton University, elizabeth_whitmore@carleton.ca
Maureen Wilson, University of Calgary, mwilson@ucalgary.ca
Avery Calhoun, University of Calgary, calhoun@ucalgary.ca
Abstract: What is the meaning of success and what are the factors or conditions that contribute to it? These are the two questions addressed in a collaborative study with nine very diverse groups and organizations across Canada that are engaged in advocacy work. While policy change is certainly a key aspect when thinking about advocacy effectiveness, our findings broaden the understanding of what success means and how it is achieved. For example, raising public awareness, group functioning and personal experience are regarded as significant aspects of success by those working directly in the field. Factors that contribute to effectiveness include activities that attract and retain participants, elements of group or organizational functioning, and a wide range of methods and strategies. In this presentation, we will discuss our methodology and findings in more detail, and the implications for those working in the field of advocacy evaluation.
Hard Evidence for Hard Times: A Policy Analysis of the Rise of Evidence-based Practice in Evaluation of Home Visiting Programs Using Kingdon’s Multiple Streams Model
Presenter(s):
Stephen Edward McMillin, University of Chicago, smcmill@uchicago.edu
Abstract: This paper examines Kingdon’s problem, policy, and politics streams across two domains: 1) the rise of evidence-based practice in evaluation, and 2) the roughly contemporaneous rise of home visiting programs. This paper then elaborates five significant evaluation barriers that home visiting programs face: 1) Lack of external validity in evaluations due to inadequate attention to implementation issues in complex interventions such as home visiting (American Evaluation Association, 2008); 2) Lack of a coherent sociological perspective in evaluation that acknowledges how social conditions impact individual outcomes; 3) Truncated selection of only individually-focused outcome measures in evaluation that ignore social networks and variables; 4) Inadequate evaluation frameworks that fail to examine all relevant variable domains, such as neighborhood and program factors that influence implementation; and 5) Failure to evaluate home visiting as a broader safety net of potentially universal social supports rather than simply a narrowly targeted intervention.
The Politics and Ethics of Advocacy Evaluation
Presenter(s):
Denise L Baer, Johns Hopkins University, src_dlbaer@hotmail.com
Abstract: Can political advocacy be evaluated? A positive answer presumes that efforts to change the direction of public policy can be objectively identified and measured. The body of political science research on the nature of interests and on the terrain of public opinion change would say “no,” because no political interest is ever really neutral. Yet the field of advocacy evaluation is burgeoning as the advent of mission-oriented philanthropy leads foundations to shift from funding programs and services to seeking policy and systems change. This paper compares and contrasts existing approaches to the evaluation of advocacy with political science perspectives on the ethics and nature of interests and political change. Five types of strategies and their political impact are defined: agenda setting, public policy, communications and public relations, message framing, and electoral and GOTV (get-out-the-vote) strategies. Each uses different tools, has different effects, and requires different time frames.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Investing in Long Term Capacity Building Initiatives for African-based Graduates and Professionals Involved or Interested in the Monitoring and Evaluating (M&E) of Development, Business and Education
Roundtable Presentation 712 to be held in GOLIAD on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Maureen Wang'ati, Measure Africa, njemail@yahoo.com
Abstract: Measure Africa, a privately owned consultancy company based in Nairobi, Kenya, is committed to building the capacity of African-based professionals. We propose to hold a roundtable discussion with interested parties, including Western-based institutions, consultancy firms, consultants, and others, where we will invite them to listen to a brief PPT presentation and short documentary film on the subject. We believe that this topic will be of great interest to conference participants who are aware of the challenge of conducting credible evaluations in Africa and the challenges African-based professionals face in accessing training opportunities and relevant knowledge. We propose to build partnerships and networks at the roundtable and to form a strong foundation that will help us move forward in our mission of building long-term African capacity in evaluation on the continent.
Roundtable Rotation II: Improving Methods of Inquiry in Evaluation Practice: Issues and Recommendations to Incorporate Diverse Views and Perspectives in International and Domestic Program Evaluation
Roundtable Presentation 712 to be held in GOLIAD on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Nicole Jackson, University of California, Berkeley, jackson@berkeley.edu
Abstract: Over the past 20 years, program evaluation has grown significantly. With its growth, program evaluation has faced increased criticism for not including diverse perspectives, which embed different cultural viewpoints and paradigms. Scholars such as Taut (2000) relate this issue to cultural relevancy arguments in the use and dispersion of evaluation models, which may impose certain views and paradigms across international contexts. Other scholars describe this issue as part of a broader, more holistic problem given the increased diversity of individuals and belief systems that exist not just across, but also within, country settings. These issues expose a deep-seated problem in evaluation – whether current evaluation techniques can fully account for diverse perspectives in participant and program values. This proposal introduces two qualitative techniques from anthropology and organizational psychology – pile sorting and action inquiry – which can be used by evaluators to reveal and account for these differences.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: The Internal Evaluator's Dual Role as Project Manager and Evaluator: Lessons in Maintaining Evaluation Quality
Roundtable Presentation 713 to be held in SAN JACINTO on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the
Presenter(s):
Tamika Howell, Institute for Community Living Inc, thowell@iclinc.net
Brian Mundy, Institute for Community Living Inc, bmundy@iclinc.net
William Campbell, Institute for Community Living Inc, wcampbell@iclinc.net
Abstract: For decades, organizations in all sectors have utilized the flexibility of internal resources to manage and improve the planning and development of existing programs. During this roundtable, presenters will discuss the in-house process of program evaluation, where the evaluator's level of involvement in administrative functions varies and where internal evaluators find themselves in a dually functioning role in which the commitment to both sound evaluation and high-quality implementation intersects in complementary and paradoxical ways. This roundtable will review the merits of both external and internal evaluation. Presenters will detail their accomplishments and struggles in maintaining evaluation quality. The discussion will offer members of the evaluation community the opportunity to communicate their experiences and to participate in an initial attempt to develop steps for creating standards for internal evaluation processes that reflect the strengths of external evaluation principles.
Roundtable Rotation II: Internal Evaluation: How to Keep the Fox Out of the Hen House
Roundtable Presentation 713 to be held in SAN JACINTO on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the
Presenter(s):
Jane Nell Luster, Louisiana State University, jnl@comlinkllc.biz
Abstract: Those who serve as the internal evaluator for a program or project face unique circumstances that may challenge complete application of the American Evaluation Association Guiding Principles for Evaluators. As the title implies, it may be difficult to objectively evaluate the program or project of which one is a part. So how do we keep the “fox out of the hen house?” The purpose of this round table discussion is to examine the guiding principles and consider what actions can be taken as an internal evaluator to conduct evaluations so the results are credible and valuable. As the guiding principles document states, “The principles are meant to stimulate discussion about the proper practice and use of evaluation” (p. 1). It is expected that participants will take from the roundtable discussion ideas of ways the internal evaluator can adhere to evaluation principles while balancing the desires of the organization.

Session Title: Leading Change Through Assessment
Skill-Building Workshop 714 to be held in TRAVIS A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Patrice Lancey, University of Central Florida, plancey@mail.ucf.edu
Divya Bhati, University of Central Florida, dbhati@mail.ucf.edu
Abstract: How do you know that students are learning the key concepts in or outside the classroom and that faculty members are creating an environment that enhances learning? Assessment is intended to improve student learning and provides those evaluating it with an opportunity to meaningfully reflect on how learning is best delivered, gather evidence, and then use information to make appropriate changes and measure the effect of those changes. The workshop will provide an overview of assessment, describe the necessary components and illustrate its value and benefits. This interactive session will focus on developing and assessing student learning outcomes for courses, degree programs, and co-curricular and extracurricular programs. Participants will work in groups to practice effective planning, development and evaluation of student learning. This workshop is appropriate for anyone who would like to broaden their understanding of how to link goals and objectives to student learning outcomes and systematic measurement.

Session Title: Fewer Errors and Faster Results: How to Automate Production of Tables and Reports with Software You Already Own
Demonstration Session 715 to be held in TRAVIS B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Integrating Technology Into Evaluation
Presenter(s):
Humphrey Costello, University of Wyoming, humphrey.costello@uwyo.edu
Eric Canen, University of Wyoming, ecanen@uwyo.edu
Reese Jenniges, University of Wyoming, jenniges@uwyo.edu
Abstract: As evaluators, we frequently face the challenge of accurately and efficiently producing large numbers of data tables. Similarly, we often produce multiple near-identical versions of reports (for example, reports on county-level data for each county in a state). The Microsoft Office Suite contains underutilized tools that allow automated production of tables, graphs, and even whole reports. In the process, these tools can reduce errors and decrease the time and effort involved. This demonstration will cover three steps in the automation process: 1) exporting results from statistical packages so that they are friendly to automation within Microsoft Office, 2) using Excel as an automation tool to present the correct data in each table or graph automatically, and 3) linking the automated elements in Excel with a Word document to produce an automated report. Presenters will provide session participants with sample templates and automation scripts.
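The demonstration itself relies on tools already inside the Microsoft Office Suite; purely as a hedged analogue of the same export-then-template idea (not the presenters' approach), the sketch below loops over counties in Python using pandas and the python-docx library. The file names and column names are hypothetical.

```python
# Rough analogue of the table-and-report automation idea described above,
# sketched with pandas and python-docx rather than the Office-native tools
# the presenters demonstrate. File and column names are hypothetical.
import pandas as pd
from docx import Document

# Step 1: results exported from a statistical package in a tidy,
# automation-friendly layout: one row per county per indicator.
results = pd.read_csv("county_results.csv")   # columns: county, indicator, value

def build_county_report(county: str, county_data: pd.DataFrame, out_path: str) -> None:
    """Write one near-identical report per county from the same template logic."""
    doc = Document()
    doc.add_heading(f"{county} County Evaluation Summary", level=1)
    doc.add_paragraph(f"Automated summary of {len(county_data)} indicators.")

    table = doc.add_table(rows=1, cols=2)
    table.rows[0].cells[0].text = "Indicator"
    table.rows[0].cells[1].text = "Value"
    for _, row in county_data.iterrows():
        cells = table.add_row().cells
        cells[0].text = str(row["indicator"])
        cells[1].text = f"{row['value']:.1f}"

    doc.save(out_path)

# Steps 2-3: produce every county report the same way, reducing copy-paste errors.
for county, county_data in results.groupby("county"):
    build_county_report(county, county_data, f"report_{county}.docx")
```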

Session Title: Evaluating Social Change Programs: How Does a Culturally Responsive Approach Apply
Think Tank Session 716 to be held in TRAVIS C on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Deloris Vaughn, Innovation Network, dvaughn@innonet.org
Discussant(s):
Laura Ostenso, Innovation Network, lostenso@innonet.org
Melisa March, Innovation Network, mmarch@innonet.org
Abstract: Foundations and nonprofits often have as the focus of their programs a social change agenda that invariably impacts specific cultural groups. From large-scale, multi-pronged advocacy to specific public will or grassroots organizing strategies, such agendas require talking about the root causes of social injustices in order to unearth the program’s theory of change. Evaluating such programs and their theory of change requires open, straightforward conversations that can bring to the fore issues of culture, race and gender inequities, power, and privilege. As evaluators, how do we facilitate the dialogue, what role does culturally responsive evaluation play in this context, how do we address our own biases, what tools do we use, and what tools do we need? These are the questions that each small group will address, and their responses will be used to compile a list of strategies, resources, and tools to be shared with the whole group.

Session Title: Moving Towards Improving the Quality of Our Evaluations: Approaches and Lessons Learned From Comprehensive Cancer Control at the Centers for Disease Control and Prevention (CDC)
Panel Session 717 to be held in TRAVIS D on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Debra Holden, Research Triangle Institute International, debra@rti.org
Discussant(s):
Phyllis Rochester, Centers for Disease Control and Prevention, pfr5@cdc.gov
Abstract: Comprehensive cancer control (CCC) is an “integrated and coordinated” approach by the Centers for Disease Control and Prevention (CDC) that began in 1998 with the ultimate goal of reducing cancer incidence, morbidity, and mortality. CDC’s funded programs developed CCC plans and are currently implementing cancer control interventions along the cancer control continuum to realize these goals. CDC has worked closely with these programs since inception to provide ongoing technical assistance. In 2007, an evaluation conceptual framework was developed to inform evaluation efforts. Using this framework, CDC has developed and conducted activities that have been instrumental in establishing a system of accountability for public investments in cancer prevention and control. Panelists will describe activities that have enhanced CDC’s evaluations and will build the capacity of our funded partners to develop and implement quality evaluations.
Performance Measures for the National Comprehensive Cancer Control Program (NCCCP): Current State and Future Directions
Julie Townsend, Centers for Disease Control and Prevention, zmk4@cdc.gov
Chrisandra Stockmyer, Centers for Disease Control and Prevention, cstockmyer@cdc.gov
Angela Moore, Centers for Disease Control and Prevention, armoore@cdc.gov
Phyllis Rochester, Centers for Disease Control and Prevention, pfr5@cdc.gov
In order to ensure accountability, document outcomes, and facilitate quality improvement, a set of performance measures was developed for Comprehensive Cancer Control (CCC) grantees. In 2008, the performance measures were piloted with CCC grantees. Based on a review of the pilot results, some performance measures were refined to better document CCC implementation and outcomes. Results from 2009 show that 60% of CCC grantees have diverse organizational representation in their partnerships, 92% conduct regular assessments of the cancer burden, and a median of 60% of partners are implementing the cancer plans. However, results also indicate that some grantees need assistance with leveraging funds, developing complete evaluation plans, and finding evidence-based interventions. This systematic approach to collecting performance measures presents CDC with a unique opportunity to provide enhanced technical assistance to our grantees, effectively communicate the breadth and depth of CCC, measure environmental and policy changes, and demonstrate progress towards outcome indicators.
Comprehensive Cancer Control Evaluation Toolkit: A Resource to Improve the Quality of Evaluation Plans
Angela Moore, Centers for Disease Control and Prevention, cyq6@cdc.gov
LaShawn Curtis, RTI International, lcurtis@rti.org
Cindy Soloe, RTI International, csoloe@rti.org
Phyllis Rochester, Centers for Disease Control and Prevention, pfr5@cdc.gov
The CDC is committed to providing training and technical assistance to our funded partners that will aid in the successful development, implementation, and evaluation of comprehensive cancer control (CCC) programs. In 2008, CDC and evaluators from RTI International conducted a preliminary review of information contained within submitted CCC evaluation plans. The purpose of the evaluation plan review was to assess the extent to which these plans addressed the evaluation performance measure requirements. Results from this assessment, coupled with the findings of the performance measures pilot, indicated the need for additional guidance from CDC regarding the development and implementation of evaluation plans. A process was identified that included a review of evaluation resources and the formation of working groups to develop an evaluation toolkit. The development of this resource is an initial step towards providing tools and products that will improve the quality of CCC evaluation plans.
The Development of a Menu of Outcomes Database (MOD): A Practical Evaluation Tool to Assist State, Tribal and Territorial Comprehensive Cancer Control Programs
Brenda Stone-Wiggins, Research Triangle Institute International, bwiggins@rti.org
Phyllis Rochester, Centers for Disease Control and Prevention, pfr5@cdc.gov
Chrisandra Stockmyer, Centers for Disease Control and Prevention, cstockmyer@cdc.gov
Angela Moore, Centers for Disease Control and Prevention, cyq6@cdc.gov
Janice Tzeng, RTI International, jtzeng@rti.org
Nathan Mann, RTI International, nmann@rti.org
Debra Holden, Research Triangle Institute International, debra@rti.org
To guide and assist the CDC programs in their evaluations, a menu of outcomes that are feasible across the programs was developed. Outcome indicators were identified through cancer and chronic disease experts, national plans, and the literature; mapped to the conceptual evaluation framework of the National Comprehensive Cancer Control Program (NCCCP); and categorized by level of impact and other domains (i.e., cancer continuum, risk factors, and cancer site). An environmental scan was conducted to identify national data sources and measures for the respective indicators; the lack of national data sources identified gaps and data needs. The result is the Menu of Outcomes Database (MOD), a searchable MS Access database of 143 indicators linked to 441 measures from 42 national data sources. Database design and function were refined based on usability testing and piloting with CCC programs. Future enhancements and expansion to include state data sources are proposed for subsequent phases.

Session Title: Using Geographic Information Systems (GIS) to Enhance the Quality and Validity of Evaluations in Human Services
Panel Session 718 to be held in INDEPENDENCE on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Catherine Batsche, University of South Florida, cbatsche@fmhi.usf.edu
Abstract: Spatial analysis uses Geographic Information Systems (GIS) to capture, store, analyze, and display geographically referenced data. This session will provide three examples that used GIS to enhance evaluation in the human services: (1) a prototype evaluating affordable, safe, and effective housing for emancipated youth leaving foster care; (2) a criteria-based approach for locating a new mental health facility; and (3) a model for reducing risk factors for pregnant and parenting mothers. The session will conclude with a discussion of the potential of GIS to enhance the quality and validity of findings by incorporating location-based factors as part of the evaluation criteria. The GIS models will be discussed in terms of three standards for quality (House, 1980): justice-as-fairness, truth as credibility, and beauty in the form of images that add coherence and economy to the findings.
Using GIS to Locate and Evaluate Affordable, Safe, and Effective Housing for Emancipated Youth Leaving Foster Care: Project LEASE
Catherine Batsche, University of South Florida, cbatsche@fmhi.usf.edu
Project LEASE is a GIS prototype to assist with the identification of housing that is affordable, accessible, safe, and effective in supporting the educational goals and parental status of emancipated foster youth. Spatial analysis was used to identify rental properties based on inclusion criteria of affordability and accessibility to public transportation and grocery stores; exclusion criteria based on proximity to areas of high crime, prostitution, and sexual predator residence; and suitability based on proximity to health care, mental health care, and child development services. The outcomes were applied to four scenarios based on the educational goals and parental status of the youth. The results demonstrate that the evaluation of housing options for youth exiting the foster care system can be enhanced by including location-based criteria in the analysis.
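Project LEASE was built in its own GIS platform; the following is only a rough sketch, under assumed layer names, column names, and distance thresholds, of how the inclusion/exclusion/suitability screening it describes could be expressed with the open-source geopandas library.

```python
# Hedged sketch of criteria-based spatial screening of rental properties,
# loosely modeled on the Project LEASE description above. All layer names,
# file names, columns, and thresholds are hypothetical.
import geopandas as gpd

# Candidate rentals and supporting layers, reprojected so distances are in meters
rentals = gpd.read_file("rentals.shp").to_crs(epsg=3857)
transit = gpd.read_file("bus_stops.shp").to_crs(epsg=3857)
crime = gpd.read_file("high_crime_areas.shp").to_crs(epsg=3857)
clinics = gpd.read_file("health_clinics.shp").to_crs(epsg=3857)

# Inclusion criteria: affordable rent and within 800 m of a bus stop
rentals = rentals[rentals["monthly_rent"] <= 700]
rentals = gpd.sjoin_nearest(rentals, transit, how="left", distance_col="m_to_bus")
rentals = rentals[rentals["m_to_bus"] <= 800].drop(columns="index_right")

# Exclusion criteria: drop properties that fall inside mapped high-crime areas
in_crime = gpd.sjoin(rentals, crime, how="inner", predicate="within").index
rentals = rentals.drop(index=in_crime)

# Suitability: rank remaining properties by distance to the nearest clinic
rentals = gpd.sjoin_nearest(rentals, clinics, how="left", distance_col="m_to_clinic")
print(rentals.sort_values("m_to_clinic")[["monthly_rent", "m_to_bus", "m_to_clinic"]].head())
```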
Using GIS to Locate a Community Mental Health Center: A Case Illustration
Roger Boothroyd, University of South Florida, boothroyd@fmhi.usf.edu
Estimates suggest that 26% of adults in the US have a diagnosable mental disorder (Kessler, Chiu, Demler, & Walters, 2005), but only 25% receive treatment (Wang, Lane, Olfson, Pincus, Wells, & Kessler, 2005). Barriers associated with access to care include (1) availability of mental health providers (US Department of Health and Human Services, 1999) and (2) distance from available services (Higgs, 2004). GIS was used to identify potential sites for locating a new community mental health center (CMHC) in one county in Florida. The criteria included accessibility, distance from existing CMHCs, zoning, land use, parcel size, mental health need, and median family income. This presentation will report the outcome of the evaluation and will summarize the potential value and application of GIS to mental health services research, and how the selection of appropriate criteria can meet House’s evaluation standards of truth, justice, and beauty.
A GIS Model to Reduce Risk Factors for Pregnant and Parenting Mothers
Robert Lucio, University of South Florida, rlucio@bcs.usf.edu
The Healthy Start Coalition of Pinellas, Inc. uses GIS yearly to examine maternal and child health indicators. The coalition uses the resulting maps to evaluate areas of greatest need for services. Risk factors are mapped by zip code to determine where funding and services are most necessary. The risk factors include preterm births, low birth weight, 3-year average infant and fetal mortality, teen pregnancy, maternal Body Mass Index (BMI), and overweight/obesity rates of mothers. The maps are shared with community members to identify intervention strategies that may have led to increases or decreases over time in risk factors and fetal/infant death rates. Mapping allows community members to visualize their neighborhoods, personalize the data for their surrounding area, and offer insights into their communities.
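The coalition produces its maps with its own GIS tooling; as a hedged illustration of the zip-code choropleth idea only, a minimal geopandas/matplotlib sketch (with hypothetical file and column names) might look like this:

```python
# Hedged sketch of mapping a maternal risk indicator by zip code, using the
# open-source geopandas/matplotlib stack. File names, column names, and data
# are hypothetical, not the coalition's.
import geopandas as gpd
import pandas as pd

# Zip-code boundary polygons and a table of risk-factor rates per zip code
zips = gpd.read_file("pinellas_zip_codes.shp")     # includes a ZIP column
rates = pd.read_csv("maternal_risk_rates.csv")     # columns: ZIP, preterm_rate

# Join tabular indicators onto the geography, then map the indicator so
# community members can see where need is concentrated.
mapped = zips.merge(rates, on="ZIP", how="left")
ax = mapped.plot(column="preterm_rate", cmap="OrRd", legend=True,
                 missing_kwds={"color": "lightgrey"})
ax.set_title("Preterm birth rate by zip code (illustrative)")
ax.set_axis_off()
ax.figure.savefig("preterm_rate_by_zip.png", dpi=200)
```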

Session Title: A Practical Comparison of Longitudinal Data Analysis Methods
Multipaper Session 719 to be held in PRESIDIO A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Mende Davis,  University of Arizona, mfd@u.arizona.edu
Traditional Approaches for Longitudinal Data
Presenter(s):
Mende Davis, University of Arizona, mfd@u.arizona.edu
William Becker, University of Arizona, beckerwj@email.arizona.edu
Abstract: Evaluation studies have measured outcomes at multiple time points for several reasons. For evaluation purposes, multiple time points allow us to at least obtain some data in between and to examine how participants change over time. In traditional approaches, such as t-tests and analysis of variance, the dependent variable is the change score or post-test score. When a study has multiple time points and covariates, these simple analyses can only provide results regarding two time points (e.g., pre-post). With a complete dataset without any missing data, a pre-post approach may provide us with a snapshot of how people change from the first time point to the last time point. There are riches to be found in longitudinal data; however, using two observations at a time does not do these data justice. In this presentation, we will cover the traditional approaches for longitudinal data.
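As a concrete, hedged illustration of the traditional pre-post analyses described above (not taken from the presenters' materials), the sketch below runs a paired t-test and its change-score equivalent on simulated data.

```python
# Minimal sketch of "traditional" pre-post analyses: a paired t-test on
# pre/post scores, which is equivalent to a one-sample t-test on change
# scores. Data are simulated; in practice these would be the first and last
# measurement occasions of a longitudinal study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 60
pre = rng.normal(50, 10, n)
post = pre + rng.normal(3, 5, n)   # simulated average gain of about 3 points

change = post - pre

# Paired t-test on the two time points
t_paired, p_paired = stats.ttest_rel(post, pre)

# Same test expressed as a one-sample t-test on the change scores
t_change, p_change = stats.ttest_1samp(change, popmean=0.0)

print(f"paired t = {t_paired:.2f}, p = {p_paired:.4f}")
print(f"change-score t = {t_change:.2f}, p = {p_change:.4f}")
# With more than two waves, these approaches discard the intermediate
# observations, which is the limitation the presentation highlights.
```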

Session Title: Measuring Evaluation Capacity to Enhance Quality of Evaluation Capacity Building (ECB) Activities in Federal Public Health and Systemic Change Programs
Panel Session 720 to be held in PRESIDIO B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Health Evaluation TIG
Chair(s):
Maureen Wilce, Centers for Disease Control and Prevention, muw9@cdc.gov
Discussant(s):
Hallie Preskill, FSG Social Impact Advisors, hallie.preskill@fsg-impact.org
Abstract: Measuring evaluation capacity among program evaluation staff can inform the development of high quality evaluation capacity building (ECB) activities. In this panel, speakers will describe the development, adaptation and application of a tool to measure evaluation capacity among program staff in three public health / social service programs. Decentralized programs must build capacity for internal evaluation at all organizational levels, including national, state, and local, as applicable. Panelists will share details of the development of a tool to measure individual and organizational capacity for evaluation among state grantees and the subsequent adaptation of the instrument to assess evaluation capacity among evaluation staff at federal and local levels. Finally, panelists will discuss how they are using evaluation capacity assessment information at multiple organizational levels to tailor ECB activities to better meet needs.
Measuring Evaluation Capacity: Development of a Tool
Carlyn Orians, Battelle Memorial Institute, orians@battelle.org
This presentation describes the development of a tool to measure evaluation capacity among local evaluators in order to inform the development of ECB activities. First, we review the definition and conceptual basis for ECB. Next, we discuss the specific capacities that ECB activities may target and the specific constructs that can usefully be measured at baseline to inform these activities. Finally, we describe the development and content of an evaluation capacity measurement tool designed to meet these needs. The tool addresses individual knowledge, experience, skills, and attitudes related to evaluation, as well as organizational practices and readiness that support evaluation. Initial pretest results are also presented.
Building Evaluation Capacity for Asthma Control Programs: Assessing What You Don’t Know
Kari Cruz, Centers for Disease Control and Prevention, hgv3@cdc.gov
Maureen Wilce, Centers for Disease Control and Prevention, muw9@cdc.gov
Building evaluation capacity in an organization is a complex, multi-faceted process. CDC’s National Asthma Control Program has embarked on a process to build evaluation capacity among its national program and funded partners to enable them to conduct useful and feasible program evaluations. This presentation will share experiences in adapting and using a new self-assessment tool to strategically assess CDC’s capacity to provide high quality technical assistance and identify programmatic evaluation needs and organizational needs for evaluation capacity development. The self-assessment tool is designed to support the professional development of evaluators practicing at the state, territory and national levels by identifying strengths and needs and by stimulating reflection, both individually and with fellow evaluators. Through this process CDC can tailor its ECB activities, identify opportunities for evaluators to advance their evaluation skills, and recommend additional evaluation resources that enhance evaluation practice.
Building Evaluation Capacity for State Oral Health Programs: Using Data to Plan and Focus Initiatives
Cassandra Martin, Centers for Disease Control and Prevention, bkx9@cdc.gov
Developing and enhancing the evaluation capacity of state oral health programs is an essential and multifaceted component of the CDC Division of Oral Health (DOH) Infrastructure and Capacity Development program. To support grantee evaluation efforts, the DOH evaluation team undertakes evaluation capacity building (ECB) activities utilizing workshops, trainings, technical assistance and resources. Yet the lack of systematic and ongoing measurement of evaluation capacity at the state and federal levels makes it difficult to understand growth in evaluation competency. Thus, an evaluation needs assessment was conducted among the evaluation team and grantees. Results from the assessment pinpointed staff development needs for evaluation and identified the critical components of a pathway for implementing ECB activities. The presenters will describe their experience conducting the evaluation needs assessment and present the evaluation capacity building plan for state oral health programs.
Targeting Evaluation Capacity Building (ECB) Needs in State and Tribal Child Welfare Systemic Change Projects
Julie Morales, University of Denver, julie.morales@du.edu
In 2008, the Administration for Children and Families expanded its Training and Technical Assistance Network by creating five regional Child Welfare Implementation Centers charged with working with State and Tribal child welfare systems to implement in-depth, long-term systemic change projects aimed at improving outcomes for children and families. As evaluation lead for the Mountains and Plains Child Welfare Implementation Center (MPCWIC), which serves regions 6 (New Mexico, Texas, Oklahoma, Arkansas and Louisiana) and 8 (Montana, North Dakota, Wyoming, South Dakota, Utah and Colorado), Julie Morales works with State/Tribal leadership to develop project evaluation plans that assess the overall effectiveness of their system change projects. In this role, the MPCWIC evaluation team assesses evaluation capacity and provides technical assistance on needed ECB. Data on the first cohort of MPCWIC project sites will be presented, and the presenters will discuss how this information is used to inform targeted evaluation capacity building technical assistance.

Session Title: Taking Stock of the Quality of Evaluation Research on School-Based Prevention Programs
Panel Session 721 to be held in PRESIDIO C on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Scott Crosse, Westat, scottcrosse@westat.com
Abstract: The proposed session will take a high level look at the quality of evaluation research on school-based prevention, examine the relationship between aspects of quality and measures of effect, and consider the overall breadth and depth of evidence on school-based prevention. This session draws on the results of an extensive systematic review of evaluation literature on school-based programs intended to prevent youth substance abuse and school crime. To identify effective or “research-based” programs, the review assessed the quality of research methods of studies, and synthesized the results on effectiveness of methodologically strong studies. The presenters consider issues bearing on: (a) criteria for and process of assessing study quality; (b) overall quality of evaluation research on school-based programs and how quality has changed over time; (c) study characteristics, including aspects of methods, that matter most for program outcomes; and (d) overall state of evaluation research on school-based prevention.
The Methodological Quality of Evaluation Research on School-based Prevention Programs: Where Are We Now?
Carol Hagen, Westat, carolhagen@westat.com
Scott Crosse, Westat, scottcrosse@westat.com
Michele Harmon, Westat, micheleharmon@westat.com
Samantha Leaf, ISA Associates, sleaf@isagroup.com
Rebekah Hersch, ISA Associates, rhersch@isagroup.com
Funding organizations and other stakeholders continue to embrace the idea that school-based prevention programming should be research based. Nearly five years ago, we assessed the methodological quality of evaluation research on the effectiveness of school-based youth ATOD and school crime prevention programs and found that the vast majority of published evaluation research examined failed to meet standard criteria for acceptable methodological quality. For the current study, we extended our systematic review of the literature to incorporate more recent literature from 2004 to 2008, including both published and unpublished evaluation research. We screened over 8,000 document abstracts and assessed the methodological quality of nearly 900 documents across 91 school-based prevention programs relevant to our study. Despite the inclusion of more recent and unpublished evaluation literature and a slightly modified methodology for assessing quality, our findings changed little from our previous systematic review.
Main Effects and Moderating Influences Among ‘Effective’ School-based Programs
Aaron Alford, Battelle Memorial Institute, alforda@battelle.org
Jim Derzon, Battelle Memorial Institute, derzonj@battelle.org
Estimates of intervention effectiveness result from the true effect of the program plus intervention- and study-introduced bias. To identify these sources of bias, a systematic review of meta-analyses documenting the consequences of intervention and researcher choices was conducted. Identified influences include intervention characteristics (e.g., implementation and fidelity, duration, and interactivity); population characteristics (e.g., risk status, household status, and sample age); and study-introduced influences (e.g., measurement characteristics, attrition, and compensatory services). The influence of these moderators on effectiveness estimates of substance use and antisocial behavior outcomes from programs of documented effectiveness was tested using meta-regression. Although programs are generally effective in improving youth outcomes, all outcomes other than tobacco use varied significantly with the type of intervention administrator and the presence of uncontrolled variation in implementation. Intervention duration somewhat influenced the effectiveness estimates of evaluations targeting smoking. These results reinforce the observation that programmatic choices influence the documented effectiveness of school-based interventions.
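As a point of reference for readers new to the method, the sketch below shows the basic shape of a precision-weighted meta-regression of study effect sizes on coded moderators. The data, moderator names (teacher_led, fidelity_issues), and weights are hypothetical illustrations, not the presenters' actual analysis or dataset.

    # Illustrative fixed-effect meta-regression: effect sizes regressed on
    # study-level moderators, weighted by inverse sampling variance.
    import pandas as pd
    import statsmodels.api as sm

    # Each row is one hypothetical study.
    studies = pd.DataFrame({
        "effect_size":     [0.21, 0.35, 0.10, 0.28, 0.05, 0.40],
        "variance":        [0.010, 0.020, 0.015, 0.012, 0.030, 0.018],
        "teacher_led":     [1, 0, 1, 1, 0, 0],   # 1 = delivered by classroom teacher
        "fidelity_issues": [0, 1, 1, 0, 1, 0],   # 1 = uncontrolled implementation variation
    })

    X = sm.add_constant(studies[["teacher_led", "fidelity_issues"]])
    # Precision weighting: studies with smaller sampling variance count more.
    fit = sm.WLS(studies["effect_size"], X, weights=1.0 / studies["variance"]).fit()
    print(fit.params)    # moderator coefficients
    print(fit.pvalues)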
Breadth and Depth of Evidence: Documenting Program Effectiveness Across a Range of Behavioral Outcomes
Jim Derzon, Battelle Memorial Institute, derzonj@battelle.org
Aaron Alford, Battelle Memorial Institute, alforda@battelle.org
To document the impact of available manualized programs on multiple substance use and violence outcomes, we coded evidence of program effectiveness from programs supported by multiple designed studies or implementations. Studies were screened on methodological rigor and evidence for program effectiveness was coded for eight outcomes. Of the 491 programs identified by canvassing the literature, only 46 were manualized, addressed one or more of our outcomes and were supported by multiple studies or implementations. Of these, 25 had no effect, negative effects, or mixed effects while 21 were effective for at least one outcome. Of the 21 programs reporting data for multiple outcomes, eight exhibited a significant positive effect on multiple outcomes. Only three programs reported significant positive findings across all reported outcomes. Interestingly, within reported outcomes program results are often mixed. This presentation will introduce these findings and is intended to stimulate discussion of their meaning and interpretation.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Valuing Stakeholders: Collaborative Evaluation of Professional Development Needs
Roundtable Presentation 722 to be held in BONHAM A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Wendy Bradshaw, University of South Florida, wbradsha@usf.edu
Jeremy Lake, University of South Florida, lakejp@mail.usf.edu
Abstract: There is a need to identify the professional development history and needs of early intervention personnel providing services to children in Florida. This need arises from multiple licensure paths, differing experiences, and variance in the lengths of time in the field. Provider groups are charged with evaluating staff to ensure and maintain the highest quality of services possible by trained, competent professionals. This evaluation was conducted to ascertain professional development history, strengths, and needs of early intervention personnel providing services through a provider group in Florida. To accomplish this, it was critical to recognize all participants as active stakeholders in the evaluative process. A collaborative approach incorporating focus groups to design an on-line survey was utilized to generate data to answer the questions about the state of training and training needs of service providers. Challenges and successes of this evaluation approach will be discussed.
Roundtable Rotation II: Wearing Different Hats: The Multiple Roles of an Evaluator
Roundtable Presentation 722 to be held in BONHAM A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Jeremy Lake, University of South Florida, lakejp@mail.usf.edu
Wendy Bradshaw, University of South Florida, wbradsha@usf.edu
Abstract: In conducting an evaluation, it is critical to consider the roles and credibility of the professionals employed as evaluators. According to the Program Evaluation Standards (Joint Committee on Standards for Educational Evaluation, 1994), there is a need to identify the qualifications of the evaluator as well as the social and political factors situated in the context of the evaluation to be conducted. In collaborative evaluations, evaluators may need to balance multiple roles while establishing relationships with collaborative members (Rodriguez-Campos, 2005). In a recently conducted collaborative evaluation of professional development needs in an early intervention organization, the evaluators held multiple roles, both internal and external, which had to be considered and balanced during evaluation design and implementation. These roles and their impact on the evaluation will be discussed, as well as the strategic decisions made to maximize the strengths and minimize the weaknesses of each.

Session Title: Beauty is in the Eye of the Beholder: Rationalizing Evaluation Needs Amongst Foundation Boards, Staff, and the Nonprofits They Fund
Panel Session 723 to be held in BONHAM B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
Abstract: Beauty is indeed in the eye of the beholder. Increasingly, funders are asking their grantees to conduct evaluations. What constitutes these evaluations varies from funder to funder and is often driven by the interests and experience of foundation staff. Board members also bring their own concerns and interests to bear, and their concepts of truth and justice can greatly influence the questions asked and the methodologies used to answer them. However, the nonprofit often has its own culture around evaluation and strategic learning, and its needs can differ from those of the funder. This panel will present the positions and perceptions of a quality evaluation held by a foundation’s Board, staff, and a nonprofit leader, along with the experiences of an evaluation contractor and the foundation’s Director of Evaluation in rationalizing these viewpoints to provide a quality evaluation to all concerned stakeholders.
The Director of Evaluation: Part Negotiator, Part Salesman, Part Educator, Part Evaluator, and 100% Facilitator
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
The Director of Evaluation position in a foundation is one of finding the balance between the needs of the foundation and the needs of the grantees. The presenter will provide lessons learned in balancing those needs, including interventions such as education of the stakeholders, setting appropriate expectations, developing a supportive policy for evaluation, development of requests for proposals for evaluation, and assessment of respondent proposals.
The Vice President for Program: Looking for Impact, But Also Looking to Learn Lessons That Can Be Applied to Other Initiatives
Martha Gragg, Missouri Foundation for Health, mgragg@mffh.org
The Vice President for Program position is often concerned that the programs funded are achieving their objectives. However, the Vice President is also interested in learning about the process of programs and identifying ideas that can be spread to other programs and initiatives funded by the foundation. The presenter will share how the questions that develop into those lessons are identified as well as what is done with the lessons learned. Further, the presenter will share how they help educate the Board on the needs of the organization around evaluation.
The Director of a Foundation: Strongly Interested in Knowing If the Funding Has Made an Impact, but Open to Learning More From Evaluations
George Gruendel, Missouri Foundation for Health, eggruendel@accubak.net
Directors of Foundations are primarily interested in impact. "Did we move the needle?" is a common phrase uttered by someone in this position. The presenter will share observations of their peers’ views on evaluation as well as their own. Additionally, the Director will share their experience with evaluation education conducted by the foundation’s Director of Evaluation as well as through presentations made by contractors to the Board of Directors.
The Contractor in the Middle: Balancing the Needs of Different Organizations to Meet the Needs of All the Primary Stakeholders
Carol Brownson, Washington University in St Louis, cbrownso@dom.wustl.edu
So you got the contract to conduct an external evaluation of an initiative for a foundation. You are working with foundation staff and their grantees to get useful information to everyone, and to develop and implement technical assistance that helps grantees with their internal evaluation while also supporting the questions you need to ask on behalf of the foundation. The presenter will share experiences and lessons learned in balancing these needs and wants, as well as in educating foundation staff and nonprofits about the practicalities of a quality evaluation.

Session Title: Systems and Logic Models as Complementary Tools for Educational Evaluation
Skill-Building Workshop 724 to be held in BONHAM C on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@pathfinderevaluation.com
Abstract: Educational programs are not self-contained. They are integrated, complex systems that interact with a larger social, political, and organizational context. If the function of educational evaluation is to make decisions about improvement as well as effectiveness, systems thinking can provide the broader perspective needed to understand the quality of what is going on. In this workshop, participants will learn how to integrate systems thinking and logic modeling to guide evaluation of educational programs. Using a simple systems model and a basic logic model, participants will engage in defining a system, developing a program logic model, and developing a basic evaluation approach that integrates both models in its overall design. Ultimately, it is hoped that participants will come away with new ideas about how to capture critical elements of process, context, and program outcomes in their own evaluation practice.

Session Title: Classroom Observations: Lessons Learned About Five Protocols From Five Multi-site Education Evaluations
Panel Session 725 to be held in BONHAM D on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Elizabeth Autio, Education Northwest, elizabeth.autio@educationnorthwest.org
Abstract: At the same time, classroom observations are incredibly time- and budget-intensive. It is therefore important that evaluators use these resources of time and budget wisely. Getting the most from classroom observations requires the selection and/or development of a protocol that is aligned with the content, goals, and intended outcomes of the program being studied. To help guide decisions about observation protocols, in this session, evaluators will share what they have learned about five different classroom observation protocols – three that are commercially available, and two that were developed for a specific evaluation – from five different multi-site education evaluations. The session will result in tangible lessons learned from each protocol that participants can “take away” after the session and apply to their own work.
Using the Classroom Assessment Scoring System (CLASS) to Measure Classroom Organization, Interactions, and Support
Jason Greenberg Motamedi, Education Northwest, jason.greenberg.motamedi@educationnorthwest.org
The Classroom Assessment Scoring System (CLASS) is a rubric of ten items that assess classroom organization and the quality of instructional and social interactions and support observed between teachers and students within a classroom (Pianta, La Paro & Hamre, 2008). The CLASS is available at the pre-K, elementary, and secondary levels; however, this presentation will concentrate on the pre-K and early elementary levels, describing the CLASS, its intended uses, and the training and certification procedure, as well as why the CLASS was selected for an evaluation of Early Reading First classrooms in Montana and how it was employed and reported. Finally, this presentation will discuss the lessons learned from using the CLASS as an observational tool alongside the ELLCO, as well as other factors that evaluators may want to take into account when considering this protocol.
Looking at Literacy With the Early Language and Literacy Classroom Observation Tool (ELLCO) and the Sheltered Instruction Observation Protocol (SIOP)
Elizabeth Autio, Education Northwest, elizabeth.autio@educationnorthwest.org
The Early Language and Literacy Classroom Observation Tool (ELLCO) (Smith, Brady & Anastasopoulos, 2008) is a rubric of 19 items that assess the quality of the classroom environment and teacher practices in literacy. The Sheltered Instruction Observation Protocol (Echevarria & Short, 2001; Echevarria, Vogt, & Short, 2007) is a research-based model of instruction for English language learners, which also includes an observation checklist of 30 items. Both of these protocols are used for feedback loops among educators, but are increasingly employed for evaluation data collection purposes. This session will discuss what the protocols are, why they were selected by the evaluators, cost, ease of use, and lessons learned, including feedback on the appropriateness of developer-created rubrics in the evaluation context. This information will be particularly helpful to others evaluating literacy programs and programs addressing the needs of English language learners.
Development of an Observation Protocol for a Reading Intervention
Caitlin Scott, Education Northwest, caitlin.scott@educationnorthwest.org
Developed in 1991 by Dee Tadlock, Read Right is a reading intervention program designed to improve the reading of struggling students of all ages. Education Northwest was hired by the Sherwood Foundation in 2009 to evaluate Read Right in Omaha public middle and high schools. As part of this evaluation, Education Northwest created an observation protocol. Because Read Right is a relatively scripted program delivered in a setting with one adult tutor per five students, the observation protocol is primarily a count of activities. For example, the observer counts and categorizes each statement the tutor makes that corrects the student’s reading. The description of the development of this protocol—from first draft to field-tested product—will help the audience understand how a protocol can be developed based on the content of the observed program. This information will be particularly helpful to others evaluating scripted educational programs.
Development of an Observation Protocol for Adolescent Literacy Interventions
Kari Nelsestuen, Education Northwest, kari.nelsestuen@educationnorthwest.org
The federal Striving Readers program requires evaluators to measure fidelity of implementation of selected reading interventions. Since valid and reliable observation protocols for adolescent literacy do not exist, the presenter will describe the protocols developed to measure fidelity of implementation of two adolescent literacy interventions – Read to Achieve (Marchand-Martella & Martella, 2010) and Phonics Blitz (Farrell & Hunter, 2007). The presentation will focus on the development process, including how evaluators worked closely with the publishers to identify and define the critical components of the program model, quantify these components, and field test multiple versions of the instruments (Mowbray et al., 2003). The final protocols are a combination of activity counts and ratings of the identified critical components, such as teacher support. Presenters will also describe how the protocol is used to calculate fidelity ratings of high, medium, and low implementation.

Session Title: Interim Evaluation: The Quality of Research and the Quality of Evaluation - Case Study of the FP7 Interim Evaluation
Panel Session 726 to be held in BONHAM E on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Neville Reeve, European Commission, neville.reeve@ec.europa.eu
Abstract: The 7th Framework Programme was a major step forward for the EU’s multiannual framework programmes for research and technology development. With a more than doubling of the overall budget, a duration of 7 rather than the previous 4 years, and a vastly expanded range of research and policy related objectives, FP7 demonstrates the increasing importance of research amongst the policy aims of the EU. In keeping with these changes, policy and decision makers have asked for a much enhanced approach to evaluation and monitoring. Major evaluation exercises are held every two to three years, supported annually by many different types of specific evaluation study at the level of the component research activities. This session looks at one such large-scale exercise: the FP7 interim evaluation, which specifically addresses questions of research quality, linking these with attempts to measure performance through progress against objectives.
Case Study of the FP7 Interim Evaluation: Notions of Quality in Evaluation Design
Neville Reeve, European Commission, neville.reeve@ec.europa.eu
The interim evaluation of FP7 has involved a combination of different evaluation methods at different levels including modified peer review, analytical studies, statistical analysis and self-assessment. The aim is to provide good overall coverage and a balanced approach between the analysis of issues in depth and at the macro level. The presentation will expound on the benefits gained from the different approaches and the different notions of quality in designing the evaluation.
Evaluation Quality and Long-term Impact
Peter Fisch, European Commission, peter.fisch@ec.europa.eu
The FP7 interim evaluation will not only provide a first look at the implementation of the programme; it will also lay the basis for future EU research activities. Developing the terms of reference, the timing and the implementation of the evaluation has involved difficult trade-offs between getting ‘results now’ versus the types of analysis that are better carried out over the long term. The presentation will explore these issues and examine the results of the evaluation and how these will be used in policy making.
The Use of Independent Experts to Verify Quality and Strengthen Evaluation Process
Wolfgang Polt, Joanneum Research, wolfgang.polt@joanneum.ac.at
The FP7 interim evaluation brings together a modified peer review style exercise based on the use of experts from across the spectrum of academic, public and commercial life, as well as experts in the field of evaluation and science policy, to support the process of generating data and analysis and to provide quality control. The presentation will examine the different roles played by the ‘independent experts’, examining questions of selection, role and their impact on the process.
The Role of Stakeholders on the Quality of Evaluation Design, Implementation and Impact
Iain Begg, London School of Economics, 
The presentation will examine the different approaches for connecting with stakeholders in the FP7 interim evaluation. These include the preparation of the terms of reference, gathering evidence for the evaluation itself, and disseminating the results of the evaluation to different audiences, including policy makers and decision makers. The results of a specific exercise used in the evaluation to ask the opinions of stakeholders will be described and analysed.

Session Title: Use of Evaluation: Overcoming the Challenges
Panel Session 727 to be held in Texas A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Joseph Wholey, University of Southern California, joewholey@aol.com
Discussant(s):
Joseph Wholey, University of Southern California, joewholey@aol.com
Abstract: The demand for evaluation information is growing. Congress, state legislatures, local legislative bodies, public agencies, foundations, nonprofit organizations, and other funding agencies are increasingly demanding information on how program funds were used and what those programs produced. Use of evaluation findings remains problematic, however. In this panel, presenters will identify challenges that agencies and evaluators face in producing credible, useful evaluations as well as the challenges they face in getting evaluations used. Each presenter will then suggest ways to overcome the challenges, produce credible evaluations, and get evaluation findings used to support policy and management decisionmaking.
The Challenge of Producing Credible Evaluations
Kathryn Newcomer, George Washington University, newcomer@gwu.edu
Kathryn Newcomer’s presentation will discuss fundamental elements that evaluators and organizations sponsoring evaluations should consider before undertaking any evaluation work, including how to: select programs to evaluate; match evaluation approaches to information needs; identify key contextual elements shaping the conduct and use of evaluation; produce the methodological rigor needed to support credible findings; design responsive and useful evaluations; and get evaluation information used.
The Challenge of Producing Useful Evaluations
Harry Hatry, Urban Institute, hhatry@urban.org
Harry Hatry’s presentation will describe how pitfalls that may be encountered before and during evaluations may hinder the validity, reliability, credibility, and utility of evaluations. He will suggest how these pitfalls can be avoided and how advances in technology can contribute to credible, useful evaluation work. Hatry will then discuss the selection and training of evaluators, quality control of an organization’s entire evaluation process, and how to overcome challenges in getting evaluation findings used to improve programs and services.
Contracting for Credible, Useful Evaluations
James Bell, James Bell Associates, bell@jbassoc.com
James Bell’s presentation will describe how government agencies and other evaluation sponsors can procure needed evaluation services and products from appropriately qualified evaluation contractors. His advice will focus on five areas: creating a feasible, agreed-upon concept plan; developing a well-defined request for proposals (RFP); selecting a well-qualified evaluator team that will fulfill the sponsor’s intent; constructively monitoring interim progress; and ensuring the quality and usefulness of major evaluation products.
Current Challenges in Performance Monitoring
Theodore Poister, Georgia State University, tpoister@gsu.edu
Theodore Poister’s presentation will identify challenges in developing performance monitoring systems that add value – and then suggest that those trying to develop such systems should: clarify the purpose and intended uses of the monitoring system; build ownership by involving managers and decisionmakers; generate leadership to develop buy-in; identify “results owners”; delegate increased authority and flexibility in exchange for accountability for results; establish a regular process for reviewing performance data; initiate follow-up when persistent problems emerge from performance data; and monitor and improve the usefulness and cost-effectiveness of the monitoring system itself. After identifying an emerging challenge (that of monitoring the performance of programs that are delivered through networked environments), Poister will suggest several ways in which those developing monitoring systems in those environments can increase the likelihood that the monitoring systems will add value; for example, by working to develop consensus among key stakeholders regarding goals, measures, and data collection systems; encouraging networked systems to use available statistical data; using logic models to clarify relationships among different agencies’ activities, outputs, and outcomes; and incorporating performance data into grant processes, incentive systems, and recognition processes throughout the network.

Session Title: Quality in Evaluation: How Do We Know It When We See It in Qualitative Evaluations?
Panel Session 728 to be held in Texas B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Qualitative Methods TIG
Chair(s):
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
Abstract: What is quality in evaluation and how do we know it when we see it? As editors of the forthcoming book, Qualitative Inquiry in the Practice of Evaluation, we have had to wrestle with our assumptions about evaluation theory, evaluation practice and what constitutes quality in evaluation. In planning the book and reviewing submissions, we have confronted questions of quality as they relate to contextual issues, data issues, and evaluator roles and responsibilities. In the process, we have wondered: Are there elements of quality that are unique to qualitative evaluations? In this session, we share, in a dialogic fashion, some of what we have learned in that process and discuss the complexities of gauging evaluation quality. Participants in this session will be encouraged to join in the conversation and share their own perspectives on what constitutes quality in qualitative evaluation.
Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
Leslie Goodyear lives a double life: She is a Senior Research Scientist at Education Development Center in Newton, MA, but is currently on loan to the National Science Foundation where she works as a program officer in the Division of Research on Learning. In her role at NSF, she reviews grant proposals, funds worthy projects and advises the Division and its programs about project and program evaluation. In her spare time, she consults on multiple evaluation projects and serves as a dissertation proposal reviewer with an eye toward qualitative methods. Dr. Goodyear is the current section editor for the Ethics section of the American Journal of Evaluation and has recently completed her term on the Board of Directors for AEA. Her primary evaluation interests include integration of high quality qualitative inquiry into evaluation studies, creative and innovative representations of evaluation findings to diverse audiences, and evaluation capacity building. Every conversation she has with her co-editors leaves her thinking new thoughts and exploring new territory related to evaluation quality and qualitative evaluation.
Jennifer Jewiss, University of Vermont, jennifer.jewiss@uvm.edu
Jennifer Jewiss is a Research Assistant Professor in the Department of Education at the University of Vermont. Dr. Jewiss specializes in qualitative research and evaluation that is often developmental or formative in nature. Over the past twelve years, she has conducted numerous studies of environmental, human service, education, and health initiatives supported through federal, state, and foundation funding. Several recent projects have been conducted in partnership with the Conservation Study Institute, a research and think tank of the U.S. National Park Service. Dr. Jewiss has taught a graduate course on program evaluation that introduces education, health, and human service providers to the field. She is frequently asked to serve as a methodological advisor for qualitative research and evaluation projects being carried out by colleagues in academic and governmental institutions. She credits her students, colleagues, and clients with raising many compelling questions about what constitutes quality in the realm of evaluation.
Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
Janet Usinger is an Associate Professor at the University of Nevada, Reno. She has an outreach teaching responsibility, working with K-12 and higher education institutions throughout the state in the areas of P-16 articulation and leadership. Of particular interest are first-generation college-going students from both urban and rural settings; she has had overall responsibility for the evaluation of the Nevada State GEAR UP project since 2001. Her research interests include the perceptions, understanding, and relationships that individuals hold regarding the educational institutions with which they are affiliated. She has formally and informally advised numerous doctoral students whose dissertations involve qualitative inquiry. Prior to joining the Department of Educational Leadership, she was an administrator for Cooperative Extension at the state and national level. She has particularly valued the collaboration among the co-editors of this book in that the conversations have stretched her thinking about both evaluation and qualitative inquiry.
Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org
Eric Barela is the Chief Organizational Performance Officer at Partners in School Innovation, a San Francisco-based nonprofit working to enable public schools in low-income areas to achieve equity through school-based literacy reform. In this role, he is working on implementing rigorous and meaningful qualitative evaluation into the organization’s performance management system. Previously, Dr. Barela spent seven years working for the Los Angeles Unified School District’s Program Evaluation and Research Branch, where he conducted several qualitative evaluations designed to determine effective school-based practices in high-performing, high-poverty schools. He has also taught teachers and administrators to use both qualitative and quantitative research methods to drive improvement in instruction and leadership. Eric currently serves on the Editorial Advisory Board of the American Journal of Evaluation. He considers himself very fortunate to be collaborating with his co-editors, who always advance his thinking around evaluation quality.

Session Title: A Mixed Methods Assessment of the Impact of Three Dropout Prevention Strategies on Student Academic Achievement in Grades 6th through 12th in Texas
Multipaper Session 729 to be held in Texas C on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the
Chair(s):
Thomas Horwood, ICF International, thorwood@icfi.com
Discussant(s):
Candace Macken, Texas Education Agency, candace.macken@tea.state.tx.us
An Evaluation of the Intensive Summer Program (ISP) in Texas Schools
Presenter(s):
Rosemarie O'Conner, ICF Macro, ro'conner@icfi.com
Aikaterini Passa, ICF International, apassa@icfi.com
Abstract: The Intensive Summer Program (ISP) provides summer instruction for "at risk" students in Texas with the goals of reducing dropout, increasing school achievement, and promoting college and workforce readiness skills among students. Using a mixed methods approach, this presentation examines statistical analyses of student achievement data and survey results from school administrators, teachers, and students. Hierarchical linear models (HLM) are used to examine standardized test results and determine whether the ISP positively affected student academic achievement over time. Student surveys are used to further understand the relationships uncovered in the statistical analyses, while surveys from school administrators and teachers shed light on additional positive benefits of the ISP. Concluding remarks focus on how case study research in select schools was used to provide context for the overall findings from all participating grantee stakeholders.
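For readers less familiar with HLM, the sketch below illustrates the general form of such a growth model – repeated test scores nested within schools, with a time-by-participation interaction – using simulated data. All variable names and values are hypothetical; this is a minimal illustration, not the evaluation's actual model or dataset.

    # Minimal sketch of a two-level growth model (observations within schools),
    # fit with statsmodels; data are simulated for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = []
    for school in range(20):                      # 20 hypothetical schools
        school_effect = rng.normal(0, 5)          # school-level intercept shift
        for student in range(30):                 # 30 students per school
            isp = int(rng.integers(0, 2))         # 1 = ISP participant
            for year in (0, 1):                   # two testing occasions
                score = (500 + school_effect + 10 * year
                         + 4 * isp * year + rng.normal(0, 15))
                rows.append({"school_id": school, "isp_participant": isp,
                             "year": year, "test_score": score})
    scores = pd.DataFrame(rows)

    # Random intercept for school; the year-by-participation interaction asks
    # whether ISP participants' scores grow faster over time.
    model = smf.mixedlm("test_score ~ year * isp_participant",
                        data=scores, groups=scores["school_id"]).fit()
    print(model.summary())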

Session Title: Metrics for the National Institute of Environmental Health Science (NIEHS): Measuring Outcomes to Advance Partnerships for Environmental Public Health
Panel Session 730 to be held in Texas D on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Christie Drew, National Institute of Environmental Health Sciences, drewc@niehs.nih.gov
Discussant(s):
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Abstract: The purpose of this session is to present and discuss evaluation metrics for two activities conducted by grantees as part of the NIEHS Partnerships for Environmental Public Health (PEPH) program, an umbrella program focused on community partnerships for environmental health research. Environmental health researchers, healthcare professionals, educators, and others with an interest in the effects of environmental exposures on public health identified the lack of standardized evaluation tools and metrics as one of the biggest challenges for the program. In response, the PEPH team developed logic models and identified metrics to evaluate PEPH initiatives around major program themes. This session will focus on two of these themes: partnerships, and the development and dissemination of training/education/curricula materials. The approaches discussed help establish a common language among groups involved in PEPH. Importantly, these approaches represent innovative opportunities to measure progress and success in PEPH programs, and they can be easily translated to other program contexts.
Metrics and Examples for Evaluating Education and Training Evaluation in Environmental Public Health Programs
Helena L Davis, National Institute of Environmental Health Sciences, davishl@niehs.nih.gov
Stephanie Shipp, Science and Technology Policy Institute, sship@ida.org
Beth Anderson, National Institute of Environmental Health Sciences, tainer@niehs.nih.gov
Sharon Beard, National Institute of Environmental Health Sciences, beard1@niehs.nih.gov
Caroline Dilworth, National Institute of Environmental Health Sciences, dilworthch@niehs.nih.gov
Christie Drew, National Institute of Environmental Health Sciences, drewc@niehs.nih.gov
Liam O'Fallon, National Institute of Environmental Health Sciences, ofallon@niehs.nih.gov
Ashley Brenner, Science and Technology Policy Institute, atbrenne@ida.org
Cara O'Donnell, Science and Technology Policy Institute, codonne@ida.org
Sarah Ryker, Science and Technology Policy Institute, sryker@ida.org
Training and education are primary communications strategies for NIEHS PEPH programs. This presentation will focus on metrics and examples that result from creating and implementing training and curricula in the community for environmental health programs. The activities include communicating with partners to establish education partnerships and priorities, and developing and implementing the curricula. Outputs from these activities include access to, attendance at, and information uptake from training and educational events, with the goal of growing a proactive community. Outcomes include increased awareness of environmental public health issues and training opportunities, secondary information transfer, informed decision making, and identification of future training needs.
Evaluating Partnerships in Environmental Health Programs
Ashley Brenner, Science and Technology Policy Institute, atbrenne@ida.org
Beth Anderson, National Institute of Environmental Health Sciences, tainer@niehs.nih.gov
Sharon Beard, National Institute of Environmental Health Sciences, beard1@niehs.nih.gov
Helena L Davis, National Institute of Environmental Health Sciences, davishl@niehs.nih.gov
Caroline Dilworth, National Institute of Environmental Health Sciences, dilworthch@niehs.nih.gov
Christie Drew, National Institute of Environmental Health Sciences, drewc@niehs.nih.gov
Liam O'Fallon, National Institute of Environmental Health Sciences, ofallon@niehs.nih.gov
Sarah Ryker, Science and Technology Policy Institute, sryker@ida.org
Stephanie Shipp, Science and Technology Policy Institute, sship@ida.org
The creation of partnerships is an important way to involve stakeholders in the process of addressing environmental public health challenges. This presentation focuses on metrics and examples of effective partnerships. Activities such as identifying partners, building relationships with partners, involving partners in framing research questions and in the research process, developing and implementing communication outreach tools, and maintaining and improving partnerships were identified as important. Outputs include reciprocal communications, investment in the project mission, translation of scientific information among partners, and community involvement in research. Outcomes include innovation and cultural change, sustainable partnerships, increased awareness of environmental health and research, and the capacity of partners to identify changes and to form future collaborations.
Partnerships and Training: The United Steelworkers Health and Safety Department Worker Training and Education Program
Thomas McQuiston, United Steelworkers Health and Safety Department, tmcquiston@uswtmc.org
Tom McQuiston is an NIEHS Worker Education and Training Program grantee at the Tony Mazzocchi Center, the training, research, and evaluation arm of the United Steelworkers Health and Safety Department. The center is concerned with communicating results at a partner, regional, and national level and protecting the health of workers through training and education programs. The strong partnerships formed in this program have helped it thrive as measured by the increasing number of trained workers in the Union and elsewhere. Dr. McQuiston will briefly describe his program and then comment on the applicability of the new PEPH Evaluation manual to his work with the United Steelworkers Health and Safety Department Worker Training and Education program.
Environmental Health and the Navajo Nation: Products, Dissemination, and Partnerships
Johnnye Lewis, University of New Mexico, jlewis@cybermesa.com
With funding from EPA and NIEHS, Johnnye Lewis, director of the Community Environmental Health Program at the University of New Mexico, participates in a number of translational research projects, a large part of which depends on the involvement of partners in the communication of public health messages. Dr. Lewis will first describe her program and then comment on the applicability of the new PEPH Evaluation Manual to her work with research to aid uranium cleanup in the Navajo Nation.

Session Title: Strategies for Developing Evaluation Capacity in Educational Partnerships
Demonstration Session 731 to be held in Texas E on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Evaluation Use TIG, the Pre-K - 12 Educational Evaluation TIG, the Indigenous Peoples in Evaluation TIG, and the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Ed McLain, University of Alaska, Anchorage, afeam1@uaa.alaska.edu
Susan Tucker, Evaluation & Development Associates, sutucker1@mac.com
Abstract: Building the capacity of school-based "data teams" to use improvement-oriented evaluation methodologies across diverse contexts, while exhorted by funding agencies, is rarely evaluated. The presenters have been engaged in capacity building since 2004-05 in collaboration with a USDE-funded Teacher Quality Enhancement (TQE) grant. Grounded in the context of nine Alaskan high-need urban and rural districts experiencing a crisis in attracting (and holding) quality teachers, this session will focus on demonstrating methods for institutionalizing an infrastructure for sustainable data teaming and evaluation use. Participants will gain a clearer understanding of the indicators of successful data use development and partnering between a university and project schools. The session will begin with an overview of the past six years’ experience with data teaming, and address emerging findings and challenges in relation to the three points posed under the “Relevance” section. Finally, we present and discuss a data-teaming template with rubrics.

Session Title: Studies of Evaluation Practice Across Multiple Contexts
Multipaper Session 732 to be held in Texas F on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Deborah Grodzicki, University of California, Los Angeles, dgrodzicki@gmail.com
An Examination of the State and Quality of Monitoring and Tracking (M&T) for Contemporary Program Evaluation
Presenter(s):
Antionette Stroter, University of Iowa, a-stroter@uiowa.edu
Douglas Grane, University of Iowa, douglas-grane@uiowa.edu
Abstract: Monitoring and Tracking (M&T) is an integral yet overlooked component of evaluation. While evaluators have much experiential knowledge about M&T, the profession lacks research, empirical studies, and peer reviewed literature on M&T. We examine the state of M&T through the following objectives: 1) conceptualizing M&T, 2) examining the use of M&T in different program contexts, 3) investigating how M&T meets evaluation purposes, 4) understanding M&T’s role in program implementation, and 5) exploring the effects of M&T on decision making processes. We conduct document analysis using ATLAS.ti for 1909 articles in the American Journal of Evaluation, New Directions for Evaluation, Evaluation and Program Planning, Educational Evaluation and Policy Analysis, Journal of MultiDisciplinary Evaluation, Evaluation, and Studies in Educational Evaluation, and transcripts from interviews with evaluation thought leaders. We find that the quality of M&T varies substantially between and within programs over time.
Inquiry on the State of Evaluation Practice, Experience, and Use in Kenya
Presenter(s):
Douglas Grane, University of Iowa, douglas-grane@uiowa.edu
Karen Odhiambo, University of Nairobi, karenodhiambo1@yahoo.co.uk
Abstract: While most evaluation theories and scholarship originate from the perspectives of evaluators working in or based out of countries in the Global North, such as the U.S., evaluation professionals in countries in the Global South carry out substantial evaluation work. Evaluation is an integral part of international development programs and projects in Kenya. Additionally, Kenyan government and non-governmental programs integrate evaluation into their activities. Kenya has an active local community of evaluators, is home to the Kenya Evaluation Association, and Kenyan evaluators play active roles in international evaluation organizations such as the African Evaluation Association (AfrEA). This paper explores practices, uses, highlights, and challenges of evaluation in Kenya. We use document analysis of interview transcripts with Kenyan evaluators and evaluation stakeholders to represent the current state of evaluation in Kenya. This paper provides an opportunity to develop new evaluation theories incorporating the experiences of evaluators from the Global South.
The Role of Program Evaluation in Improving and Sustaining State-Supported School Counseling Programs: A Cross Case Analysis of Best Practices
Presenter(s):
Ian Martin, University of San Diego, imartin@sandiego.edu
Abstract: Recent work has shown that many state-supported school counseling programs across the United States have not developed working statewide evaluation schemas. This study examined two exemplary cases of state-level school counseling program evaluation. Mixed-method case studies were created and then analyzed across cases to reveal common themes and best practices. The findings indicated that these cases were able to build evaluation capacity within very different contexts. Implications for increasing evaluation capacity and use within other state-supported school counseling programs were discussed.

Session Title: Mapping Extension’s Networks: Using Social Network Analysis to Explore Extension Outreach
Demonstration Session 733 to be held in CROCKETT A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Scott Chazdon, University of Minnesota, schazdon@umn.edu
Tom Bartholomay, University of Minnesota, barth020@umn.edu
Mary Marczak, University of Minnesota, marcz001@umn.edu
Kate Walker, University of Minnesota, kcwalker@umn.edu
Abstract: A major component of the infrastructure supporting Extension’s educational mission is the network of relationships connecting Extension professionals to organizations across the state. Yet few formal evaluations have explored these Extension-wide connections or networks with external organizations. Given the importance of relationships in Extension’s delivery strategies, University of Minnesota Extension recognized a need to better understand the breadth and depth of its organizational network using Social Network Analysis (SNA) methodology. This demonstration session reports key insights on the methodology, project planning, results, and uses of the Mapping Extension’s Networks study. The presenters will demonstrate how the data were collected (using online survey software), analyzed (using Excel and SPSS), and displayed (using UCINET, a Windows software product developed for SNA). Four categories of uses for the data are described: 1) accountability and reporting; 2) promoting internal collaboration; 3) generating new resources; and 4) creating the groundwork for Extension-wide impact studies.
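To give a concrete sense of the kind of analysis the session describes, the sketch below builds a small organizational network and computes two common centrality measures. It uses the open-source networkx library rather than the presenters' UCINET workflow, and the educators and partner organizations listed are hypothetical.

    # Toy social network analysis: ties between Extension staff and partner
    # organizations, summarized with degree and betweenness centrality.
    import networkx as nx

    ties = [
        ("Educator A", "County Health Dept"),
        ("Educator A", "School District 12"),
        ("Educator B", "County Health Dept"),
        ("Educator B", "Farm Bureau"),
        ("Educator C", "Farm Bureau"),
        ("Educator C", "School District 12"),
    ]

    G = nx.Graph()
    G.add_edges_from(ties)

    # Degree centrality highlights the most-connected partners; betweenness
    # flags nodes that bridge otherwise separate parts of the network.
    print(nx.degree_centrality(G))
    print(nx.betweenness_centrality(G))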

Session Title: Critical Concepts for Introductory Evaluation Courses: Multiple Perspectives- Part 2
Demonstration Session 734 to be held in CROCKETT B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Katye Perry, Oklahoma State University, katye.perry@okstate.edu
David Diehl, University of Florida, dcdiehl@ufl.edu
Maura Harrington, Center for Nonprofit Management, mharrington@cnmsocal.org
Shana Pribesh, Old Dominion University, spribesh@odu.edu
Faye Belgrave, Virginia Commonwealth University, fzbelgra@vcu.edu
John Stevenson, University of Rhode Island, jsteve@uri.edu
Kathleen Norris, Plymouth State University, knorris@plymouth.edu
Melissa Chapman, University of Iowa, melissa-chapman@uiowa.edu
David Nevo, Tel Aviv University, dnevo@post.tau.ac.il
Anne Hewitt, Seton Hall University, hewittan@shu.edu
Arthur Hernandez, Texas A&M University, art.hernandez@tamucc.edu
Abstract: This Demonstration Session represents Part 2 of two Demonstration Sessions targeted at those who teach program evaluation or have an interest in doing so. It is unique in that it will present strategies for teaching the top five concepts identified as critical in introductory evaluation courses. These concepts will be presented by evaluation instructors representing multiple perspectives. In addition to addressing these critical concepts, the presenters will also share unique instructional strategies for teaching other topics.

Session Title: The Role of Monitoring in Evaluation Quality
Panel Session 735 to be held in CROCKETT C on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Chung Lai, International Relief & Development, clai@ird-dc.org
Abstract: This panel features perspectives on the role of monitoring in evaluation quality in international settings, presented by three panelists working in M&E for international development. Monitoring contributes to improved program management and evaluations. Practitioners and independent evaluators have many overlapping data needs, and good monitoring can address many of them. Quality monitoring data – data that track key performance indicators and targets – are key to evaluations. Developing monitoring plans and systems at the design stage of programs has many advantages. Planning for monitoring provides an opportunity to analyze the planned program logic model and indicators to determine if, in fact, the plans are measurable and can be monitored. Monitoring plans can also accelerate thinking about data collection instruments that must be developed for monitoring and later evaluation efforts. Finally, regular reporting and utilization of monitoring data by program implementers and funders fosters a culture of evaluation within organizations.
Data Collection Instruments for Quality Evaluation
Chung Lai, International Relief & Development, clai@ird-dc.org
This session will discuss the linkage between quality evaluation and quality data through data collection instruments, using a project in Indonesia as a case study. Well-designed and carefully reviewed data collection instruments increase evaluation quality by improving the quality of the data collected. In the rush to get activities started, data collection instruments are often developed on an ad hoc basis or from a template. Some projects do not have the capacity to customize the template; others create something that is sufficient for them but may not meet all the project’s needs or match the higher-level indicators. Given a performance monitoring plan, project field staff may still have questions: What do I use to collect the data? What format is best to link with our database? What user-friendly format is appropriate for grantees who will collect some of the data too?
Monitoring Tools and Systems: Contributing to Better Information for Program Teams and Evaluators
Maurya West Meiers, World Bank, mwestmeiers@worldbank.org
This session presents a range of tools and systems that have been used by program teams for their routine monitoring and decision-making, as well as by evaluators in the range of evaluation types conducted. Featured will be the following: examples of monitoring systems that inform on the entire program logic (from inputs to impacts); the use of key performance indicators for managing programs and decision-making; and data collection tools to complement the systems. It will present examples of how and under what circumstances monitoring data has been used in evaluations. A range of examples of tools and systems from international contexts will be provided, from those used by individual projects to those used by national and regional ministries, particularly in Latin American countries.
Monitoring: An Evaluator’s Friend or Foe
Cheyanne Scharbatke-Church, Tufts University, cheyanne.church@tufts.edu
This session will explore the impact of monitoring on evaluation quality from the perspective of an external evaluator. It will identify the ways in which monitoring brings value to the evaluative process, seeking to draw out key success factors. It will also explore the opposite side of the coin, identifying ways in which monitoring can negatively affect evaluations. Using a broad definition of quality, the paper will discuss where monitoring detracts from a quality process. The paper will draw from experience conducting evaluations in international development and peacebuilding and, where possible, will identify the nuances each field brings to this discussion.

Session Title: Infusing Evaluation Theory Into Practice in Government Safety Programs: Process Examples From the United States Department of Transportation
Panel Session 736 to be held in CROCKETT D on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Discussant(s):
Joyce Ranney, Volpe National Transportation Systems Center, joyce.ranney@dot.gov
Abstract: Al Roth, a Harvard economist, once said, "In theory, there is no difference between theory and practice, but in practice there is a great deal of difference." In evaluation, quality is what differentiates theory from practice. This session chronicles how teams of professional evaluators have endeavored to support quality evaluation in three transportation safety programs and to communicate evidence-based messages to their respective constituencies. This panel session will discuss what happened when theory was put into practice at different phases of three transportation safety programs. The goal of quality was reflected in the attention given to intended use by intended users during the program process, rather than in retrospect after program implementation. All three programs aimed for effective utilization of the knowledge they had amassed. Presenters will share a few remarkable successes along with "other" lessons learned.
Planning for Quality: Creating an Evaluation-Focused Safety Council for the United States Department of Transportation
Michael Coplen, United States Department of Transportation, michael.coplen@dot.gov
Stephen Popkin, Volpe National Transportation Systems Center, stephen.popkin@dot.gov
In October 2009, Secretary of Transportation Ray LaHood established the U.S. Department of Transportation Safety Council to tackle critical transportation safety issues facing the department's 10 operating administrations. Its mission is "to serve as DOT's safety advocate and to bring together each part of DOT in addressing transportation safety as a critical national health issue." Its guiding principles reveal its evaluative underpinnings, including systematic data-driven decision-making, open and frank dialog, and a transparent process. From its inception, a utilization-focused evaluation approach was taken, emphasizing stakeholder engagement strategies to help ensure cross-modal collaboration among senior-level decision makers and resulting in the identification and prioritization of high-priority cross-modal safety topics needing to be addressed.
Promoting Positive Utilization of Risk Analysis Findings
Juna Snow, Innovated Consulting, jsnow@innovatedconsulting.com
Debbie Bonnet, Fulcrum Corporation, dbonnet@fulcrum-corp.com
Switching Operations Fatality Analysis (SOFA) is conducted by a group of ten representatives of labor, management, and government who have evolved a high-quality methodology for identifying possible contributing factors and extenuating circumstances involved in individual deaths. They issued reports of aggregate findings and associated recommendations in 1999 and 2004. In spite of extensive dissemination efforts, the group was disappointed in the railroad industry's response - in particular, the translation of recommendations into punitive rules, antithetical to the spirit of their intended use: promoting safe choices to save lives. As they prepared for another report in 2010, the group requested the assistance of a utilization-focused evaluation team.
From Research to Practice: The Utility of Evaluation With International Fatigue Management Conference in Transportation
Michael Coplen, United States Department of Transportation, michael.coplen@dot.gov
Stephen Popkin, Volpe National Transportation Systems Center, stephen.popkin@dot.gov
Conferences that focus on specific issues within a profession present opportunities for awareness and convergence of new ideas. The International Conference on Fatigue Management in Transportation (March 2009, Boston) brought together over 275 industry leaders, academicians, government policy makers, and transportation professionals. This presentation will discuss the lessons learned when the theory and methods of evaluation were introduced to researchers and developers of fatigue management interventions. The strategy involved aligning individual evaluators with a conference strand to observe sessions and then report, at a conference-closing symposium, on the ways and extent to which evaluation theory and tools could add value to the R&D work presented at the conference. This presentation will share reflections on the resistance raised by conference attendees and the barriers that emerged relating to the utility of evaluation in that context.

Session Title: Emerging Evaluators and the Future of Culturally Responsive Evaluation
Panel Session 737 to be held in SEGUIN B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Helga Stokes, Duquesne University, stokesh@duq.edu
Discussant(s):
Daniel McDonald, University of Arizona, mcdonald@ag.arizona.edu
Karol K Harris, University of Texas, Austin, kk.harris@mail.utexas.edu
Michelle Jay, University of South Carolina, mjay@sc.edu
Ricardo Millett, Ricardo Millett and Associates, ricardo@ricardomillet.com
Abstract: The Robert Wood Johnson Foundation (RWJF) Evaluation Fellowship Program seeks to diversify the evaluation field by providing emerging scholars with the necessary skills and training to become program evaluators. Each RWJF fellow will share her professional and academic experiences in the following areas: gaining technical expertise, training and development opportunities, and cultural competence in evaluation. Three discussants will also share practical examples from the field. The RWJF fellows and discussants will explore the implications for continuing to diversify the evaluation field during this interactive session.
Teaching a New Dog Old Tricks: Incorporating Evaluation Skills Into Academic and Work Settings
Natalie Alizaga, Amherst H Wilder Foundation, nalizaga@gmail.com
Monica Getahun, OMG Center for Collaborative Learning, monica.getahun@gmail.com
Denise Herrera, Decision Information Resources Inc, herrera_denise@hotmail.com
Afabwaje Jatau, National Cancer Institute, abwaje@yahoo.com
Natalie Alizaga currently serves as a Robert Wood Johnson Foundation Evaluation fellow. As a student in the field of public health, she often found it difficult to find courses that focused specifically on evaluation, and she encountered few opportunities to build evaluation skills in her research work. As a woman of color, she has sought methods of incorporating culturally responsive evaluation into public health practice and research. Situations where programs involve ethnically diverse participants call for the creation of multiethnic evaluation teams to increase the chances of truly hearing the voices of underrepresented students (NSF, 2002). An emerging scholar in the field of evaluation, Ms. Alizaga will discuss methods of building evaluation skills into academic as well as work settings. She will also share her thoughts on remaining culturally competent throughout the evaluation process in order to build the diverse teams necessary in multiethnic evaluation.
Culturally Responsive Evaluation: Engaging Community Members and Stakeholders Toward Meaningful Policy and Systems Change
Monica Getahun, OMG Center for Collaborative Learning, monica.getahun@gmail.com
Monica Getahun is a Robert Wood Johnson Foundation Evaluation fellow at the OMG Center for Collaborative Learning. She has previous experience evaluating the efficacy of programs that primarily serve communities disproportionately affected by health inequities. Through her experience in community planning of human services, she has come to value the importance of culturally responsive evaluation as a means of engaging community members and stakeholders toward meaningful policy and systems change. Culturally responsive evaluation ensures that individuals from all sectors have the chance for input, as those in the least powerful positions can be the most affected by the results of an evaluation (NSF, 2002). As a woman of color, she believes the importance of training and including emerging evaluators in cultural competency and responsiveness cannot be stressed enough. Ms. Getahun will share her experiences with culture and evaluation and the role of emerging evaluators in this field.
Mentoring Novice Evaluators Throughout the Evaluation Process: An Investment Worth Making
Denise Herrera, Decision Information Resources Inc, herrera_denise@hotmail.com
Dr. Denise Herrera is a graduate of UT-Austin and a Robert Wood Johnson Foundation Evaluation fellow. Reflecting on her graduate school and professional experiences, Denise will highlight the value of including and mentoring emerging scholars in the evaluation process. To develop the most efficacious evaluation plan possible, creating and fostering a shared vision among stakeholders, community members, and the evaluation team is essential. Including novice scholars throughout the evaluation process can be one cost-effective strategy to increase overall awareness and valuing of evaluation while contributing to the skill sets of individuals pursuing a career in evaluation or a similar field (Bennion, 2004; Hezlett & Gibson, 2007). The investment in emerging scholars could be returned many times over in academic institutions, small or large governmental organizations, the private sector, or communities that already conduct or have a need for systematic evaluation.
An Inconvenience or a Necessity?: The Role of Culturally Responsive Evaluation in Agencies When Resources May Be Limited
Afabwaje Jatau, National Cancer Institute, abwaje@yahoo.com
Afabwaje Jatau is a Robert Wood Johnson Foundation Evaluation fellow at the National Cancer Institute, Office of Science Planning and Assessment. Through her experiences in public health, she has gained insight into the value of the services and programs that nonprofit agencies provide to their surrounding communities. In addition, her work with nonprofit organizations that serve minority or low-income communities has afforded her a unique perspective on the role of evaluation in understaffed or under-resourced agencies, where evaluation may be viewed as more of a burden or an inconvenience than a necessity. Even where nonprofit agencies recognize the significance of evaluation, a lack of funding and limited staff time often prevent the implementation of evaluation activities (Hoefer, 2000). For an emerging evaluator, it is essential to establish and effectively communicate the role of culturally responsive evaluation in these agencies.

Session Title: Impact Evaluation and Beyond: Methodological and Cultural Considerations in Measuring What Works
Multipaper Session 738 to be held in REPUBLIC A on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Assessing the Relevance of International Development and Humanitarian Response Projects at Aggregate Levels: A Key Consideration in Achieving Equality of Wellbeing Outcomes and Just Resource Allocation - An Example of Summary Child Wellbeing Outcome Reporting From World Vision International
Presenter(s):
Nathan Morrow, World Vision International, nathan_morrow@wvi.org
Nancy Mock, Tulane University, mock@tulane.edu
Abstract: A key assumption of many impact evaluations conducted by international non-governmental organizations is that programs were designed to be relevant to community needs. Yet this relevance - with respect to community needs, NGO organizational policy, and organizational capacity - is rarely demonstrated or documented. International NGOs often collect essential information for understanding the relevance of their programming portfolios, but they lack the tools or experience to analyze the question of relevance at aggregated levels. In 2009, World Vision International undertook a large thematic review of its programming portfolio at the national, regional, and global levels. The paper describes the availability and quality of data that can be used to assess relevance. It presents several novel conceptual tools used to distill indicators of relevance from participatory workshops and document review. The thematic review found mixed results and scope to improve the relevance of programming through more effective use of routinely collected data.
Taking the Long View in Evaluating International Development Assistance
Presenter(s):
R Gregory Michaels, Chemonics International, gmichaels@chemonics.com
Alphonse Bigirimana, Chemonics International, abigirimana@chemonics.com
Abstract: Program evaluations addressing the sustainability of international development assistance have not adequately informed understanding of development effectiveness (the focus, for example, of the OECD's DAC Evaluation Network). Program evaluations are typically limited to the specific project life span, missing the possibility of evaluating projects after closure. In contrast, appraising project results well after closure offers a robust standard for evaluating sustainability - what actually worked and what did not in the long run. Ex post project sustainability investigations offer an untapped mine of information. This paper reports experience from two retrospective studies examining the sustainability of USAID-funded projects' benefits several years after closure. The first study (conducted in 2008) evaluated the promotion of agricultural exports from Central America (1986-1994). The second study (2009) investigated the performance of a Moroccan wastewater treatment facility constructed in 2000. These studies illustrate the power of ex post sustainability appraisals to offer valuable insights into the durability of development.
Using Rubrics Methodology in Impact Evaluations of Complex Social Programs: The Case of the Maria Cecilia Souto Vidigal Foundation’s Early Childhood Development Program
Presenter(s):
Thomaz Chianca, COMEA Evaluation Ltd, thomaz.chianca@gmail.com
Marino Eduardo, Independent Consultant, eduardo.marino@yahoo.com
Abstract: Rubrics are important tools for describing evaluatively how well an evaluand is performing, in terms of performance or quality, on specific dimensions, components, and/or indicators. Even though rubrics are extremely useful for synthesizing and communicating evaluative conclusions, their adoption in evaluations of complex interventions appears to be quite limited. This paper will discuss the main methodological aspects involved in the development and use of rubrics in the context of an impact evaluation: (i) values identification, (ii) description of standards to score performance or quality, (iii) definition of data collection strategies, (iv) development of tools for data entry and analysis, (v) dealing with causal attribution strategies, and (vi) communicating findings. The external impact evaluation of an early childhood intervention, supported by the Maria Cecília Souto Vidigal Foundation in six municipalities in São Paulo (Brazil), will be discussed as the case study for this session.
Quality is More Than Rigor: A Political and a Scientific Perspective for Impact Evaluation in Development
Presenter(s):
Rahel Kahlert, University of Texas, Austin, kahlert@mail.utexas.edu
Abstract: The paper reflects on the current debate about impact evaluation in international development. One controversial issue remains whether the randomized controlled trial (RCT) represents the “gold standard” approach. The author analyzes the randomization debate in international development since 2006, employing both a political and a scientific perspective (cf. Weiss, 1972; Vedung, 1998) to explain the rationales behind the promotion strategies of the respective sides in the debate. Finally, the paper analyzes the issue of transferability of randomized evaluation findings, as raised by both “randomistas” and skeptics of randomized experiments. The focus is on validity - the balance of internal and external validity - to ensure both rigor and relevance. The paper proposes ways in which this debate could be mediated.

Session Title: National Evaluation of Team Science in the Interdisciplinary Research Consortium Program
Multipaper Session 739 to be held in REPUBLIC B on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Health Evaluation TIG
Chair(s):
Sue Hamann, National Institutes of Health, sue.hamann@nih.gov
Why Evaluating Team Science is Important: Perspectives of the National Institutes of Health (NIH)
Presenter(s):
Sue Hamann, National Institutes of Health, sue.hamann@nih.gov
Abstract: The Roadmap for Medical Research was launched in 2004 by the National Institutes of Health to transform biomedical research. Many Roadmap programs, including the Interdisciplinary Research Consortium Program, funded research that was inherently risky. The Consortium Program differs from traditional programs in several ways: a) it spans multiple health and disease conditions; b) it is funded by multiple institutes within the National Institutes of Health; and c) it is costly because, by its nature, team science consumes resources at both the program (grantor) and project (grantee) levels. The Interdisciplinary Research Consortium Program was initially funded for five years, and a decision about re-issuing the program will be made in late 2010. The national evaluation of the program will inform that decision by addressing the promise, risks, and costs of interdisciplinary research. This paper will present the key evaluation questions, methods, instruments, and national findings to date.

Session Title: Collaborative Evaluations: Successes, Challenges, and Lessons Learned
Multipaper Session 740 to be held in REPUBLIC C on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Anne Cullen, Western Michigan University, anne.cullen@wmich.edu
Enhancing Evaluation Stakeholder Responsiveness Through Collaborative Development of Data Collection Instruments
Presenter(s):
Karen Kortecamp, George Washington University, karenkor@gwu.edu
Abstract: This paper shows that stakeholder responsiveness to evaluation findings of a multi-year professional development project, funded through Teaching American History, was enhanced through collaborative development of data collection instruments. Prior to the launch of the professional development program, the collaborative process engaged project historians, the academic project director, and the evaluator in examining underlying assumptions about what knowledge and skills were of most value for teachers and students. Those discussions led to a series of thoughtful deliberations about how best to measure the knowledge and skills that were identified. This paper discusses the collaborative process and decision-making that led to the development of high-quality data collection instruments and contributed to stakeholder responsiveness.
Reflecting on Practice in Evaluating Culturally Competent Teaching Strategies
Presenter(s):
Corina Owens, University of South Florida, cmowens@usf.edu
Michael Berson, University of South Florida, berson@coedu.usf.edu
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Abstract: Cultural competence is an elusive social construct that tends to elicit socially acceptable answers and can be thought of as a hot-button topic. People tend to respond to certain types of questions in a manner that will be viewed favorably by others. Items concerning one's self-perception or self-rating of culturally competent practices in a classroom can easily elicit socially desirable responses. Evaluators need to understand the issues around social desirability when attempting to measure cultural competence, as evidenced by the lessons learned through a collaborative evaluation that measured culturally competent instructional strategies infused throughout a civic education professional development program. The purpose of this paper is to share these lessons learned to help other evaluators gain a better understanding of ways to evaluate culturally competent teaching strategies.
Summative Evaluation for Marketing of Science Teachers and Induction (MOSTI): An Eclectic Evaluation Approach to the Improvement of Science Teacher Recruitment, Induction, and Retention in the Middle School Grades
Presenter(s):
Bryce Pride, University of South Florida, bryce_pride@msn.com
Merlande Petit-Bois, University of South Florida, petitbois.m@gmail.com
Robert Potter, University of South Florida, potter@cas.usf.edu
John Ferron, University of South Florida, ferron@usf.edu
Abstract: This paper focuses on lessons learned from a program providing recruitment, induction, and retention activities for three cohorts of career-change individuals moving into teaching as an alternative career. Taking an eclectic approach (Fitzpatrick, Sanders & Worthen, 2004), we evaluated the extent to which the program helped second-career teachers prepare to be middle school science teachers and be retained as permanent teachers in the school system. To inform the report, quantitative and qualitative data were collected from surveys, content exams, classroom observations, mentor logs, and a focus group. Ideas for improvement were provided from the perspectives of teachers, mentors, administrators, and evaluators. Multiple perspectives are used to gain an understanding of the effectiveness of program implementation and to offer suggestions for improvement. Utilizing feedback from teachers and mentors regarding training sessions and program needs has assisted MOSTI administrators with decisions for program improvement.
Evaluation Capacity Building in Health Disparities Research: Achieving Empowerment Using the Model for Collaborative Evaluations (MCE)
Presenter(s):
LaShonda Coulbertson, Center for Equal Health, ltcoulbertson@msn.com
Desiree Rivers, Center for Equal Health, drivers@health.usf.edu
Abstract: Evaluation capacity building in research-oriented settings is not without its challenges. While research is easily adopted and planned for, evaluation, particularly in the context of community-based research, is often relegated to an afterthought in the planning process. Utilizing a framework such as the Model for Collaborative Evaluations (MCE) (Rodriguez-Campos, 2005), evaluation capacity can be increased while simultaneously demonstrating the significance and impact of a well-planned, “standardized” evaluation to the research process. Capacity building should also equip organizations to evaluate the evaluator, empowering them to view the evaluation process and methods critically from a sturdy foundation. The authors will discuss an evaluation capacity-building process for a $6M federally funded health disparities project, the potential for evaluation in health disparities research, and the extent to which advocating in this direction will lead to innovation in the health models currently utilized (or not) in this field.
