
Session Title: Mixed Methods Contributions to Evaluation Quality
Panel Session 782 to be held in Lone Star A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Presidential Strand
Chair(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Discussant(s):
Melvin Mark, Pennsylvania State University, m5m@psu.edu
Kataraina Pipi, Independent Consultant, kpipi@xtra.co.nz
Abstract: Evaluators have been mixing methods for decades, or even longer. As the most applied of all social inquirers, evaluators base methodological decisions, in large part, on contextual practicality and design potency for answering evaluation questions. If both structured surveys and open-ended interviews are needed in a given context, then evaluators – unproblematically and without losing sleep – build these different methods into their design. So, it is not always apparent what the emerging ‘theory’ of mixing methods has to offer to the evaluation community. This session will present an argument that the conceptual ideas and aspirations of mixed methods inquiry can indeed make vital contributions to evaluation practice, and will focus this argument on mixed methods’ contributions to evaluation quality. The session will engage diverse understandings of the philosophical, theoretical, and methodological frameworks in mixed methods and their relevance to the meanings of evaluation quality, with illustrations from practical exemplars.
What is Quality Empirical Social Science From a Mixed Methods Perspective?
Abbas Tashakkori, University of North Texas, abbas.tashakkori@unt.edu
The mixed methods community welcomes a rich diversity of philosophical, methodological, practical, and ethical views about quality. With diversity come challenges of reconciliation around what constitutes quality work, including explicit and implicit differences in criteria for evaluating the separate qualitative and quantitative strands, nomenclature, the relative importance and the separate versus integrated character of study components, and the necessity of evaluating inquiry consequences. Quality audits offer potential for common ground. Such audits would focus on mixed methods inquiry as a systemic attempt to answer questions, clearly differentiating four elements of such attempts: inputs (problems, data), processes (analysis), outcomes (inferences, recommendations), and consequences (impacts) of inquiry while recognizing that these might not operate in a linear manner. Issues in auditing the components of a mixed methods evaluation will be discussed, including the challenges of the evaluator as a stakeholder, divergence in quality assessors, and the role of outcomes/consequences.
Bridging Quality in Mixed Methods Inquiry With Evaluation Theory and Practice
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
How well do the ideas about quality in contemporary mixed methods theory and practice translate to the challenges of quality in evaluation? How are the conceptualizations of quality similar and different in these two pluralistic domains and diverse communities of practice? What can evaluators learn about quality from mixed methodologists? This presentation will engage these questions primarily from a standpoint of quality as justice (House), but will also consider the quality dimensions of truth and beauty. In particular the mixed methods idea of quality audits as a vehicle for assessing inquiry quality will be engaged and critiqued for its applicability to the contexts and challenges of evaluation.
How Does the Mix of Methods Enhance Evaluation Quality in the Field?
Jori Hall, University of Georgia, jorihall@uga.edu
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
The contributions of mixed methods thinking to the quality of evaluation practice will be featured in this presentation. Using examples of mixed methods evaluation studies, the presentation will discuss the ways in which a mixed methodology can engage multiple dimensions of evaluation quality, from the methodological to the social, political, and aesthetic.

Session Title: Writing Effective Items for Survey Research and Evaluation Studies
Skill-Building Workshop 783 to be held in Lone Star B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the
Presenter(s):
Jason Siegel, Claremont Graduate University, jason.siegel@cgu.edu
Eusebio Alvaro, Claremont Graduate University, eusebio.alvaro@cgu.edu
Abstract: The focus of this hands-on workshop is to instruct attendees in how to write effective items for collecting survey research data. Bad items are easy to write; writing good items is more challenging than most people realize. Writing effective survey items requires a complete understanding of the impact that item wording can have on a research effort. Only through adequate training can good survey items be discriminated from bad ones. This 90-minute workshop focuses specifically on Dillman’s (2007) principles of question writing. After a brief lecture, attendees will be asked to use their newly gained knowledge to critique items from selected national surveys.

Session Title: Collaborative Evaluations
Skill-Building Workshop 784 to be held in Lone Star C on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Rigoberto Rincones-Gomez, Maryland Distribution Council Inc, rrincones@mdc.org
Abstract: This highly interactive skill-building workshop is for evaluators who want to engage and succeed in collaborative evaluations. In clear and simple language, the presenter outlines key concepts and effective tools/methods to help master the mechanics of collaboration in the evaluation environment. Specifically, the presenter will blend theoretical grounding with the application of the Model for Collaborative Evaluations (MCE) to real-life evaluations, with a special emphasis on those factors that facilitate and inhibit stakeholders’ participation. The presenter shares her experience and insights regarding this subject in a precise, easy-to-understand fashion, so that participants can use the information learned from this workshop immediately.

Session Title: A Foundation-to-Foundation Partnership: What Went Right, What Took Time- A Look at the Robert Wood Johnson Foundation and the Northwest Health Foundation Partnership- Partners Investing in Nursing’s Future
Demonstration Session 785 to be held in Lone Star D on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Ricardo Millett, Ricardo Millett and Associates, ricardo@ricardomilllet.com
Chantell Johnson, TCC Group, cjohnson@tccgrp.com
Abstract: Collaborations and partnerships among nonprofits are critical strategies in this economy and toward mission achievement. During the past ten years there has been a surge in research around the most effective measures to evaluate them. The Foundation Center and Grantcraft have published interesting and useful articles on the subject and have shared important learnings about the evaluation approaches being used; however, what seems to be missing are more candid examples from the philanthropic sector. What role is evaluation playing in funder-to-funder partnerships? What evaluation design approaches, measures, and indicators are used to assess and learn from partnerships? How well do these measures address funders’ information priorities? These are the kinds of questions that either remain unanswered or lack real-world examples. Review the ‘findings’ of this evaluation approach, designed and implemented by Ricardo Millett and Chantell Johnson of TCC Group, and explore its utility and applicability to other foundation-to-foundation partnerships.

Session Title: Using Rasch Measurement to Strengthen Evaluation Designs and Outcomes
Demonstration Session 786 to be held in Lone Star E on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Christine Fox, University of Toledo, chris.fox@utoledo.edu
Svetlana Beltyukova, University of Toledo, svetlana.beltyukova@utoledo.edu
Abstract: While most evaluation plans use raw ordinal survey data to assess attitudes, beliefs, or behaviors, the Rasch model can be used to determine the extent to which these survey responses yield meaningful linear measurements. This model also provides evidence of the functioning of the rating scales and the suitability of the instrument for specific populations, and facilitates a better understanding of how precisely different samples can be evaluated. Using a survey of high school teachers as an example of one evaluative instrument in a comprehensive evaluation for the Department of Education, we will demonstrate how a variety of Rasch diagnostics can aid in using typical rating scales to construct scientifically defensible measures. Specific focus will be on determining the appropriate number of rating scale categories, designing questions to target the appropriate sample, exploration of dimensionality, and construct interpretation. By the end of the demonstration, participants will be able to identify properties of measures, understand the limitations of working with ordinal data, and be familiar with the way in which the Rasch model overcomes these limitations to provide empirically defensible ways to build and test the quality of measures from survey data.
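As background for the demonstration, a minimal sketch of the Rasch rating scale model (Andrich's parameterization; standard notation, not taken from the presenters' materials) shows how ordinal ratings are mapped onto a linear logit scale. With person measure \theta_n, item difficulty \delta_i, and category thresholds \tau_j (where \tau_0 \equiv 0), the probability that person n chooses category k (k = 0, ..., m) on item i is

P(X_{ni} = k) = \frac{\exp\left(\sum_{j=0}^{k} (\theta_n - \delta_i - \tau_j)\right)}{\sum_{l=0}^{m} \exp\left(\sum_{j=0}^{l} (\theta_n - \delta_i - \tau_j)\right)}.

The category-functioning, targeting, and dimensionality diagnostics mentioned in the abstract all rest on how closely observed responses conform to this model.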

Session Title: Extended Learning: A Conversation Among Evaluators of the National Science Foundation (NSF) Extension Services Projects
Panel Session 787 to be held in Lone Star F on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Beverly Farr, MPR Associates Inc, bfarr@mprinc.com
Abstract: This panel will include a group of evaluators from the National Science Foundation (NSF)-funded Gender in Science and Engineering (GSE) Extension Services Grants. Extension Service projects present unique challenges to evaluation because they “extend” services across tiers or layers of service, and their most direct strategies are often far removed from the ultimate target outcomes. From this basic dilemma faced in evaluating these projects, the panel will use a Question and Answer discussion format to delve into a range of issues that characterize the evaluation challenge. The panel members will pose questions to each other and discuss ideas and strategies for meeting the challenges and emphasize the value of establishing a community of practice for those undertaking evaluation of multi-site and multi-level projects.
Can We Really Do It All? Yes... Within Reason
Elizabeth Bachrach, Goodman Research Group Inc, bachrach@grginc.com
Between every project and its evaluation, there is a unique and dynamic working relationship. However, regardless of program content, evaluators share a primary goal of designing and implementing a feasible and practical evaluation plan that will address the key research questions and follow our established guiding principles. When a project, by nature, is multi-layered and aims to reach multiple audiences via various levels of service, such as those under the NSF GSE Extension Service grants, the evaluation must include room to evolve and stretch with the project. With this in mind, this part of the session addresses the question, "How do we keep the evaluation practical and feasible within multi-level projects?"
Use of Technology in Evaluation: How Does It Help and How Does It Hinder?
Vicky Ragan, Evaluation and Research Associates,  vragan@eraeval.org
Most, if not all, GSE Extension projects are supported by technology. The technologies used in these projects are varied in how they address goals and in how they support the evaluation. Within the evaluation process these technologies may provide data collection, tracking of project processes, and dissemination of evaluation results and reports. At a higher level, they may serve as a mechanism to translate, transfer, and diffuse knowledge. The panelists recognize the relevance of examining the role of technology in these multi-level projects where its use is an integral part of the project. This part of the session addresses the question, “How is technology used in multi-level projects and how does that relate to the evaluation?”
Finding Common Ground: Is It Possible?
Beverly Farr, MPR Associates Inc, bfarr@mprinc.com
The projects included in the NSF Extension Services Grants Program all have the ultimate goal of increasing female participation in STEM course taking and STEM careers. They vary, however, in the levels they address--from state departments to community colleges to schools--and in the strategies they use to achieve their objectives. Nonetheless, they were funded to provide services that would build the capacity of their recipients and reach toward the ultimate goal. The activities of the projects cannot always be directly linked to the ultimate goal, however, and there is a need to examine intervening outcomes to assess the impact of the projects overall. As the funder, NSF has a desire to know what the projects together contribute to the accomplishment of the ultimate goal. With this in mind, the evaluators began discussions about establishing common indicators, and this part of the session addresses the question, “What is the value of establishing common indicators across projects?”
Strategies for Guiding and Tracking Sustainability: Will It Last?
Donna Brock, Evaluation Consulting Services Inc, djbrock.ecs@cox.net
Evaluators play a role in guiding and tracking program sustainability. Mancini and Marek (2004) and Marek, Mancini, and Brock (2003) identify “Demonstrating Program Results” as one successful sustainability factor. This factor includes: 1) evaluation plans developed prior to program implementation, 2) regularly conducted evaluation, and 3) the use of evaluation to inform program modifications. However, tracking and guiding effective sustainability practices necessitates a larger lens that considers the macro-system of program function (e.g., leadership, collaboration, funding, staff involvement, project adaptability, and fidelity to project implementation) and the community. Thus, evaluators need to integrate various streams of data to inform the sustainability process. Evaluators of these particular Extension Service projects provide a unique perspective into this process given the multi-layers, constantly changing contexts, and challenges to measuring impacts and tracking fidelity. This part of the session will address the question, "What is the role of the evaluator in facilitating sustainability?"

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Preparing Teacher Candidates for Parent Partnerships: An Evaluation of a Preservice Course in Teacher Education
Roundtable Presentation 788 to be held in MISSION A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Special Needs Populations TIG
Presenter(s):
Michael Wischnowski, St John Fisher College, mwischnowski@sjfc.edu
Marie Cianca, St John Fisher College, mcianca@sjfc.edu
Susan Hildenbrand, St John Fisher College, shildenbrand@sjfc.edu
Daniel Kelly, St John Fisher College, dkelly@sjfc.edu
Abstract: This session will describe an evaluation of a partnership between an undergraduate teacher education program and a not-for-profit advocacy organization for people with disabilities. The partnership’s purpose is to prepare future teachers to work collaboratively with parents of children with disabilities. Parents, trained in public speaking and advocacy strategies, become an integral part of an undergraduate course with the theme of educational collaborations—parent collaboration being a centerpiece. Teacher education faculty work with these trained parents to assist teacher candidates with a) parent communication etiquette; b) critical analysis of parent and teacher roles in education; c) research skills to address parent concerns; and d) establishing collaboration between both parties to improve student outcomes. A logic model was developed to describe the resources and intentions of the partnership. Roundtable facilitators will share the results of the partnership evaluation, including perspectives and outcomes of teacher candidates, parent participants, and faculty.
Roundtable Rotation II: Quality Evaluations for Educational Programs: Mixed Methods Adds Value Beyond Proficiency Testing Results
Roundtable Presentation 788 to be held in MISSION A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Special Needs Populations TIG
Presenter(s):
Paula Plonski, Praxis Research Inc, pplonski@windstream.net
Abstract: The environment of No Child Left Behind and its emphasis on testing have influenced educational evaluations even for those programs not specifically targeted at improving annual yearly progress. This evaluation involved three southern elementary schools where most of the teachers and administrators participated in a program called Schools Attuned. The All Kinds of Minds Institute’s Schools Attuned program seeks to positively impact students, parents, teachers, school practices and school culture by engaging teachers in professional development involving the understanding of neurodevelopmental constructs and practical classroom application. In addition to the collection and analysis of Measures of Academic Progress (MAP) student scores, the evaluation plan included classroom observations, teacher surveys, administrator interviews, and focus groups with teachers, parents, and students. Utilizing this mixed methods design, the qualitative component complemented and informed the quantitative analysis to provide insight on how differing levels of school-wide implementation impacts outcomes.

Session Title: Cross-National Evaluation Policies: Where We've Been, Where We're Going, and What We Need for Quality Evaluation
Multipaper Session 789 to be held in MISSION B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the
Chair(s):
Marie Gaarder,  International Initiative for Impact Evaluation (3ie), mgaarder@3ieimpact.org
Discussant(s):
Jim Rugh,  Independent Consultant, jimrugh@mindspring.com
Strengthening Evaluation Policy in Saudi Arabia for Higher Evaluation Quality
Presenter(s):
Mohammed Alyami, Western Michigan University, mohammed.alyami@wmich.edu
Abstract: The Saudi Arabian government and Saudi organizations have begun to consider the importance of professional evaluation. However, there are few formal evaluation policies to support professional evaluation standards and practices. The Program Evaluation Standards of the Joint Committee (1994) are used as a guiding standard (and informal policy) for evaluation, although there are other frameworks that may influence how evaluation is practiced. This proposed paper will address three main issues. First, what evaluation standards are used to guide evaluation policy in Saudi Arabia, and how can these be modified or adapted to ensure high-quality evaluation practice? Second, what are the existing requirements for professional evaluators, and what policy changes need to be considered to ensure a highly qualified pool of professionals? Third, what are good evaluation models, and how can these approaches improve the overall quality of evaluation in this national context? Implications for evaluation policy in other nations will be considered.
Institutionalising Evaluation: By Decree or by Persuasion?
Presenter(s):
Marie Gaarder, International Initiative for Impact Evaluation (3ie), mgaarder@3ieimpact.org
Bertha Briceno, World Bank, bbriceno@worldbank.org
Abstract: This paper compares experiences of institutionalizing government evaluation efforts through a discussion of the three leading models in Latin America – Mexico, Colombia and Chile – the non-centralized system of monitoring and evaluation adopted in South Africa, and the policy-learning approach taken in China. Some developed country experiences are also presented. It concludes that there is no unique model for strengthening and institutionalizing a monitoring and evaluation system, but that elements of independence and enforceability are part of the recipe, even though the two are often at odds. Successful institutionalization requires strong political will to ensure that results are being used to improve performance. It also requires a clear, powerful stakeholder, such as the Congress, the Ministry of Finance, or the President, to champion the process.
Evaluation and the Shifting Meanings of Accountability in Education in Five Nations
Presenter(s):
Christina Segerholm, MidSweden University, christina.segerholm@miun.se
Jenny Ozga, University of Edinburgh, jenny.ozga@ed.ac.uk
Abstract: This paper analyses results from a research project on governing and evaluation in education in five nations: Denmark, England, Finland, Scotland and Sweden. The influence of global policies on national and local quality assurance and evaluation (QAE) systems, and the shift to governing by objectives and outcomes, forms the contextual base. Different conceptions of accountability are presented and used in the analysis of the five nations’ governing and QAE systems. The analysis shows that the shift in governing (now including several levels and actors) is paralleled by shifting meanings of accountability, directed more toward the results of education. ‘Accountability’ is also incrementally filled with hybrid meanings (e.g., consumer, managerial, individual, performance) but differently emphasized in the five nations, England perhaps harboring the most complex notion in that sense, while in Finland, with a less extensive QAE system, accountability is more about process, professional and political responsibility in education.

Session Title: The Home Energy Audit: An Exercise in Complex Systems Thinking for Practitioners and Evaluators Alike
Multipaper Session 790 to be held in BOWIE A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Systems in Evaluation TIG and the Environmental Program Evaluation TIG
Chair(s):
Daniel Folkman, University of Wisconsin, Milwaukee, folkman@uwm.edu
A Small Contractor’s Perspective: From Business Operations, to Customer Education, to Changing Rules and Regulations in the Home Improvement and Restoration Industry
Presenter(s):
Juanita M Ellias, Rivercity Woodworking LLC, rivercitywoodwrk@sbcglobal.net
Abstract: This presentation provides an overview of the opportunities and challenges that small home improvement contractors face when dealing with residential energy conservation. Multiple systems are at play, including a) sustaining and growing a small business operation, b) educating customers about realistic energy savings opportunities, c) dispelling exaggerated and misleading claims of energy saving technologies, and d) entering a treadmill of technical certifications that are mandated by state and local governments. These issues will be discussed in light of this presenter’s experience in transforming a small artisan shop into a home performance energy consulting business. This shift in business plan was made to provide an educational service to prospective clients interested in both historic restoration and energy conservation.

Session Title: The REWA System of Transformative Evaluation: Founded on Pono (Truth), Ahua (Beauty) and Tika (Justice)- Evaluating Health and Well-being From Te Ao Maori / Indigenous World-view, Protocol, and Practice
Demonstration Session 791 to be held in BOWIE C on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Presenter(s):
Tania Wolfgramm, Pou Kapua Consulting, tania.wolfgramm@gmail.com
Wikuki Kingi, Pou Kapua Consulting, wikuki.kingi@gmail.com
Abstract: A world-class, innovative, integrated ‘Whanau Ora’ Health Care System in New Zealand aims to achieve positive Whanau Ora outcomes by supporting the delivery of high quality health and social services to Maori whanau / families and high-needs populations. ‘Whanau Ora’ is manifest in whanau who are nurtured, healthy, engaged, knowledgeable, confident, and productive, and who are on a journey to achieving self-determined success. This session demonstrates the REWA System of Transformative Evaluation, founded on Pono (Truth), Ahua (Beauty) and Tika (Justice), and supported by the core values of the Whanau Ora System itself, namely Whanaungatanga (Relationships), Manaakitanga (Support), Rangatiratanga (Sovereignty, Leadership), and Tikanga (Transactional Justice, Ethics, Protocols), as evidenced in the evaluation framework of the Whanau Ora System. Grounded in Maori and Indigenous ways of assessing and evaluating merit based on traditional values and cultural expressions, this dynamic evaluation generates continuous learning and ground-breaking transformation for all stakeholders.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Resources to Guide Non-evaluators in the Design of Educational Program Evaluations
Roundtable Presentation 792 to be held in GOLIAD on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Rick Axelson, University of Iowa, rick-axelson@uiowa.edu
Susan Lenoch, University of Iowa, susan-lenoch@uiowa.edu
Abstract: This session will discuss approaches and share resources for guiding non-evaluators through the process of designing educational program evaluations. As a starting point for the discussion, we will review a self-study guide recently developed by the Office of Consultation and Research in Medical Education at the University of Iowa. The guide has been distributed in workshops that walk participants through the evaluation design process. The guide and workshops feature a case study of an educational intervention that illustrates the recommended design process. Participants then apply these principles in designing evaluations for selected components or interventions in their own educational programs. After a brief discussion of this approach, roundtable participants will have the remainder of the session to share their successful practices and resources for teaching evaluation to non-evaluators.
Roundtable Rotation II: Who Do They Think We Are? Issues and Dilemmas Raised by Others' Perceptions of Evaluators and Evaluation
Roundtable Presentation 792 to be held in GOLIAD on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Loretta Kelley, Kelley, Petterson and Associates, lkelley@kpacm.org
Philip Henning, James Madison University, henninph@jmu.edu
Abstract: An evaluator’s work is affected by the client’s view of the evaluator. A quality evaluation requires an honest exchange between the evaluator and the subjects of the evaluation. It requires trust on both sides and a shared belief that the evaluation performs an important formative and/or summative function. This roundtable describes several views clients and stakeholders may have of evaluators and evaluation, the issues that may arise with each, challenges in collecting data in these situations, and strategies for developing a more productive relationship with the subjects of our evaluation, along with discussion of how the AEA Guiding Principles apply in each situation. Participants will share their experiences, discuss strategies for dealing with issues and dilemmas they have faced, and benefit from the experiences of others. Views of evaluators that will be presented include judge, friend of the “boss”, pipeline to the funder, necessary inconvenience, and partner/collaborator.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Epistemological Distinctions and Values in the Evaluation Process: A Reflective Analysis on the Quality Standards of Truth, Beauty, and Justice Using Findings From an Actual Evaluation Study
Roundtable Presentation 793 to be held in SAN JACINTO on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Theories of Evaluation TIG and the Teaching of Evaluation TIG
Presenter(s):
Sarah Wilkey, Oklahoma State University, sarah.wilkey@okstate.edu
Zarrina Azizova, Oklahoma State University, zarrina.azizova@okstate.edu
Zhanna Shatrova, Oklahoma State University, zhanna.shatrova@okstate.edu
Katye Perry, Oklahoma State University, katye.perry@okstate.edu
Abstract: The purpose of this session is to emphasize the pedagogical importance of epistemological discussions in evaluation courses in order to prepare students to think reflectively regarding issues of quality in evaluation practices. During this roundtable discussion, we will use as an example a completed evaluation of a staffing program in Family and Graduate Student Housing at Oklahoma State University. We will discuss how the process of formulating and completing each step of the evaluation, to include finding a project (or a project ‘finding’ an evaluator), determining the evaluation methodology, interpreting the findings, and presenting the results, can be different depending on the evaluator’s epistemology. Further, we will discuss how epistemology affects the interpretation of the different standards presented by House (1980)—truth, beauty, and justice—throughout the evaluation process.
Roundtable Rotation II: Evaluation in Late Victorian Literature
Roundtable Presentation 793 to be held in SAN JACINTO on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Theories of Evaluation TIG and the Teaching of Evaluation TIG
Presenter(s):
David D Williams, Brigham Young University, david_williams@byu.edu
Abstract: Evaluators are committed to ensuring quality through adherence to various formal evaluation standards, which have evolved from social science disciplines. In contrast, what might the humanities and an understanding of informal evaluation contribute to evaluation theory, practice, and quality? This presentation examines evaluations portrayed in late-Victorian literature to identify informal approaches to establishing credibility. Through analyses of books by Dickens, Hardy, Chopin and others, we learned that some literary characters’ criteria and decision methods lead to problematic evaluations that serve as foils for promoting the choices of other characters. These classic stories invite readers to learn from characters’ evaluation experiences and improve their own informal evaluations. In this presentation we share literary examples that lead us to conclude that understanding the informal evaluation lessons taught through literature could help formal evaluators extend stakeholders’ positive informal evaluations, while countering their poor informal evaluation choices, thus improving formal evaluation quality through better informal evaluations.

Session Title: Making Results Relevant: Designing Evaluations Stakeholders Will Value, Understand, and Use
Think Tank Session 794 to be held in TRAVIS A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Evaluation Use TIG
Presenter(s):
Anita Drever, University of Wyoming, adrever@uwyo.edu
Discussant(s):
Paul St Roseman, Sakhu and Associates, pstroseman@sakhuandassociates.com
Javan Ridge, Colorado Springs School District 11, ridgejb@d11.org
Abstract: Evaluators aim to impact organizational and policy decisions. However, the currency of our trade—the technical report—in and of itself rarely achieves that end. This think tank will be a forum for evaluators to discuss strategies they have used to communicate the value and utility of their work to practitioners. The following questions will guide our discussion, “How does one present data so that stakeholders not only interpret one’s findings correctly, but also see their value? What report formats or other products have proven most effective with stakeholders? What kinds of collaborative partnerships with clients have been most successful in helping stakeholders to utilize evaluation products and to use evaluation findings for program improvement or program sustainability?" We anticipate these questions will feed into a larger discussion regarding creative ways to improve the quality of our evaluation and help stakeholders get the most from our services.

Session Title: An Integrated Web-based Assessment, Planning, and Evaluation System for Strengthening Families Programs Across the Nation
Demonstration Session 795 to be held in TRAVIS B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Lauren Pugh, Mosaic Network Inc, lpugh@mosaic-network.com
Michael Bates, Mosaic Network Inc, mbates@mosaic-network.com
Abstract: In March 2010, the Center for the Study of Social Policy (CSSP) unveiled a newly revised online assessment and planning tool—GEMS for Strengthening Families—for the Strengthening Families National Network. The tool consists of two main components: a Self Assessment that helps early care and education and other child-serving professionals identify concrete and practical ways of incorporating the Strengthening Families model in their day-to-day work, and a Protective Factors Survey that helps parents identify protective factors known to reduce child abuse and neglect. In this demonstration, we will guide users through the new online tool, including how programs can complete the Self Assessment and create Action Plans, how family members can complete the Protective Factors Survey, and how all users can use the system for reporting and evaluation purposes.

Session Title: Construct Validity of Race and Its Impact on the Quality of Research and Evaluation
Demonstration Session 796 to be held in TRAVIS C on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Kelly Robertson, Western Michigan University, kelly.robertson@wmich.edu
Diane Rogers, Western Michigan University, diane.rogers@wmich.edu
Abstract: The demonstration will provide practitioners with awareness, understanding, and the skills to address construct validity of race and how it impacts research and evaluation. First we will discuss race as it relates to evaluation context followed by an overview of the construct of race. Then we will examine how racism has changed over time and the state of racism today. The majority of the demonstration will focus on real world examples of how constructs of race impact research/evaluation and strategies practitioners can use to counter this impact. Both strengths and weaknesses of suggested strategies will be presented, in addition to how they relate to current evaluation concepts and tools. Throughout the demonstration we will highlight the importance of the topic to practitioners and how addressing it can improve the quality of scientific research/evaluation and promote racial justice. Finally, we will conclude with next steps for personal and professional growth.

Session Title: Strategies and Tools to Evaluate the Comprehensive Picture of Health Policy
Multipaper Session 797 to be held in TRAVIS D on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Jenica Huddleston,  University of California, Berkeley, jenhud@berkeley.edu
Issues in Evaluating Health Care Information Dissemination Programs
Presenter(s):
Boris Volkov, University of North Dakota, bvolkov@medicine.nodak.edu
Abstract: This paper is focused on issues in evaluation for learning, accountability, and quality improvement in health-related projects. Specifically, it examines challenges and opportunities of using a utilization-focused, multidimensional, context-sensitive evaluation approach to provide information contributing to program development and effectiveness. The case study includes the context of two federally funded programs, the Health Workforce Information Center (HWIC) and the Rural Assistance Center (RAC), operated by the Center for Rural Health at the University of North Dakota. The HWIC/RAC evaluation’s purposes include being accountable; determining whether the program has achieved its goals; identifying user needs and areas of interest; and identifying opportunities for improvement. To date, there has been limited research investigating program evaluation implementation in the context of health care information dissemination and no published studies that used the utilization-focused, multidimensional evaluation framework. This paper illustrates how this approach can be used to assess health-related programs.
Assessing Health Policy Change Using an Online Survey Instrument
Presenter(s):
Annette Gardner, University of California, San Francisco, annette.gardner@ucsf.edu
Claire Brindis, University of California, San Francisco, claire.brindis@ucsf.edu
Lori Nascimento, California Endowment, lnascimento@calendow.org
Sara Geierstanger, University of California, San Francisco, sara.geierstanger@ucsf.edu
Abstract: From 2007 to 2009, the University of California, San Francisco administered an online survey to 18 grantees funded under The California Endowment’s Clinic Consortia Policy and Advocacy Program. The objective was to quickly characterize the advocacy strategies and outcomes of three health policy issues targeted by grantees in the prior year. The survey takes 20 minutes to complete. UCSF compared federal, state, and local policies, with the goal of informing advocacy planning for the following year. The findings indicate grantees undertake diverse activities to achieve a policy change, although nearly all focus their advocacy efforts on decision makers. Grantees partner with traditional allies to achieve these policy changes, and use of the media varies by policy. Last, we identified the benefits to clinics, such as increased funding and greater visibility of clinics as key players in the safety net.
Evaluating the Impact of the Louisiana Campaign for Tobacco-Free Living: The Importance of Comprehensive Evaluation Strategies
Presenter(s):
Nikki Lawhorn, Louisiana Public Health Institute, nlawhorn@lphi.org
Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
Jenna Klink, Louisiana Public Health Institute, jklink@lphi.org
Abstract: Launched in 2003, the Louisiana Campaign for Tobacco-Free Living (TFL) is a statewide tobacco prevention and control program. TFL utilizes a comprehensive multi-level evaluation strategy tailored to the needs of differing audiences including funders, key stakeholders, and statewide partners. A key component of the evaluation strategy is an integrated and comprehensive evaluation plan developed in coordination with the Louisiana Department of Health and Hospitals Tobacco Control Program (LTCP). As a result of significant programmatic investment and success with other goals, Louisiana adult smoking prevalence decreased significantly from a high of 26.5% in 2003 to 20.5% in 2008; however, annual reductions in smoking prevalence were not statistically significant. Tracking of relevant short- and intermediate-term indicators as well as ongoing process evaluation provided program leadership the evidence needed to persuade funders and other key stakeholders of program success.
Emerging Standards for Evaluating Interactive Social Media Campaigns: Findings From Interviews With Early Innovators
Presenter(s):
David Dowler, Oregon Public Health Division, david.dowler@state.or.us
Abstract: Many public health interventions have begun using interactive social media technologies to develop viral and peer-led momentum for targeted changes in knowledge and behaviors. While these emerging techniques hold potential for programs targeting younger and other hard-to-reach groups, evaluation methods for these interventions are not yet well understood or documented. To explore and summarize early standards for evaluation methods, we conducted telephone interviews with a small national panel of experts determined to have implemented and evaluated such interventions. The purpose of the project was to summarize emerging ideas and promising practices for conducting process and impact evaluations for public health interventions that used interactive online or mobile technologies. Respondents were selected across academic, marketing, and public health sources and were asked to describe evaluation methods they used, and which worked well or could have been improved. This presentation will summarize findings and offer practical recommendations for conducting similar evaluations.

Session Title: Mixed Methods and Multiple Measures in Quality Human Services Evaluation: Lessons Learned
Multipaper Session 798 to be held in INDEPENDENCE on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Barry Cohen,  Rainbow Research Inc, bcohen@rainbowresearch.org
Evaluating a Child Welfare Demonstration Program: Evolution, Considerations, and Lessons Learned
Presenter(s):
Heather L Scholz, Action Consulting and Evaluation Team (ACET) Inc, heather@acetinc.com
Stella SiWan Zimmerman, Action Consulting and Evaluation Team (ACET) Inc, stella@acetinc.com
Kirsten L Rewey, Action Consulting and Evaluation Team (ACET) Inc, kirsten@acetinc.com
Ellie Skelton, The Wayside House Inc, ellies@waysidehouse.org
Abstract: Wayside House is implementing the “Incarnation Family Connections” (IFC), a comprehensive substance abuse program where chemically dependent mothers receive treatment while maintaining custody of their children. The goals of IFC are to improve children’s safety, permanency, and well-being by treating the entire family using a comprehensive, systems approach. Mothers receive a wide range of services, including parenting support, while children receive early interventions (e.g., speech, psychological). The evaluation will utilize a mixed method quasi-experimental approach with a comparison group added mid-Year 1. Families, staff, and community partners will complete focus groups and individual interviews and families will complete additional assessments for family functioning and children’s development. The presentation will summarize findings from the first six months of the program, the evolution of the evaluation design, considerations made in selecting the final design, and lessons learned, especially in the context of evaluating this demonstration program while collaborating with multiple external agencies.
A Multi-faceted Implementation Assessment: Comparing Ratings From Observers, Supervisors, Staff, and Clients to Examine Program Implementation
Presenter(s):
Kristin Duppong Hurley, University of Nebraska, Lincoln, kdupponghurley2@unl.edu
Nikki Wheaton, University of Nebraska, Lincoln, nikkiwheaton@gmail.com
Abstract: The objective of this presentation is to summarize ongoing efforts of a grant project to (1) develop multifaceted measures to assess the program context, adherence, and competence of implementation of a manualized treatment intervention for youth in residential care and (2) examine the psychometrics of these implementation assessment measures. These multifaceted implementation measures assess the key components of the program from a variety of perspectives (observers, supervisors, youth, and direct-care staff). A key area of interest is whether there is agreement among the assessment methods in identifying low, adequate, and high levels of implementation. Audience members will learn how the implementation measures were constructed, their preliminary psychometric information, and how the different implementation measures correlate with each other.
Evaluating a Child Care Quality Rating and Improvement System: Lessons Learned
Presenter(s):
Michel Lahti, University of Southern Maine, mlahti@usm.maine.edu
Allyson Dean, University of Southern Maine, adean@usm.maine.edu
Sarah Rawlings, University of Southern Maine, srawlings@usm.maine.edu
Abstract: This paper will present findings from three methods of evaluating a state-level quality rating and improvement system for licensed child care programming. The Quality for ME program is Maine's initiative to improve the quality of state licensed child care programs. The presentation will describe the process and current findings from three related methods: on-site observations of child care, parent reports on child care programming, and staff reports on child care programming. While evaluation findings will be presented, the paper will focus on the challenges associated with maintaining the quality of the evaluation in the following areas: maintaining the reliability of data collection activities, reporting results and their effect on relationships with evaluation participants, and presenting information from three related but different methodological approaches.
Synthesis of a Multi-component Five-Year National Evaluation: Results and Lessons Learned
Presenter(s):
Allan Porowski, ICF International, aporowski@icfi.com
Aikaterini Passa, ICF International, apassa@icfi.com
Kelle Basta, ICF International, kbasta@icfi.com
Susan Siegel, Communities In Schools, siegels@cisnet.org
Abstract: Communities In Schools, Inc. (CIS) is a nationwide initiative to connect community resources with schools to help at-risk students successfully learn, stay in school, and prepare for life. Five years ago, CIS commissioned a third-party evaluation of its entire network. This multi-level evaluation has ten sub-studies (components) that build together to result in a comprehensive national evaluation of the entire federation model, which includes a CIS National Office, state offices, local affiliates, and local sites. As the National Evaluation neared completion, our challenge was to synthesize the components of the ten studies and build a comprehensive set of findings that will be of value for various stakeholders. Members of the national evaluation team will discuss principles of cross-design synthesis, and demonstrate how results from a large, multi-component national evaluation can be wrapped up into a coherent set of evaluation findings.

Session Title: Application of Propensity Score Analysis in Assessing Outcomes
Multipaper Session 799 to be held in PRESIDIO A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
MH Clark,  Southern Illinois University, mhclark@siu.edu
Using Propensity Scores with Small Samples
Presenter(s):
William Holmes, University of Massachusetts, Boston, william.holmes@umb.edu
Lenore Olsen, Rhode Island College, lolsen@ric.edu
Abstract: Propensity scores are increasingly being used in large sample studies to control for pre-existing group differences. Because these scores are often used to match cases, they can result in sample attrition. In smaller sample studies, such attrition leaves too few cases for meaningful analysis. An alternative approach when working with small samples is to use propensity scores as covariates to control for pre-existing group differences. The presenters examine the use of propensity scores with small samples and compare their use with the alternative of using baseline measures to control for pre-existing group differences. The paper also presents a procedure for empirically testing whether construct integrity holds. The presentation uses data from a dosage-specific study of substance-abusing families receiving clinical services and coordinated case management. Program outcomes are examined, comparing the use of propensity scores with the use of Time 1 measures alone.
The Utility of Propensity Score Matching in the Context of Evaluation
Presenter(s):
Corina Owens, University of South Florida, cmowens@usf.edu
Connie Walker-Egea, University of South Florida, cwalkerpr@yahoo.com
Abstract: Propensity score matching has gained attention as a potential method for estimating the impact of treatment or causal treatment effects in the absence of experimental evaluations. Experimental evaluations require preparation and planning, and cannot be conducted post-hoc. Propensity score matching, as an alternative, is a quasi-experimental method that attempts to reduce the bias of treatment-effect estimates from observational studies. In these types of studies, participants have not been randomly assigned to treatment or control group, which can be a common scenario in many evaluations. This presentation will focus on the utility of propensity score matching in a program evaluation context. Specifically, two methods of matching will be illustrated: one-to-one matching and propensity grouping or strata matching.
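As a concrete illustration of the two approaches named in the abstract, the sketch below (in Python, with hypothetical variable and column names; it is not the presenters' code) estimates propensity scores with a logistic regression and then performs one-to-one nearest-neighbor matching and quintile-based strata matching ("propensity grouping").

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_scores(df, treat_col, covariates):
    # Estimate P(treatment | covariates) with a logistic regression.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df[treat_col])
    return model.predict_proba(df[covariates])[:, 1]

def one_to_one_match(df, treat_col, score_col="pscore"):
    # For each treated case, find the control case with the closest score.
    treated = df[df[treat_col] == 1]
    control = df[df[treat_col] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(control[[score_col]])
    _, idx = nn.kneighbors(treated[[score_col]])
    return treated, control.iloc[idx.ravel()]

def strata_match(df, score_col="pscore", n_strata=5):
    # Propensity grouping: assign each case to one of n_strata score strata;
    # treated and control outcomes are then compared within each stratum.
    return pd.qcut(df[score_col], q=n_strata, labels=False)

Outcome comparisons would then be run on the matched pairs or within strata, and checks of covariate balance remain essential under either approach.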
A Comparison of Genetic Matching and Propensity Score Matching Methods for Covariate Adjustment in a Reading Intervention Program Evaluation
Presenter(s):
Ning Rui, Research for Better Schools, rui@rbs.org
Debra Coffey, Research for Better Schools, coffey@rbs.org
Abstract: Considerable attention has been given to various matching techniques that adjust for baseline covariate imbalances in order to properly estimate the impact of a program under evaluation. However, little is known about whether these techniques reliably provide accurate and consistent estimates of the treatment effect. Drawing upon two years of experimental data about a comprehensive reading intervention program from a large southern school district, this paper presents a case study comparing genetic matching, a non-parametric matching technique that applies an innovative search algorithm to assign a weight to each covariate (Sekhon and Grieve, 2009), with traditional propensity score matching as well as regression-based methods. Both propensity score and genetic matching significantly improved baseline covariate balance. Neither multiple regression nor propensity score analysis detected a statistically significant impact on student achievement. Surprisingly, genetic matching overturned this conclusion and detected statistically significant yet model-dependent results about the program impact.

Session Title: A Radically Different Approach to Evaluator Competencies
Multipaper Session 800 to be held in PRESIDIO B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Jane Davidson,  Real Evaluation Ltd, jane@realevaluation.co.nz
Discussant(s):
Michael Scriven,  Claremont Graduate University, mjscriv1@gmail.com
Rodney Hopson,  Duquesne University, hopson@duq.edu
Privileging Culture and Cultural Competence in an Evaluator Competency Framework
Presenter(s):
Nan Wehipeihana, Research Evaluation Consultancy Limited, nanw@clear.net.nz
Abstract: In Aotearoa New Zealand, being Maori is ‘privileged’ in the commissioning and selection of evaluators for evaluations that have a primary focus on Maori. It is generally not considered appropriate for non-Maori evaluators to lead Maori-focused evaluations. More broadly, quality in evaluation and being a ‘good’ evaluator mean being able to engage with, collect, and analyze data from within the cultural context, be it Maori, Pasifika, or another cultural setting. This paper will discuss a number of key issues grappled with in the development of anzea’s evaluator competency framework, including the balancing of indigenous values with the values of other cultures, and the context of indigenous rights, the Treaty of Waitangi, and concerns about the privileging of worldviews.

Session Title: Use of Administrative Data for Management and Policy Making
Multipaper Session 801 to be held in PRESIDIO C on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Stephen Magura,  Western Michigan University, stephen.magura@wmich.edu
Workforce Challenges in Behavioral Healthcare: A Model Approach to Gathering Systematic Information About Staffing Problems Faced by State Agencies, Programs, and Staff
Presenter(s):
John Hornik, Advocates for Human Potential, jhornik@ahpnet.com
Jenneth Carpenter, Advocates for Human Potential, jcarpenter@ahpnet.com
Jeanine Hanna, Advocates of Human Potential, jhanna@ahpnet.com
David Wright, Oklahoma Department of Mental Health and Substance Abuse Services, dwright@odmhsas.org
Lorrie Byrum, Oklahoma Department of Mental Health and Substance Abuse Services, lbyrum@odmhsas.org
Abstract: Although state human service agencies (i.e., mental health, substance abuse, corrections, child welfare, juvenile justice, Medicaid, and health) operate and fund many behavioral health programs, they rarely have detailed workforce information. With the exception of state employees, they do not have data on staff turnover and retention rates, difficulties in recruitment, salaries and benefits, training in core competencies and evidence-based practices, and job satisfaction. This and other related information is necessary for planning to assure a stable, competent workforce. The challenge is to fill this information gap in a reliable and efficient way. We undertook a multi-level, web-based survey of organizations, program managers, and direct care staff providing mental health and substance abuse services in Oklahoma. We also made extensive use of several secondary data sources. We found high levels of staff turnover and vacancies, low levels of compensation, and various training needs across six types of behavioral healthcare positions.
Community Data Collection Systems: Gaps and Recommendations
Presenter(s):
John Carnevale, Carnevale Associates LLC, john@carnevaleassociates.com
Beverlie Fallik, United States Department of Health and Human Services, beverlie.fallik@samhsa.hhs.gov
Abstract: This proposed study will review local-level data collection and analysis processes that provide community organizations and local governments with important information about the scope and scale of community substance abuse and related factors. This special topic study on the existing data gap can assist Federal agencies with policy decisions and can offer ways to provide Federal, state, and community agencies with additional, more proximal, and timely performance data. It will examine local communities and coalitions which have demonstrated success in developing surveillance and indicator systems with minimal resources. The final product will provide guidance for communities seeking to maximize the effectiveness of their efforts to develop substance abuse related indicators and track them over time.
A Web Survey for Drug Treatment Service Costing
Presenter(s):
Laura Dunlap, RTI International, ljd@rti.org
Gary Zarkin, RTI International, gaz@rti.org
Natalie Hodges, RTI International, nhodges@rti.org
Abstract: With scarce resources, substance abuse treatment providers are increasingly required to show that the services they provide are a good investment. Service-level costing helps satisfy this demand by providing detailed cost information; when combined with outcomes data, service cost estimates also inform cost-effectiveness comparisons for specific treatment services, which can help with decisions regarding the best allocation of resources across different treatment options. In response to the growing popularity and utility of Web surveys, we have developed a web-based cost survey (the Substance Abuse Services Cost Analysis Program [SASCAP]) that capitalizes on the advantages of a web-based platform, specifically lower costs and less time associated with survey implementation and data collection. However, survey research suggests that the mode of survey implementation may affect data quality. In this study we evaluate two alternative survey methods—the Web SASCAP survey against its established paper-and-pencil version—for estimating service-level costs in treatment programs.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Standardizing Literacy Data Analyses and Reporting Across Multiple Instruments and Grades
Roundtable Presentation 802 to be held in BONHAM A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Ashley Kurth, City Year Inc, akurth@cityyear.org
Gretchen Biesecker, City Year Inc, gbiesecker@cityyear.org
Abstract: City Year unites more than 1,500 17- to 24-year-olds for a year of full-time service in over 19 urban school districts. In 2008, City Year established a more standardized model of school service (Whole School Whole Child) to address the academic, social, and emotional needs of children in their school environment. Working with school personnel to differentiate instruction using data, City Year volunteers tutor and mentor students across grades 3-9, and one focus of their work is literacy. A challenge to collecting and using student-level literacy performance data for formative and summative purposes is the diversity of assessments and benchmarks used across, and even within, districts. Districts, evaluators, and large-scale organizations, including AmeriCorps, face this challenge in addressing student needs and reporting outcomes. In this roundtable, we will share ways we have standardized data collection and reporting and elicit feedback and discussion from others struggling to standardize data across sources.
Roundtable Rotation II: Overcoming Data Quality Challenges to Evaluation of School-based Programs
Roundtable Presentation 802 to be held in BONHAM A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Lisa Garbrecht, EVALCORP, lgarbrecht@evalcorp.com
Shanelle Boyle, EVALCORP, sboyle@evalcorp.com
Tronie Rifkin, EVALCORP, trifkin@evalcorp.com
Mona Desai, EVALCORP, mdesai@evalcorp.com
Abstract: One of the biggest obstacles to evaluating school-based programs is obtaining quality data from schools and students. This roundtable will discuss challenges to collecting school and student data and how partnerships with programs and school districts can be developed and utilized to overcome those challenges. Presenters will share case examples of effective data collection and evaluation methods used with three diverse programs in California, including the Woodcraft Rangers on-site K-12 after-school program in Los Angeles County, the Murrieta Valley Unified School District Breakthrough Student Assistance Program in Riverside County, and the Project REACH off-site after-school program in San Diego County. In addition, attendees will have the opportunity to share their challenges and experiences evaluating school-based programs and engage in a discussion about cultivating partnerships and other effective strategies for providing quality evaluations for a range of school-based programs.

Session Title: Evaluating Family and Community Change
Multipaper Session 803 to be held in BONHAM B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Joanne Carman,  University of North Carolina at Charlotte, jgcarman@uncc.edu
Capturing Community-level Outcomes Through Responsive Evaluation: An Example of a New York City Settlement House
Presenter(s):
Elizabeth Coker, Lenox Hill Neighborhood House, lcoker@lenoxhil.org
Abstract: Settlement Houses are unique among non-profit organizations for their ties to a specific historical movement that emphasized social solidarity with a focus on immigrants and the urban poor. While the details of their practices have changed with the social and political climate, settlement houses continue to provide vital community services in urban areas around the world. As Fabricant and Fisher have argued, however, what has changed settlement houses the most in recent decades is “[t]he new contractually defined structure of services [that] offers less space and time and fewer rewards to accomplish what is most basic to the provision of services: The building of relationships” (2002, p. 237). The present paper will explore how research and evaluation have been used to revive and support the historical mandate of one of the original settlement houses in New York City, and to help preserve its focus on community renewal and relationships.
Uniting for Community Change: Lessons from Charlotte, North Carolina
Presenter(s):
Joanne Carman, University of North Carolina at Charlotte, jgcarman@uncc.edu
Abstract: This paper presents the findings of a process evaluation of the United Agenda for Children, a community-based initiative that helped to identify a community-wide consensus about service priorities for children in Charlotte, NC. In 2004, more than 1,000 people participated in a community-wide meeting about the safety, health, and education of the region’s children. From this unprecedented gathering, 14 priorities were identified, and a coalition of nonprofit organizations, foundations, citizens, civic leaders, corporations, and public agencies was charged with uniting in an effort to ensure a positive future for all youth. Using data gathered from focus groups, interviews, program records, and media reports, this paper examines the initiative’s coverage (who participated), components (operations at each stage), participant feedback (how well the initiative met participant expectations), and short-term outcomes (results). The findings are intended to help us understand how to improve the implementation and success of these initiatives.
Getting to the Core: A Multi-site Mixed Methods Evaluation of Neighbor-to-Neighbor Helping, Also Known As Neighboring
Presenter(s):
Brandee Menoher, Points of Light Institute, bmenoher@pointsoflight.org
Colleen Kassouf Mackey, Points of Light Institute, cmackey@pointsoflight.org
Abstract: Points of Light Institute (POLI) has embraced Neighboring, a strategy of informal volunteering or neighbor-to-neighbor helping to strengthen families and communities, as a grant-making mechanism funded by the Annie E. Casey Foundation for nearly 10 years. This asset-based approach focuses on authentic engagement of residents to empower them to be the change they seek in their neighborhood. Historically, POLI required minimal reporting procedures for grantees. Evaluation comprised self-reported activities and outputs, which made it difficult to identify program characteristics, targets, outcomes, and program sustainability factors. In 2009, POLI addressed evaluation quality through a summative evaluation of five former grantees paired with a real-time assessment of current grantees and of POLI training and technical assistance offerings. This paper presentation will cover how participant-based data added to knowledge on Neighboring; challenges experienced collecting data from multi-site, low-income, and at-risk populations; and evaluation capacity building efforts POLI evaluators offered.
An Evaluation of the Whole Child Website: Lessons Learned From a Low-Budget Community Focus Group
Presenter(s):
Breanne Porter, Florida State University, bep07e@fsu.edu
Mercedes Nalls, Florida State University, mercedes.nalls@gmail.com
Abstract: Whole Child is a web-based referral agency that connects parents from diverse backgrounds with services in their community. Their online family needs assessment and resource guide is a one-stop referral program. Parents use the resource guide to learn about local resources and complete a profile on which their referrals are based. We conducted a focus group and usability analysis to gain information on parents' experiences with the website, including the resource guide, the profile, and the six dimensions used to categorize diverse family needs. Parents were given time to explore the website on their own, complete surveys, and give feedback in a group discussion. In addition to reporting results, this presentation describes the difficulties encountered in conducting a low-budget, community, non-profit focus group. This paper presents lessons learned involving setting clear expectations and roles, improvisation in the face of technical difficulties, and practical issues such as childcare for participating families.

Session Title: Practical Issues in Educational Measurement and Assessment
Multipaper Session 804 to be held in BONHAM C on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Wendy DuBow,  National Center for Women and Information Technology, wendy.dubow@colorado.edu
Discussant(s):
Emily Lai,  University of Iowa, emily-lai@uiowa.edu
What Teachers Need in Terms of Results Versus What Is Commonly Reported: Some Standard Deviations?
Presenter(s):
Guido Gatti, Gatti Evaluation Inc, gggatti@gattieval.com
Katya Petrochenkov, Gatti Evaluation Inc, katyap@gattieval.com
Abstract: The authors surveyed 109 teachers about what answers they require from educational research and how they need those answers reported to them. Teachers as practitioners are chiefly interested in how they can best implement a program, how many of their students will benefit, and how much students will benefit. In digesting information about 'how much' and 'how many', teachers have little knowledge of, or use for, traditional effect size statistics such as Cohen's d. Even when it was described in teacher-friendly terminology, only 34% were familiar with Cohen's d, only 24% thought it useful, and only 41% found it understandable. In contrast, teachers found simple comparisons in metrics familiar to them (e.g., number of questions correct, grade equivalence, percent meeting standard or above average) most understandable (i.e., 90%) and useful (i.e., 73%). In the authors’ view, teacher-friendly measures should be reported alongside traditional statistics when reporting educational research results.
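To make the contrast concrete, the brief Python sketch below compares Cohen's d with a simple questions-correct difference of the kind teachers reported finding more useful; the scores are invented for illustration and are not data from the survey described above.

    import statistics

    program_scores = [38, 42, 45, 40, 44, 47, 41, 43]      # hypothetical scores out of 50
    comparison_scores = [35, 39, 36, 40, 38, 37, 41, 36]

    def cohens_d(a, b):
        # Standardized mean difference using the pooled standard deviation
        na, nb = len(a), len(b)
        va, vb = statistics.variance(a), statistics.variance(b)
        pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
        return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

    d = cohens_d(program_scores, comparison_scores)
    gain = statistics.mean(program_scores) - statistics.mean(comparison_scores)
    print(f"Cohen's d (traditional effect size): {d:.2f}")
    print(f"Average questions-correct difference (teacher-friendly): {gain:.1f}")

Both numbers summarize the same group difference; the second simply stays in a metric teachers already use.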
The Impact of Releasing Rubrics on Performance Assessment Scores
Presenter(s):
Ashlee Lewis, University of South Carolina, ashleealewis@hotmail.com
Min Zhu, University of South Carolina, helen970114@gmail.com
Xiaofang Zhang, University of South Carolina, jae2008@gmail.com
Abstract: Rubrics are commonly used as scoring tools in educational assessments. Although assessment specialists argue that releasing scoring rubrics to educators can improve student performance and teachers’ instructional practices, most studies have focused on scoring procedures and have not attempted to provide empirical evidence for the value of releasing rubrics to teachers. The study addresses that gap by examining the relationship between releasing rubrics to teachers and student performance task scores on a statewide arts assessment. Evaluators will conduct a longitudinal study across four years to investigate trends in scores before and after rubrics were released. To further contextualize score changes and to examine the impact of releasing rubrics on instructional practices, evaluators will collect qualitative information from teachers who administered the task. The study contributes substantially to literature on performance assessment and rubric use by investigating the assertion that releasing scoring rubrics can offer instructional benefits and improve student performance.
The Instructional Impact of the North Dakota State Accountability System: A Consequential Validity Study
Presenter(s):
Xin Wang, Mid-continent Research for Education and Learning, xwang@mcrel.org
Abstract: The purpose of this investigation is to examine both the intended and unintended consequences of the North Dakota state accountability system on curriculum change and local instruction, including the use of proper accommodation practices. A mixed-methods approach is used to gather both qualitative and quantitative data from an online survey and focus groups. Teachers and principals from 70 schools will complete an online survey about the implementation of the state assessment and accountability systems and their related impact on instructional practices. Focus groups of teachers and school administrators will verify and expand upon school personnel’s survey data regarding the extent of support for the new direction of education in North Dakota, whether the organization of school district curriculum has changed, and whether classroom teachers are aligning their instruction with the state standards. Initial findings from survey data and focus group protocols will be reported.
Defining Domains of Coaching Knowledge Using a Modified Delphi Process
Presenter(s):
John Sutton, RMC Research Corporation, sutton@rmcdenver.com
Beth Burroughs, Montana State University, bburroughs@math.montana.edu
David Yopp, Montana State University, dyopp@math.montana.edu
Abstract: With a growing interest in the use of instructional coaches to improve mathematics instruction in K-12 schools, this paper will share how a modified Delphi process was used to engage national coaching experts and practitioners in defining coaching knowledge. The eight knowledge domains identified and defined through this process represent the first agreed-upon definitions of coaching knowledge and are being used in a national research study funded by the National Science Foundation. The paper provides a detailed description of the three-step process, along with respondents' ratings of agreement and the full definitions.

Session Title: Quality Evaluation: Avoiding Hypocrisy by Formative Evaluation of Evaluation's Outcomes, Processes, and Costs
Think Tank Session 805 to be held in BONHAM D on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Presenter(s):
Sarah Hornack, American University, sarah.hornack@gmail.com
Discussant(s):
Brian Yates, American University, brian.yates@me.com
Jose Hermida, American University, hermidaj@gmail.com
Abstract: This Think Tank addresses the Presidential Theme of Evaluation Quality by asking, “Is evaluation worth it?” More specifically, does evaluation achieve the outcomes desired, at what cost, and how does evaluation compare to other paths to achieving similar outcomes with lower costs? We begin with the common hypocrisy of evaluations not being themselves evaluated. Then, three questions of meta-evaluation are raised in break-out groups: a) How should we judge the outcomes of an evaluation? b) How, and should, we judge the process of being evaluated? c) How, and should, we measure the costs of an evaluation? We reassemble the groups for summaries directed at answering d) how can we measure the cost-effectiveness or cost-benefit of evaluation? and finally e) how can one best conduct an evaluation within the constraints of limited resources while achieving the highest possible quality?

Session Title: Evaluating Climate Change: International Perspectives, Methods, and Estimation Techniques
Multipaper Session 806 to be held in BONHAM E on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Kara Crohn,  Research Into Action, kara.crohn@gmail.com
Is There An Impact of China’s Three Gorges Dam Project on Its Carbon Dioxide Emissions?
Presenter(s):
Jiaqi Liang, American University, jl3510a@student.american.edu
Abstract: This paper conducts an empirical program evaluation (covering 1980-2007) gauging the impact of China’s Three Gorges Dam on its CO2 emissions and thermal energy use, by testing both the substitution effect of Three Gorges on electricity generation from thermal energy sources and the mitigating effect of Three Gorges on carbon dioxide emissions. Since only the carbon dioxide emission data are systematically compiled and accessible, the report focuses on this particular greenhouse gas. The implementation of the project works as a natural experiment; in particular, a single interrupted time-series design is applied. The primary findings of the study indicate that the Three Gorges hydropower project falls short of policymakers’ expectations in terms of curbing carbon dioxide emissions. The electricity contribution of Three Gorges is complicated by macro environmental factors, the predominant thermal energy structure, and the uneven energy supply and demand across regions.
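As a rough illustration of a single interrupted time-series design of the kind named above, the Python sketch below fits a segmented regression to a simulated emissions series; the interruption year and all values are placeholders, not the study's 1980-2007 data.

    import numpy as np
    import statsmodels.api as sm

    years = np.arange(1980, 2008)                 # 1980-2007, matching the study period
    cutoff = 2003                                 # hypothetical interruption year, for illustration
    rng = np.random.default_rng(0)
    co2 = 2.0 + 0.15 * (years - 1980) + rng.normal(0, 0.2, years.size)  # simulated emissions series

    time = years - years[0]                               # underlying secular trend
    level = (years >= cutoff).astype(float)               # step change after the interruption
    slope = np.where(years >= cutoff, years - cutoff, 0)  # change in trend after the interruption

    X = sm.add_constant(np.column_stack([time, level, slope]))
    fit = sm.OLS(co2, X).fit()
    print(fit.params)   # intercept, pre-trend, level shift, post-interruption trend change

A level shift or trend change near zero after the interruption would be consistent with the paper's finding that the project fell short of expectations.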
Proposed Methodology to Evaluate Global Warming Impacts on a High Andean Ecosystem
Presenter(s):
Clemencia Vela, Independent Consultant, clemenvela@aol.com
Abstract: The present paper proposes a methodology to answer the question, “How is global warming affecting a paramo ecosystem?” It takes into consideration that temperatures decrease at higher altitudes and that different species or communities are adapted to different microclimates. Thus, the hypothesis proposed is that global warming is reflected in a temperature rise along the slope, affecting the vegetation at different altitudes. The methodology is to choose a national park containing a cone-shaped volcano with no human disturbance and to establish 10 x 10 m permanent sample plots at altitudes ranging from 4,635 to 3,861 meters above sea level (asl). Data would be collected yearly for at least ten years to compare species presence and abundance, measured as percentage of ground cover, and to record significant changes. In addition, data would be compared with the data reported in the scarce bibliography of 24-25 years ago.
Estimating Climate Change
Presenter(s):
Mende Davis, University of Arizona, mfd@u.arizona.edu
Owen Davis, University of Arizona, palynolo@geo.arizona.edu
Abstract: The estimation of climate change continues to come under rigorous scrutiny. This presentation provides a gentle introduction to the estimation of climate over time. Continuous historic records of temperature and rainfall have only been kept for 200 years. To estimate previous temperatures and rainfall over time, researchers must rely on proxy records such as ice cores, tree rings, corals, and other continuous deposits. To estimate past climate from a data source, multiple factors must be taken into account. The observed climate data will be composed of the climate signal, human impact, biases associated with the data source, other factors (solar variability, cyclic variations in the earth’s orbit), and random variations (error). We will demonstrate the estimation of past temperature and rainfall from one data source, taking multiple factors into account. The results from multiple data sources will provide the strongest estimates for past climates.

Session Title: Conceptualizing and Conducting Quality Peer Reviewed Portfolio Evaluations: Approaches and Lessons Learned From the Centers for Disease Control and Prevention (CDC)
Multipaper Session 807 to be held in Texas A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Sue Lin Yee,  Centers for Disease Control and Prevention, sby9@cdc.gov
Discussant(s):
Sue Lin Yee,  Centers for Disease Control and Prevention, sby9@cdc.gov
Maximizing Quality in a Portfolio Review of the National Center for Injury Prevention and Control’s (NCIPC) Core State Injury Program
Presenter(s):
Elyse Levine, Academy for Educational Development, elevine@aed.org
Sue Lin Yee, Centers for Disease Control and Prevention, sby9@cdc.gov
Derek Inokuchi, Academy for Educational Development, dinokuchi@aed.org
Angela Marr, Centers for Disease Control and Prevention, aiy4@cdc.gov
Abstract: The Core State Injury Program was initiated by CDC in 1997 to help states develop “core” capacity building and surveillance activities to prevent and control injuries. Since 2005, 30 states have been awarded seed money towards this objective. With the program nearing the end of a five-year cycle, CDC sought a portfolio review to inform future development of the initiative. The presentation will describe use of state progress reports, survey data, and interviews to assess the implementation of the Core State Injury Program and triangulate on factors associated with its success. Having these ingredients for quality evaluation was a strength going into the portfolio review. However, as with most complex programs, the difficulty lay in the details. We will briefly discuss the challenges of ensuring a quality evaluation in this context, including operationalizing measures, political sensitivities, and quality control measures instituted to analyze large volumes of qualitative and quantitative data.

Session Title: Successfully Managing Evaluation Projects: Quality Solutions to Common Project Management Challenges
Demonstration Session 808 to be held in Texas B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Kathy Brennan, Innovation Network, kbrennan@innonet.org
Veena Pankaj, Innovation Network, vpankaj@innonet.org
Abstract: It takes more than strong evaluation skills to successfully manage an evaluation project. This session looks at key management and consulting skills that are necessary for evaluation projects to be successful, including client management, budget management, contract management, and understanding the context of each evaluation engagement.

Session Title: Using Mixed Methods to Evaluate Program Implementation and Inter Agency Collaboration
Multipaper Session 809 to be held in Texas C on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the
Chair(s):
Sandra Bridwell,  Cambridge College, sandra.bridwell@go.cambridgecollege.edu
Discussant(s):
Michele Tarsilla,  Western Michigan University, michele.tarsilla@wmich.edu
Systems Approach and Mixed Methods in Evaluation Within a Collaborative Context
Presenter(s):
Jianglan White, Georgia Department of Community Health, jzwhite@dhr.state.ga.us
Dafna Kanny, Centers for Disease Control and Prevention, dkanny@cdc.gov
Abstract: This paper introduces a systems strategy for transforming evaluation into a collaborative and participatory process, with a special emphasis on how to engage program stakeholders in evaluation planning, implementation, and dissemination. Modeled on an application of the Model for Collaborative Evaluations (Collaborative Evaluations by Liliana Rodríguez-Campos, 2005) and on the CDC Framework for Program Evaluation (http://www.cdc.gov/eval/framework.htm), the paper provides practical, step-by-step suggestions on how to apply these conceptual approaches in real-world evaluation practice. It demonstrates a collaborative and mixed methods evaluation of a state-wide, multi-level obesity prevention initiative that addresses physical inactivity, poor nutrition, and obesity through policy and environmental support strategies for behavior change within a social-ecological context. In addition, the paper will introduce the cross-setting mixed data analysis methodologies (both quantitative and qualitative) employed in the evaluation, in particular how health-related objectives and outcomes, especially intermediate outcomes, were measured.
Assessing Collaborative Functioning: Identification of Factors Associated With Longevity and Perceived Effectiveness
Presenter(s):
Ann Peisher, University of Georgia, apeisher@uga.edu
Virginia Dick, University of Georgia, vdick@cviog.uga.edu
Amy Laura Arnold, University of Georgia, alarnold@uga.edu
Robetta McKenzie, Augusta Partnership for Children Inc, rmckenzie@arccp.org
Katrina Aaron, Augusta Partnership for Children Inc, kaaron@arccp.org
Don Bower, University of Georgia, dbower@uga.edu
Abstract: In an effort to reduce duplication and maximize resources, collaboration is often a requirement of grant funding; as a result, collaboratives are frequently formed for specific purposes and survive only for the duration of the funding period. Typically, real-world community issues and problems outlive the specific streams of funding and the associated collaborative; however, there are some exceptions to the short-lived collaborative. In the present study, the collaborative framework developed by Berstrom, Clark, Hogue, Iyechad, Miller, et al. (1995) is used as the theoretical basis of a mixed-method collaborative functioning assessment that examines the factors associated with a collaborative with over 20 years' tenure. Data and analysis included document review, a researcher-developed survey (N=94), and key informant interviews (N=8). There is significant congruence across the methods of assessment, and a surprising history of resource sharing is a probable key to the current level of commitment and continued cooperation.
Evaluating and Enhancing Inter-agency Collaboration and Project Outcomes Using Social Network Analysis Within a Mixed Methods Design
Presenter(s):
Debra Heath, Albuquerque Public Schools, heath_d@aps.edu
Jennifer Cross, Colorado State University, jeni.cross@colostate.edu
Abstract: This paper describes how Social Network Analysis is used within a mixed methods design to evaluate changes in school-community partnerships as well as the impacts of those changes on student health, safety and performance. Social network data are collected not only to measure changes over time in the structure of inter-agency relationships, but also to engage stakeholders in concrete discussions about collaboration, including what level of collaboration each partner needs to have with each of the others in order to achieve desired project outcomes. Numeric ratings of inter-agency linkages and narrative descriptions are collected in group sessions and via individual surveys and interviews. Results are depicted in visually powerful network diagrams that validate stakeholder efforts and reveal structural ‘holes’ that must be filled with new or expanded relationships. These methods provide highly reliable information about collaboration development and also energize stakeholders around concrete improvement plans.
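To make the network measures concrete, the small Python sketch below computes the kind of indices a social network analysis of inter-agency linkages might report; the agencies and linkage ratings are invented placeholders, not the project's data.

    import networkx as nx

    G = nx.Graph()
    ties = [("Schools", "Health Dept", 3), ("Schools", "Police", 2),
            ("Schools", "Nonprofit A", 1), ("Health Dept", "Nonprofit A", 2)]
    G.add_weighted_edges_from(ties)   # numeric ratings of inter-agency linkage strength

    print("Density:", nx.density(G))                       # overall connectedness of the partnership
    print("Betweenness:", nx.betweenness_centrality(G))    # agencies brokering across structural 'holes'
    print("Unconnected pairs:", list(nx.non_edges(G)))     # candidate gaps to fill with new relationships

Repeating such measures at two time points is one simple way to show stakeholders how the structure of collaboration has changed.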

Session Title: Evaluation of National Research and Development (R&D) Programs as a Tool for Increasing Efficiency of Public Finance
Multipaper Session 810 to be held in Texas D on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Boojong Gill,  Korea Institute of Science & Technology Evaluation and Planning (KISTEP), kbjok@kistep.re.kr
Trends in the Performance Evaluation System of the National R&D Program in Korea
Presenter(s):
Ho-Dong Lee, Ministry of Finance and Strategy, hdlee@mosf.go.kr
Jae-Young Kim, Ministry of Strategy and Finance, engpub@mosf.go.kr
Jae-Young Kim, Ministry of Strategy and Finance, jykim72@mosf.go.kr
Abstract: This presentation introduces trends in the performance evaluation of national R&D programs in Korea. Evaluation of national R&D programs is carried out through self/meta evaluation and in-depth evaluation. Trends in the performance evaluation of national R&D programs in Korea focus on qualitative excellence in addition to the efficiency of performance. In keeping with this trend, self/meta evaluation adopts an indicator of qualitative excellence with a weight of 10%. In-depth evaluation, on the other hand, focuses on checking and verifying the efficiency and success of the program and provides alternatives for program improvement when problems are found. Altogether, evaluation of national R&D programs is performed as a tool for better management of public finance, adopting methodologies for checking qualitative excellence and efficiency/effectiveness.

Session Title: Evaluators Thinking Evaluatively About Use: Tips for the Trade
Multipaper Session 811 to be held in Texas E on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Helene Jennings,  ICF Macro, helene.p.jennings@macrointernational.com
Reframing the Goals of an Evaluation During Program Dissolution: What Can Evaluation Offer?
Presenter(s):
Christine Doe, Queen's University at Kingston, christine.doe@queensu.ca
Michelle Searle, Queen's University at Kingston, michellesearle@yahoo.com
Lyn Shulha, Queen's University at Kingston, lyn.shulha@queensu.ca
Abstract: Current evaluation literature, in addition to focusing on judging worth and merit, emphasizes learning from and in evaluative contexts. A central purpose of program evaluation theory and practice is to contribute to a more refined understanding of complex social problems and the programs intended to address these problems. Evaluations can promote learning in many ways; this paper explores what evaluations can offer the process of downsizing a program. This area is relatively undocumented in recent literature, yet program dissolution is, regrettably, inevitable in some contexts. This case study explores the uses of an evaluation during program dissolution to understand ways to reduce a program's staff and clientele while maintaining some key program elements for possible restructuring. Preliminary results suggest that when terminating a program, an evaluation that emphasizes use has the potential to maintain and strategically determine the critical structures and processes that allow a program to go dormant but not to disappear.
Using Evaluation Data for Secondary Independent Research: Lessons From Two Studies
Presenter(s):
Kari Nelsestuen, Education Northwest, kari.nelsestuen@educationnorthwest.org
Caitlin Scott, Education Northwest, caitlin.scott@educationnorthwest.org
Theresa Deussen, Education Northwest, theresa.deussen@educationnorthwest.org
Abstract: Evaluation is not synonymous with basic research, although they share similar methods and rules of evidence. Because of these shared methods, evaluations often collect data that initially appear ripe for secondary analyses for research purposes. However, using evaluation data for secondary research analyses can be challenging because these data were not collected to answer traditional research questions. The challenges of using evaluation data to address research questions are described in this paper. The paper draws on two research studies based on secondary analyses of survey and interview data from six evaluations. As other evaluators may find with their own evaluation data, the opportunity for secondary analyses initially appeared very promising and eventually led to publication. However, the nature of the data collected for evaluation purposes led to several challenges and methodological limitations in the secondary research studies.
Evaluator Perceptions of Process Use
Presenter(s):
Lennise Baptiste, Kent State University, lbaptist@kent.edu
Abstract: There are two aspects to the problem of not having a working definition of process use. The first is that evaluators are still unable to say definitively which examples of stakeholder behavior are in fact illustrations of process use. Second, even if evaluators can correctly identify process use, they need to be sensitized to its presence in their work settings in order to strengthen the recommendations they make for stakeholders. The presenter will share the findings of a study of evaluators’ perceptions of process use after they reviewed 33 examples of stakeholder behavior. These examples were taken from reports of evaluations that ranged from low to high stakeholder involvement. These findings can contribute to building the construct validity of process use.
Federal Mandates and Guidelines and How They Impact Program Evaluation
Presenter(s):
Andrea Wood, Western Michigan University, andrea.s.wood@wmich.edu
Rashell Bowerman, Western Michigan University, 
Gary Miron, Western Michigan University, gary.miron@wmich.edu
Patricia Moore, Western Michigan University, patricia.a.moore@wmich.edu
Abstract: This paper presents the findings from an exploratory study of evaluation practices involving federally funded programs and projects. The study examines the nature of evaluation processes within the context of federally funded grants to identify the associated challenges and the opportunities for evaluator growth. Surveys and interviews are used to collect data from program and project directors and principal investigators. The key question examined in the study is: What are the positive and negative implications of federal requirements for evaluation? Mandated guidelines for evaluations from federal agencies have the potential to restrict or enhance the work of the evaluator at every stage of evaluation. On the positive side, these guidelines are likely to increase the number of evaluations that are undertaken. On the negative side, they may narrow the focus of the evaluation and restrict the utility and purpose of evaluations.
Emerging Concepts and Tools for Developing and Assessing Evaluation Capacity of Educational Networks and Partnerships: An Iterative Approach
Presenter(s):
Ed McLain, University of Alaska, Anchorage, afeam1@uaa.alaska.edu
Susan Tucker, Evaluation & Development Associates, sutucker1@mac.com
Abstract: Building the capacity of school-based teams and larger networks to use improvement-oriented evaluation methodologies across diverse contexts, while exhorted by funding agencies, is rarely evaluated. The authors have been engaged in network capacity building since 2004 as part of a federally funded USDE Teacher Quality Enhancement (TQE) grant. Grounded in the context of nine Alaskan high-need urban and rural districts experiencing a crisis in attracting (and holding) quality teachers, this session will focus on demonstrating methods and tools for sustainable data teaming and evaluation use. Participants will gain a clearer understanding of the indicators of successful teaming and network development between the university and districts. Finally, we present a network development evaluation matrix.

Session Title: Research on Participatory Evaluation
Multipaper Session 812 to be held in Texas F on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Michael Szanyi,  Claremont Graduate University, michael.szanyi@cgu.edu
A Systematic Map of the Empirical Literature on Participatory Evaluation in Relation to Evaluation Use: Toward Reality Testing
Presenter(s):
Pierre-Marc Daigneault, University of Laval, pierre-marc.daigneault.1@ulaval.ca
Abstract: This presentation studies the relationship between participatory evaluation (PE) and evaluation use through a scoping study of the empirical literature. A scoping study is “a technique to ‘map’ relevant literature in the field of interest” (Arksey & O’Malley, 2005, p. 20) that stands somewhere between traditional literature reviews and systematic reviews. It uses systematic and transparent techniques to search for studies, determine their relevance, and extract data, but it does not assess study quality or synthesize their findings. The results will be addressed in terms of relevant dimensions such as the number of studies identified, research questions and methodology, conceptualization and measurement of variables, context (country, policy sector, level of analysis), and findings. The objectives are twofold: (1) to catch a glimpse of the main findings on the relationship between PE and evaluation use, and (2) to identify gaps in the empirical literature in order to guide future inquiry on evaluation.
An Overview of the Empirical Studies of Stakeholder Involvement in Program Evaluation
Presenter(s):
Landry Fukunaga, University of Hawaii, lfukunag@hawaii.edu
Paul Brandon, University of Hawaii, brandon@hawaii.edu
Abstract: The interaction of evaluators with program stakeholders for the purpose of improving evaluations has a long history in program evaluation. Evaluators have reported involving stakeholders to enhance the use of evaluation findings, enhance evaluation validity, improve stakeholders’ evaluation capacity, help ensure social justice, and empower stakeholders. Broad summaries of this participation have been reported, but to our knowledge, no comprehensive overview of published studies that report stakeholder involvement has been published. We address this deficit in this paper in a review of 181 studies that we identified in a comprehensive search for studies about stakeholders in evaluation published since 1985. We summarize the purposes, methods, and effects of stakeholder involvement in the studies, the professions or disciplines targeted, and the methods for collecting data about stakeholder involvement. Finally, we arrive at conclusions about the strength of the evidence about the breadth and depth of stakeholder involvement.
An Investigation of the Relationship Between Participatory Evaluation and Use of Evaluation in Three Multi-site Evaluations
Presenter(s):
Denise Roseland, University of Minnesota, rose0613@umn.edu
Abstract: This dissertation research describes an investigation that explores the nature of the relationship between participation in evaluation and the use of evaluation findings and processes within three large-scale multisite evaluations. The purpose of this study is to test whether assumptions and theories about participation translate into evaluation use in the same ways as seen in single evaluation sites. Using canonical correlation analysis and a collection of 20 interviews, this paper describes and tests the relationship between these two critical conceptual powerhouses in evaluation. Using data that were collected as part of the NSF-funded research “Beyond Evaluation Use” (Lawrenz & King, 2009), this study found that some theories and beliefs about participatory evaluation contribute to use and influence in ways similar to single-site evaluations. The differences identified in this research highlight potential planning and implementation considerations that might allow multi-site evaluators to enhance the use and influence of multi-site evaluations.
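As an illustration of the analytic approach named above, the short Python sketch below runs a canonical correlation analysis on synthetic data; the variable sets and values are placeholders standing in for the participation and use measures the study actually analyzed, not the “Beyond Evaluation Use” data.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(1)
    participation = rng.normal(size=(120, 3))     # e.g., depth, breadth, control of involvement
    use = 0.5 * participation[:, :2] + rng.normal(scale=0.8, size=(120, 2))  # e.g., instrumental, conceptual use

    cca = CCA(n_components=2).fit(participation, use)
    U, V = cca.transform(participation, use)      # canonical variate scores for each set
    for i in range(2):
        r = np.corrcoef(U[:, i], V[:, i])[0, 1]
        print(f"Canonical correlation {i + 1}: {r:.2f}")

Each canonical correlation summarizes how strongly a weighted combination of participation measures relates to a weighted combination of use measures.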
Boldly Going Inward, Outward, and Forward: Studying (How to Study) the Intersection of Theory and Practice in Evaluation
Presenter(s):
Jeehae Ahn, University of Illinois at Urbana-Champaign, jahn1@illinois.edu
Abstract: As a set of principles, values, and assumptions about what evaluation is, what it can and should accomplish, and what it means to be an evaluator, evaluation theory can serve as a powerful guiding tool that can usefully inform important dimensions and dynamics of one's evaluation practice, from conceptualizing particular evaluation purposes to selecting certain methodological procedures of data collection, analysis, interpretation, and reporting. This paper explores ways to systematically “study” this organic conjunction between evaluation theory and practice in “real time” and on the ground, as an integral part of one’s evaluation work, as well as in connection to evaluation quality. Interweaving stories and examples along the way, the paper discusses specific built-in processes for examining the structure, process, and character of how one’s theory converses with one’s practice (and vice versa), with the aspired end result being enriched evaluation theory and improved evaluation practice.

Session Title: Evaluating Advocacy Efforts in Cooperative Extension and Other Outreach Organizations
Skill-Building Workshop 813 to be held in CROCKETT A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Allison Nichols, West Virginia University Extension, ahnichols@mail.wvu.edu
Teresa McCoy, University of Maryland, tmccoy1@umd.edu
Florita Montgomery, West Virginia University Extension, florita.montgomery@mail.wvu.edu
Abstract: This workshop will focus on helping professionals learn skills in designing evaluation protocols for advocacy efforts. It will begin with defining advocacy and determining what kinds of work in Extension or other outreach efforts can be considered advocacy. The presenters will discuss what is different and what is alike in typical program evaluations versus advocacy evaluation. Participants will then be asked to brainstorm, within small groups, the design of advocacy logic models focused on 1) teaching or capacity-building efforts; 2) service, such as network formation, relationship building, or organization; 3) policy-change efforts; and 4) research focused on understanding the impact of the effort. From the logic models, participants will be asked to suggest new and innovative methods for measuring the success of advocacy efforts.

Session Title: Experiencing Quality in Evaluation Training in Brazil and Ethiopia
Multipaper Session 814 to be held in CROCKETT B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Thereza Penna Firme,  Cesgranrio Foundation, therezapf@uol.com.br
Resistance and Adoption: Challenges in Changing Public Health Professionals Into Evaluators in Situation
Presenter(s):
Elizabeth Moreira dos Santos, Oswaldo Cruz Foundation (Fiocruz), bmoreira@ensp.fiocruz.br
Marly Cruz, National School of Public Health (ENSP/Fiocruz), marly@ensp.fiocruz.br
Pedro Paulo Chrispim, National School of Public Health (ENSP/Fiocruz), chrispim@ensp.fiocruz.br
Ana Reis, National School of Public Health (ENSP/Fiocruz), anareis@ensp.fiocruz.br
Ana Roberta Pascom, National Aids Program, Brazil, ana.roberta@aids.gov.br
Abstract: The Masters in Health Evaluation is an innovative experience in M&E capacity building organized by the Escola Nacional de Saúde Pública Sergio Arouca (ENSP/FIOCRUZ) in Brazil. Its goal is to build capacity among health professionals to evaluate programs to control endemic processes in the country. Social, historical, and technical-operational dimensions are considered, based on communication, ongoing education, and evaluation knowledge production, to transform government attitudes regarding its programs. The course is conducted with a dynamic pedagogical approach, oriented by several learning objectives and through a methodology based on real cases and problems. The curriculum combines learning and practice, defined by different profiles and skills, so that a new kind of evaluator is formed. Up to now, 19 masters students have concluded the program and there are 32 current students. All of them develop evaluations to answer major questions of the public health sector in Brazil.

Session Title: Quality and International Evaluation
Think Tank Session 815 to be held in CROCKETT C on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Ross Conner, University of California, Irvine, rfconner@uci.edu
Alexey Kuzmin, Process Consulting Company, alexey@processconsulting.ru
Discussant(s):
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Thomaz Chianca, COMEA Evaluation Ltd, thomaz.chianca@gmail.com
J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
Abstract: In recent years, evaluation has spread around the globe, with evaluation activities and organizations underway in every region and most nations. There are references to ‘international evaluation’ in many documents, and there are groups and networks focused on this area. This increase in evaluation beyond and across national borders raises the question of evaluation quality in the international context. The session will begin with a brief historical review of the development of international evaluation and an overview of some salient aspects of it. The session will then shift to small-group discussions among attendees to consider answers to the following questions: 1. What makes an evaluator ‘international’? 2. What makes an evaluation ‘international’? 3. What determines quality of ‘international evaluation’ as opposed to evaluation in general? At the end, four discussants experienced in international evaluation will synthesize the comments from the small groups and share their thoughts.

Session Title: Development and Selection of Frameworks and Constructs for Disaster Preparedness and Recovery Evaluation
Multipaper Session 816 to be held in CROCKETT D on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Scott Aminov,  Food For The Hungry, saminov@fh.org
Discussant(s):
Patricia Bolton,  Battelle Memorial Institute, bolton@battelle.org
Homeland Security: Evaluating With Management System Standards
Presenter(s):
Sharon Caudle, Texas A&M University, scaudle@bushschool.tamu.edu
Abstract: Almost ten years after the 2001 terrorist attacks, difficulty remains in setting homeland security preparedness goals. Evaluators charged with systematically assessing homeland security actions and results face difficulties in gathering evidence, demonstrating results, and posing recommendations for further improvement. Drawing on relevant evaluation literature, this AEA paper presents an evaluative framework based on management system standards covering homeland security, societal security, disaster management, and business continuity. Standards are generally defined as a uniform set of measures, agreements, conditions, or specifications that establish benchmarks for performance, such as the ISO 9000 quality management standard. The paper specifically illustrates such an evaluation approach and its strengths and weaknesses, drawing on examples from mission areas such as transportation security, border control, counter-terrorism, and emergency management.
Tribal Disaster Preparedness Assessment: Assessing the Competency, Capacity, and Capability Needs of American Indian Nations
Presenter(s):
Lisle Hites, University of Arizona, lhites@uab.edu
Jessica Wakelee, University of Arizona, wakeleej@email.arizona.edu
Abstract: While disaster emergency preparedness needs assessments are periodically conducted for most regions, counties, and communities within the United States, it is much less common for such assessments to be focused on Tribal Nations. This session will present a needs assessment tool that was developed specifically for assessing the needs of American Indian communities, discuss the development, implementation, and results of the assessment, and share lessons learned.
How Smarter Monitoring and Evaluation (M&E) Can Improve Disaster Recovery: A Critical Examination of Performance Accountability Frameworks
Presenter(s):
Margaret Stansberry, American Red Cross, margaret.stansberry@ifrc.org
Abstract: International as well as domestic aid agencies are rightly being pressured to demonstrate the difference they make with donated resources in disaster relief and development work. Due, in part, to this pressure, performance accountability frameworks are becoming more widely used. However, in attempting to apply such frameworks, many organizations focus only on common indicators and on reporting results to donors, which can straitjacket critical thinking and innovation; such a narrow focus misses opportunities for downward accountability and learning. If properly designed, performance accountability frameworks can encourage greater coordination, reduce project-level M&E costs, and help ensure accountability both downwards and sideways. This paper examines the lessons learned from three disparate accountability frameworks: TRIAMS by IFRC/WHO/UNDP, Katrina Aid Today by UMCOR, and the Tsunami Recovery Program by the American Red Cross. It then makes recommendations for donors and implementers alike on the necessary conditions and components of a successful system.

Session Title: Reflections of Emerging Professionals: The Culturally Responsive Path Ahead
Panel Session 817 to be held in SEGUIN B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Jill Jim, Independent Consultant, jilljim2003@hotmail.com
Discussant(s):
Pauline Brooks, Independent Consultant, pbrooks_3@hotmail.com
Abstract: The guiding principles of the American Evaluation Association call on evaluators to “understand and respect differences, such as differences in culture” and “account for potential implications of these differences” when moving through the cycle of evaluation. Despite the recognized importance of culturally responsive evaluation, much of the field remains unsure about the path an evaluator must take to develop and employ these skills. In February 2010, four emerging evaluators from historically underrepresented minority groups completed a one-year training and skill-building fellowship utilizing culturally responsive evaluation as part of the Robert Wood Johnson Foundation Evaluation Fellows Program. At agencies across the country, these emerging evaluators sought to engage their work and surroundings through culturally sound methodology. This panel presentation offers insights into the experiences of these emerging evaluators, describes their successes and challenges in applying a cultural lens to their work, and offers suggestions for addressing culture in evaluative practice.
Seeing the Iceberg Under the Surface: Reflections on Developing as a Culturally Responsive Evaluator
Katrina Ellis, University of Michigan, kahe@umich.edu
Katrina Ellis was a Fellow in the RWJF Evaluation Fellowship Program. She completed her fellowship at HighScope Educational Research Foundation in Ypsilanti, Michigan and is currently continuing her work there as a Project Coordinator. In this role, she conducts research and evaluations of early childhood programming. Prior to this work, she participated in evaluations of health services as a Peace Corps volunteer in Fiji. Ms. Ellis earned an M.P.H. and an M.S.W from the University of Michigan in 2008. She is a Senior Fellow in the Melton Foundation, an organization devoted to cross-cultural exchange and collaboration. Ms. Ellis has strong interests in the influence of cultural and social factors on the health of minority and marginalized populations and in participatory research and evaluation methods within these populations. She will begin doctoral study at the University of Michigan School of Public Health in the Fall of 2010.
Cultural Competence in Philanthropy: Reflections From an Emerging Professional
Summer Jackson, Independent Consultant, snjackson22@gmail.com
Summer Jackson has a Master of Arts in Clinical Psychology from Roosevelt University. She brings a combination of practical and social science research experience to the field of evaluation. Ms. Jackson recently completed the Robert Wood Johnson Foundation Evaluation Fellowship. During her time as a fellow she worked with the evaluation team at the David and Lucile Packard Foundation as an internal evaluator to increase the evaluation capacity of internal staff and other stakeholders throughout the Foundation. Her professional experience also includes work as an individual and family therapist, anger management group facilitator, and research assistant on various national social science projects. Ms. Jackson‘s training includes methods for strategic learning, culturally responsive evaluation, and organizational capacity building. In addition to her experience in these areas, her interests include capacity building in communities of color, issues of social equity, and evaluation in the field of philanthropy.
Perceptions and Education for Researchers to Work With American Indian Groups to Improve Quality Evaluation Research
Jill Jim, Independent Consultant, jilljim2003@hotmail.com
Jill Jim was a Research Fellow in the RWJF Evaluation Fellowship Program. She completed her fellowship at the Amherst H. Wilder Foundation in Saint Paul, Minnesota. Jill holds a Master of Public Health and a Master of Healthcare Administration from the University of Utah. Jill provided research and evaluation for a range of projects related to public health, health care, and education at Wilder Research. She also worked on projects for American Indians during her time at Wilder Research. She is currently working with a diabetes program for the Navajo Area Indian Health Service in Window Rock, Arizona. Jill’s experience and interests include culturally responsive evaluation, building evaluation capacity, public health, family and youth wellness, health disparities, and American Indian/Alaska Native issues.
I Have Needs Too: Culturally Responsive Evaluation Meeting the Needs of Nonprofits and Evaluators Alike
Domingo Moronta, St Barnabas Hospital, domingomoronta@gmail.com
Domingo José Moronta is a first-year alum of the RWJF Evaluation Fellowship program, where he was placed with the non-profit evaluation and capacity building firm the OMG Center for Collaborative Learning. Following his experiences as a fellow, Domingo has recently accepted the position of Director of Teen Leadership for the St. Barnabas Teen Health Center's Community Based Adolescent Pregnancy Prevention program. This work falls in line with his academic background, an MPH in Community Public Health from NYU's Steinhardt School of Culture, Education and Human Development. Domingo is also currently consulting on an evaluation of a 12-grantee diabetes care program funded by the New York State Health Foundation. He looks to continue culturally responsive evaluation in the field as a vital indicator of successful solutions to socio-economic issues in the Latino community and the implications of obesity.

Session Title: Evaluation Capacity Building in International Contexts
Multipaper Session 818 to be held in REPUBLIC A on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Creating a Developmental Framework Through Capacity Building and Quality Evaluation: A Country Focus Analysis on Nigeria and Japan’s Developmental State (1860s-1970s)
Presenter(s):
Olanrewaju Olaoye, Tiri, olaoyelanre@yahoo.com
Abstract: This paper aims to identify the possible factors that are prerequisites for having a quality evaluation structure, and the effective capacities that will establish a developmental framework in Nigeria, using evidence from Japan’s developmental state (1860s-1970s). An argument shall be made for the use of effective capacity building and the need to establish creative monitoring/evaluation teams that will recognize and develop planned and emergent approaches to mitigate obstacles to effective evaluations. Furthermore, the need for constitutional reform and the restructuring of the legal system among various institutions shall be emphasized. The model of Japan’s developmental state shall be examined, while necessary lessons shall be adapted and situated in Nigeria’s socio-political context.
Capacity Building in Monitoring and Evaluation: A Real Challenge for Afghan Ministry of Education, International Donors, and Evaluation Community
Presenter(s):
Mohammad Javad, University of Massachusetts, Amherst, mjahmadi@gmail.com
Sharon Rallis, University of Massachusetts, Amherst, sharonr@educ.umass.edu
Abstract: Strengthening the monitoring and evaluation (M&E) systems of the Ministry of Education (MoE) is a critical element of the ministry’s strategy, as articulated in the MoE strategic plan. Accordingly, the MoE, with World Bank support, has begun an ‘M&E capacity development’ initiative. This initiative faces important challenges, such as low institutional and personnel capacity, limited resources, lack of infrastructure, low demand inside the MoE, and a highly centralized administrative system. Considering these challenges and the importance of this initiative, this paper discusses the following questions: What approaches to evaluation will work in this situation? Logical framework versus participatory or utilization-focused? Quantitative versus qualitative? What are indicators of success for this initiative? Accountability versus improving performance? What are the best strategies for developing the capacity? What existing opportunities and strengths can the system use? What cultural and social factors must be considered in designing and implementing this initiative?
Evaluating Capacity Building of Supreme Audit Institutions
Presenter(s):
Kristin Amundsen, Office of the Auditor General of Norway, kristin.amundsen@riksrevisjonen.no
Jorild Skrefsrud, Office of the Auditor General of Norway, jorild.skrefsrud@riksrevisjonen.no
Tora Jarlsby, Office of the Auditor General of Norway, tora.jarlsby@riksrevisjonen.no
Abstract: Capacity building of Supreme Audit Institutions is a key factor for ensuring good governance, sound financial management and a transparent public sector. Strategic planning is a key capacity building measure and a pivotal instrument to increase the Supreme Audit Institution's capacity and ability to perform core functions. To ensure quality of impact evaluations of capacity building measures, it is important to assess whether or not the institution actually has increased its ability to perform its core functions. This may supplement the assessment of the outcomes of the specific capacity building measures or technical assistance. The paper will explore dilemmas and opportunities in measuring impacts of strategic planning in Supreme Audit Institutions. This will be elaborated through a case based on ongoing institutional cooperation between the Supreme Audit Institutions of Norway and a developing country.
Quality Evaluation and Capacity Building in the Public Service: Kenyan and Tanzania Experience
Presenter(s):
Karen Odhiambo, University of Nairobi, karenidhambo1@yahoo.co.uk
Abstract: This paper arises from experience gained while carrying out M&E mainstreaming and capacity building in the Kenya and Tanzania water sector on climate change and adaptation. The focus has been on Results-Based Management (RBM). The need for governments and development partners to account for and report on impact at the local level has resulted in demand for M&E within the public sector. However, the process almost always ends up disjointed and does not reflect M&E knowledge or its application. The situation is worse where community resilience is the intended impact. This has led to questioning the likelihood of advancing evaluation quality in Africa. The author argues that if the evaluation community is to seriously address evaluation quality, there is a need to go beyond quantitative expansion, or numbers trained, and content relevance, to the theoretical and organisational perspectives that arise. The author will share this experience.
Practical Roles for Evaluation Partners: Challenges to and Opportunities for Evaluation Quality When Evaluation Responsibilities are Shared With Clients
Presenter(s):
Patty Hill, EnCompass LLC, phill@encompassworld.com
Abstract: This paper provides examples of two international capacity-building projects in which the outside evaluation partner was engaged from the beginning, but in a role that shares evaluation responsibilities with the project staff. One project was a complex leadership initiative working with Ministries of Health in five countries; the other focused on improving the media’s reporting on women and agriculture in three countries. In both projects, the outside evaluator is responsible for facilitating evaluation planning, developing the evaluation plan, creating data collection tools for project staff to use, and conducting the final evaluation. Managing this type of evaluation partnership requires directly addressing specific challenges to evaluation quality, but it also presents some creative opportunities for adding value through evaluation; both the challenges and the opportunities are explored in this paper.

Session Title: Operational Research and Monitoring and Evaluation: Can We Forge a Partnership?
Panel Session 819 to be held in REPUBLIC B on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Thomas Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov
Abstract: The term “operational research” (OR) is used with increasing frequency in public health. Although OR techniques such as mathematical modeling and simulation originated in military operations and have traditionally been used in the private sector, the Centers for Disease Control and Prevention (CDC), among other agencies, supports OR for improving disease prevention, control, and health promotion programs. CDC’s Division of HIV/AIDS Prevention recently established an Operational Research Team (ORT) in the Prevention Research Branch, and the division also includes a Program Evaluation Branch (PEB). ORT’s mission is to improve the efficiency, effectiveness, and sustainability of evidence-based HIV prevention program activities. PEB’s mission is to evaluate processes, outcomes, and impacts of HIV prevention programs, activities, and policies for improvement and accountability. The distinction between OR and program evaluation is not clear. We will discuss the development of OR and program evaluation, OR and evaluation studies in HIV prevention, and ways to differentiate the two disciplines.
Operational Research and HIV Prevention
Jeffrey Herbst, Centers for Disease Control and Prevention, jherbst@cdc.gov
This presentation will provide an overview of the Operational Research Team in the Prevention Research Branch of CDC’s Division of HIV/AIDS Prevention. We will provide information on the development of operational research and examples of operational research studies in HIV prevention. We will also discuss the team’s mission, current activities, and plans for the future. In addition, we will explain how operational research is defined in the context of HIV prevention at CDC and how operational research can inform the implementation of biomedical and behavior change HIV prevention strategies.
Program Evaluation and HIV Prevention
Dale Stratford, Centers for Disease Control and Prevention, bstratford@cdc.gov
This presentation will provide an overview of the Program Evaluation Branch in CDC’s Division of HIV/AIDS Prevention. We will discuss the branch’s mission in terms of program improvement and accountability and will furnish examples of activities that illustrate these fundamental evaluation objectives. We will discuss the historical development of program evaluation and provide examples of domestic monitoring and evaluation projects in HIV prevention.
Program Evaluation and Operational Research: Is There a Difference - Does It Matter?
Marlene Glassman, Centers for Disease Control and Prevention, mglassman@cdc.gov
Program evaluation and operational research (OR) are two different disciplines with their own histories and frames of reference. Nevertheless, the terms are often used interchangeably in public health, including published literature on HIV prevention studies. During this panel, we will explore differences between program evaluation and OR in general and specifically for HIV prevention. We will discuss a proposed framework for distinguishing between the two domains and will conclude with discussion of ways that OR and program evaluation may complement each other.

Session Title: Empowerment Evaluations: Insights, Reflections, and Implications
Multipaper Session 820 to be held in REPUBLIC C on Saturday, Nov 13, 10:55 AM to 12:25 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Candace Sibley, University of South Florida, csibley@health.usf.edu
Engaging Youth in Program Evaluation: An Exploration of Current Practices
Presenter(s):
Kristi Lekies, The Ohio State University, lekies.1@osu.edu
Abstract: This study examined the extent to which leaders of a youth development organization engaged youth in evaluation. It also examined gender, leadership experience, evaluation experience, skills, and attitudes as predictors of youth involvement. 4-H Educators in a Midwestern state (n=55) completed an online survey about their experiences. Over 70% had some experience involving youth in evaluation planning, decision-making, data collection, and discussing implications. Over 40% had involved youth in pilot testing, data entry, or interpreting results, or had youth conduct their own evaluations. However, few had done these activities five times or more. Regression analysis, F(5, 49) = 4.29, p < .01, adjusted R² = .25, indicated that attitudes toward evaluation were significant in explaining youth involvement (B = .30, p < .05). Findings suggest the importance of raising awareness about the benefits of this evaluation approach and of providing training and support, which can encourage more favorable attitudes and assist program leaders in engaging youth more fully.
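As a minimal illustrative sketch only (not drawn from the study's data, variables, or code), a regression of the kind reported above, with five predictors of youth involvement and n = 55 respondents yielding an F(5, 49) test and an adjusted R², could be fit as follows; all variable names, simulated values, and the choice of statsmodels are assumptions for illustration.

```python
# Hypothetical sketch of a five-predictor OLS regression of the form reported
# in the abstract; the data below are simulated, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 55  # number of survey respondents

# Stand-in predictors: gender, leadership experience, evaluation experience,
# evaluation skills, and attitudes toward evaluation.
X = np.column_stack([
    rng.integers(0, 2, n),       # gender (0/1)
    rng.normal(10.0, 4.0, n),    # years of leadership experience
    rng.normal(5.0, 2.0, n),     # evaluation experience
    rng.normal(3.5, 0.8, n),     # self-rated evaluation skills
    rng.normal(3.8, 0.7, n),     # attitudes toward evaluation
])
# Simulated youth-involvement score, loosely driven by attitudes.
y = 0.5 + 0.30 * X[:, 4] + rng.normal(0.0, 0.6, n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.fvalue, model.f_pvalue)   # overall F(5, 49) test
print(model.rsquared_adj)             # adjusted R-squared
print(model.params, model.pvalues)    # coefficients (B) and their p-values
```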
Forecast: Applications, Innovations, and Contributions to Formative Evaluation Theory and Practice
Presenter(s):
Abraham Wandersman, University of South Carolina, wandersman@sc.edu
Jason Katz, University of South Carolina, katzj@email.sc.edu
Sarah Griffin, University of South Carolina, sgriffi@exchange.clemson.edu
Robert Goodman, Indiana University, rmg@indiana.edu
Dawn Wilson, University of South Carolina, wilsondk@mailbox.sc.edu
Abstract: There has been only minimal research on the effectiveness of formative evaluation methods in relation to program improvement (Brown & Kiernan, 2001). We suggest that a major contributor to this gap is insufficient standards (e.g., standardization and uniformity) in formative evaluation, which are a platform and sine qua non for testing (Kazdin, 2003). FORECAST (Formative Evaluation, Consultation, and Systems Technique) (Goodman & Wandersman, 1994) is an example of a uniform model with accompanying tools for formative evaluation that can be subjected to piloting, refinement, rigorous testing, and ultimately dissemination as a science-based approach to program improvement. After providing examples of past projects that have applied and integrated FORECAST models and tools, we will discuss three recent projects that have made significant innovations to the original FORECAST approach: 1) an NIH-funded trial to increase physical activity in underserved communities, 2) a National Science Foundation-funded university-based science, technology, engineering, and mathematics talent expansion program, and 3) a comprehensive program for teen violence prevention. We will close by suggesting next steps for the advancement of FORECAST as a science-based formative evaluation approach.
The Need for Social Theories of Power in Empowerment Evaluation
Presenter(s):
Thomas Archibald, Cornell University, tga4@cornell.edu
Abstract: There is no question that empowerment and other similarly aimed modes of evaluation have become major epistemological and methodological forces in the field of evaluation. There are numerous papers in major evaluation journals representing debates about these terms’ definitions as well as explicating the practical threats and promises of applying these methods in a variety of contexts. Across this rich literature, however, it is relatively rare to find in-depth considerations of social theories of power. What’s more, almost as soon as empowerment evaluation and related approaches began developing and being disseminated, critiques also emerged, claiming ‘empowerment’ is a fundamentally contradictory and easily co-opted construct. Hence, in an effort to contribute to the continual evolution of empowerment evaluation’s theoretical grounding, this paper presents a critical review of the literature to ascertain what role social theories of power play (or could play) in this domain.
Where is the Power in Empowerment Evaluation (EE): Locating Power and Understanding its Role Within an EE Process
Presenter(s):
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Abstract: Empowerment evaluation (EE) is an internal evaluation approach through which various stakeholders in the process are empowered by the evaluation itself. EE advocates a democratic participation process with a focus on inclusion and community ownership, geared toward organizational learning and improvement. However, the EE approach often does not explicitly acknowledge the differential power of the diverse stakeholder groups at the table, a differential that strongly impacts the effectiveness and legitimacy of actual democratic participation and true inclusion. This paper highlights the limitations of an EE that was conducted to evaluate existing graduate advising practices and to set new advising policy and guidelines at a U.S. northwestern college. It demonstrates how the quality of the EE process is diminished, and its social justice principle devalued, when the differential power and privileges of multiple stakeholder groups are not overtly and methodologically factored into the democratic participation process of an empowerment evaluation.
