2011


Session Title: Perspectives on Credible Evidence in Mixed Methods Evaluation Theory and Practice
Panel Session 451 to be held in Avalon A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Mixed Methods Evaluation TIG
Chair(s):
Mika Yamashita, World Bank, myamashita@worldbank.org
Discussant(s):
Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
Abstract: In the current milieu, scarce resources are concomitant with a greater demand for accountability and credible evidence. Over the past 25 years the evaluation field has become increasingly pluralistic. Evaluators embrace different ideologies, value stances, and methods preferences. Mixed methods, as a mode of inquiry, has held sway in developing theory and practice that considers the multiple perspectives of the evaluation community. Panelists who have published on mixed methods provide insights into maximizing the value and integrity of design options. Johnson provides an overarching umbrella for mixed methods work through the meta-paradigm, dialectical pragmatism. The next two papers take on two of many practical issues. Caracelli addresses qualitative data in evidence review systems. Collins and Onwuegbuzie discuss maximizing interpretive consistency through sampling strategies in mixed methods studies. Hesse-Biber provides broad-based practical advice for working with mixed methods. Last, Mertens, as panel discussant, uses a transformative lens to discuss facets of credible evidence.
How Might Dialectical Pragmatism Examine the Issue of Credible Evidence?
R Burke Johnson, University of South Alabama, bjohnson@usouthal.edu
"Dialectical pragmatism" is a meta-paradigm that combines ontological pluralism with a dialectical and purposively value-packed pragmatism. It asks users to examine multiple paradigmatic stances carefully and thoughtfully when considering issues of knowledge, methods, theory, policy, and practice. I will apply this "mixed philosophy" to the broad issue examined by the panel. My application will ask questions such as these: What do different stakeholders mean by credible evidence? What is the role of power in determining what is labeled "credible evidence"? How can the aims of federal/national policy/theory be combined with the aims of local communities and practice? How can multiple political and epistemological standpoints be concurrently considered in the debate over credible evidence? How does one warrant claims in a multi-paradigmatic, multi-disciplinary, multi-standpoint, multi-stakeholder environment? The "answers" surround the use of the age old philosophical approach called dialecticalism combined with practical and ethical thinking.
Credible Evidence in Systematic Review Systems Viewed Through a Mixed-Method Lens
Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov
Leslie Cooksy, University of Delaware, ljcooksy@udel.edu
Over the past decade, several public and private efforts have been launched to summarize available effectiveness research on social interventions to help managers and policymakers identify and adopt effective practices. Patterned after evidence-based practice models in medicine, these review system initiatives are intended to provide credible evidence on what works. In synthesizing evidence, these review systems complete a meta-evaluation step to judge study quality and primarily include experimental designs for review. The synthesis of qualitative studies is another burgeoning area of interest for systematic reviews. This paper will examine the adequacy of traditional quality review criteria via a mixed methods lens. Using, among other sources, several federally supported evidence review systems discussed by GAO (GAO-10-30), the paper will consider how qualitative data are included, if at all, in such reviews. The potential use of qualitative data to illuminate context, address intervention fidelity, and add value to interpreting findings will be addressed.
Establishing Interpretive Consistency When Mixing Approaches: Role of Sampling Designs
Kathleen M T Collins, University of Arkansas, kxc01@uark.edu
Anthony Onwuegbuzie, Sam Houston State University, tonyonwuegbuzie@aol.com
Decisions pertinent to devising a sampling design (selecting sample schemes and sample sizes) affect various stages of the mixed research process. Further, sampling decisions impact five quality criteria inherent in the process of mixing approaches. Representation refers to the degree that researchers obtain credible data comprising descriptive accounts and numbers. Legitimation refers to the extent that researchers' conclusions and inferences are trustworthy and transferable. Integration reflects the degree that researchers' inferences are combined into credible meta-inferences. Politics refers to the extent that researchers' conclusions and inferences are viewed as trustworthy by stakeholders, and Ethics refers to the degree that they denote an unbiased and socially ethical perspective. In this presentation, we will discuss the concept of Interpretive Consistency as it relates to the degree of consistency between the researchers' conclusions and inferences and the selected sampling designs, and we will offer strategies toward maintaining Interpretive Consistency within a mixed inquiry.
What Counts as Credible Evidence in Mixed Methods Evaluation Research?
Sharlene Hesse-Biber, Boston College, sharlene.hesse-biber@bc.edu
This paper examines the concept of what counts as credible evidence in mixed methods evaluation research. Does the use of two methods enhance the overall credibility of mixed methods evidence? Is mixed methods praxis an inherently synergistic evaluation method? The paper highlights the impact of a researcher's standpoint: the values and attitudes they bring to the evaluation process that can determine the questions they ask and the methods, analyses, and interpretations they privilege. The paper provides specific methodological and methods case study examples that demonstrate research strategies for enhancing the credibility claims of mixed methods evaluation research. The paper examines how evaluation researchers and practitioners can deploy "strong objectivity" and "holistic reflexivity" as validity tools to enhance awareness of the power and authority relations within the evaluation process, and ways evaluation projects can tend to "difference" with a commitment to social change and social justice evaluation outcomes.

Session Title: Perspectives and Practical Applications for Advancing Culturally Responsive Evaluation Practice
Multipaper Session 452 to be held in Avalon B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Michelle Jay, University of South Carolina, jaym@mailbox.sc.edu
Discussant(s):
Leslie Cooksy, University of Delaware, ljcooksy@udel.edu
Rodney Hopson, Duquesne University, hopson@duq.edu
Abstract: This session will serve as a primer to highlight the importance of developing thought leaders in the field of evaluation whose lived experience positions them to practice evaluation through a culturally responsive lens. In addition, the evolution of the Graduate Diversity Internship Program traineeship will be discussed, touching on successes, opportunities, and challenges. The presenters will draw on interviews conducted with cohort participants, advisors, internship supervisors, and program leaders, as well as on their own experiences, to show how the program has impacted their personal and career trajectories by: (1) increasing their capacity as researchers and evaluators; (2) enhancing their ability to incorporate culturally responsive practices; and (3) empowering them to become culturally responsive evaluation (CRE) champions. Finally, the authors will review core competencies for championing CRE leadership, tenets for incorporating CRE, precursors for advancing CRE, and strategies for addressing challenges encountered by interns as change agents of CRE practice.
Championing Culturally Responsive Leadership for Evaluation Practice
Lisa Aponte-Soto, University of Illinois, Chicago, lapont2@uic.edu
Deborah Ling Grant, University of California at Los Angeles, deb.ling@ucla.edu
Frances Carter, Westat, FrancesCarter@westat.com
Soria Colomer, University of Georgia, soria.colomer@gmail.com
Johnavae Campbell, University of North Carolina, johnavae@email.unc.edu
Karen Anderson, Independent Consultant, kanderson.sw@gmail.com
This paper will discuss the importance of developing talent to champion culturally responsive thought leadership. Given the mission of the Graduate Education Diversity Internship Program (GEDIP), we will highlight the importance of recruiting talented students of color to establish the leadership pool needed to advance culturally responsive evaluation (CRE) practice. We will discuss our experiences in the leadership development process as GEDIP interns, including training and skill building, teamwork, and service leadership. In addition, we will discuss how the GEDIP goes beyond imparting technical knowledge to equipping students of color with the tools to be adaptive agents of the transformational CRE leadership critical to the field of evaluation. Moreover, we will review the components necessary for building the GEDIP's capacity and sustainability. Finally, we will offer strategies for CRE talent sourcing and leadership development, and for finding innovative ways to become CRE leaders through service.
The Power Ladies on Becoming Culturally Responsive Evaluators
Dymaneke Mitchell, National-Louis University, dawishfactor@yahoo.com
Lutheria Peters, Association for Medical Colleges, lutheria.n.peters@hotmail.com
Amber Golden, Florida A&M University, ambergt@mac.com
Hamida Jinnah-Ghelani, University of Georgia, hamidajinnah@gmail.com
During the 2005-2006 academic year, four female graduate students from different racial and ethnic minority groups embarked on their journey as the second cohort of the American Evaluation Association Graduate Evaluation Diversity Internship Program and became collectively known as "The Power Ladies." This presentation explores aspects of the internship program that have impacted their career trajectories and helped them be a voice for integrating culturally responsive practices in their current work. They will share their understanding of issues such as financial, administrative, and political constraints that affected the readiness of the evaluands' institutional culture and its consumers to embrace principles of evaluation and culturally responsive practices. They will discuss strategies used to address the challenges they faced.
Culturally Responsive Evaluation Practice: Evaluator Perspectives
Tamara Bertrand Jones, Florida State University, tbertrand@fsu.edu
Maurice Samuels, University of Chicago, mcsamuels@uchicago.edu
Full comprehension of how evaluation literature and evaluation practice intersect helps to ensure meaningful and useful evaluations that impact a diverse body of stakeholders, as well as to identify lessons that can be learned to improve evaluation training, practice, and evaluation research. Over the last two decades, evaluation discourse has centered on utilization and on evaluation's role in meeting the needs of program stakeholders, with emphasis on inclusion of stakeholders' concerns and values. These discussions have highlighted the need for changes in evaluation methodology, not only in the framing of evaluation but also in its execution in a given cultural context. This paper provides results of research conducted with Black evaluators about the practice of culturally responsive evaluation. The authors provide an in-depth example of a practical application of culturally responsive evaluation at work. We will present the opportunities, challenges, and value added in conducting culturally responsive evaluations.
Training, Mentoring, Networking & Practical Experience: Pillars for Building a Legacy of Culturally Competent Evaluators
Asma Ali, University of Illinois, Chicago, asmamali@yahoo.com
Wanda Casillas, Cornell University, wdc23@cornell.edu
Ricardo Gomez, National Collegiate Inventors and Innovators Alliance, rgomez@nciia.org
Donna Parrish, Clark Atlanta University, donnadparrish@hotmail.com
Culturally Responsive Evaluation (CRE) is premised on reasoned discussions about the importance of understanding and awareness of cultural differences that influence and determine evaluation efforts. Additionally, CRE can increase the validity and reliability of evaluation data as well as provide opportunities for self-reflection and social change (Hopson, 1998; Kirkhart, 2000; Lee, 2000). This discussion presents results from in-depth interviews with 7 interns of Legacy, the fifth cohort of the Graduate Evaluation Diversity Internship Program (GEDIP), along with their advisors, internship supervisors, and program leaders, all of whom were involved in the GEDIP from 2008-2009. In particular, this discussion will explore how the specific tenets of CRE were incorporated into the mentorship of interns as they grew into their future roles as evaluators. Second, the findings will highlight the extent to which the GEDIP brought about changes in knowledge, attitudes, skills, and aspirations, which are well-recognized precursors for change in evaluation practice.
Training Evaluators: What's Context Got to Do With It?
Nia K Davis, University of New Orleans, nkdavis@uno.edu
Situational context is a necessary factor in every evaluation. The contexts in which evaluators are introduced to the field have implications for intern training experiences. The focus of this presentation is to discuss the contexts in which members of the third cohort of the Graduate Education Diversity Internship were placed during the internship. This cohort of interns was the first to be separated into two types of placement sites, each yielding separate experiences for interns. This presentation will use concrete situations and examples to demonstrate how the interns' experiences contributed to the internship's goals of developing: (a) a large pool of culturally diverse evaluators; (b) empirical work that is expanding the knowledge of culturally and contextually responsive evaluation; (c) interdisciplinary knowledge around evaluation in culturally diverse settings; and (d) networks among novice and senior evaluation professionals, practitioners, and scholars. The presentation will also offer a list of internship essentials for inclusion in evaluation internships.

Session Title: Should Environmental Sustainability be a Core Value for an Evaluator?
Panel Session 453 to be held in California A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Leslie Cooksy, University of Delaware, ljcooksy@udel.edu
Abstract: Humanity is running up against the natural limits of aquifers, soils, fisheries, forests, even our atmosphere. In system after system, our collective demands are overshooting what nature can provide. The AEA guiding principles state that "evaluators have obligations that encompass the public interest and good." The principles call on evaluators to consider multiple values but do not specifically address environmental sustainability. How do we enact the AEA guiding principles with environmental issues in mind? How do the links between the values of environmental sustainability, social responsibility, and economic well-being affect the work of evaluators? Participants will form small groups following presentations on environmental sustainability and evaluation. They will discuss how evaluators consider environmental sustainability when they develop evaluation designs, build logic models, consider theories of change, and help clients frame goals, desired outcomes, and strategies for implementing programs, producing products, and engaging in other types of evaluation activities.
Why Environmental Sustainability is an Important Value for Evaluators
Beverly Parsons, InSites, bparsons@insites.org
Natural scientists see a world economy that is destroying its natural supports. We are in a time when the world population is increasing by 80 million annually, and 215 million women worldwide who want to plan their families lack access to family planning services. Dense populations and their livestock herds degrade land and undermine food production while some 3 billion people seek to eat more grain-intensive livestock products. Roughly one-third of the world's cropland is losing topsoil faster than it can be re-formed. Topsoil loss reduces productivity, eventually leading farmers and herders to abandon their land. Countries such as Haiti, Mongolia, and North Korea are losing the ability to feed themselves. Saudi Arabia became self-sufficient in wheat production by tapping a non-replenishable aquifer that is now largely depleted. Is a "perfect storm" gathering that could create unprecedented economic and political upheaval for civilization? What does this mean for evaluators?
Environmental Sustainability and Social Justice: How Can Evaluators Consider the Holistic Nature of Social Problems?
Veronica Thomas, Howard University, vthomas@howard.edu
Many scholars recognize the relationship between social justice and environmental sustainability. Social justice is equal access to the social and economic resources of society, which results in balance in both society's burdens and benefits. Substantial evidence indicates that the wealthiest of society consume significantly more than their fair share of environmental resources, while the poorest one-third of the world has little alternative but to utilize resources in an inefficient, less sustainable manner. In the U.S., in particular, environmental inequality is manifested through the disproportionate share of environmental burdens (e.g., waste transfer stations, power plants, truck routes) borne by low-income minority communities. As a practice intent on improving society, how might issues of environmental sustainability and social justice be infused into the teaching and practice of evaluation through a more holistic consideration of social problems? Critical theory evaluation, as a theoretical lens for exploring this question, will be considered.
Sustainability and Economic Evaluation: Transforming "Is It Cost-Beneficial?" Into "Is It Sustainable?"
Brian Yates, American University, brian.yates@mac.com
Traditional economic evaluation asks whether a program or practice is cost-beneficial, i.e., whether the monetary value of its outcomes exceeds the monetary value of resources consumed to produce those outcomes. A program with a net positive benefit can be judged sustainable, however, only if the monetary value of resources is all that needs to be sustained. This could result in "sustainable" programs that consume irreplaceable resources. Money, of course, is not an actual resource: money is only a means of valuing and obtaining resources. From the perspective of sustainability, the primary question in cost-benefit evaluation is transformed into, "Does the program produce resources (e.g., services, space, energy, resource savings) that fully replace, or improve upon, the resources it consumes?" An example of cost-benefit analysis performed from a sustainability perspective is provided; more examples are developed with participation from members of the audience and the panel.

Session Title: State-level Evaluation Policy: A Call for Dialogue
Panel Session 455 to be held in California C on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Policy TIG and the Government Evaluation TIG
Chair(s):
Maria Whitsett, Moak, Casey & Associates, mwhitsett@moakcasey.com
Discussant(s):
Maria Whitsett, Moak, Casey & Associates, mwhitsett@moakcasey.com
Abstract: With many state governments engaging in sweeping educational, economic, environmental, health, and social policy changes, questions related to impact, especially the value of systematic, high-quality evaluation of program and policy impact, are increasingly important to state-level policy makers, administrators, and the public at large. Evaluation policy sets the stage for how evaluation is practiced, yet little is known about how states differentially adopt evaluation policies and practices to assess the impact of state-sponsored programs and services. This panel presentation explores issues related to state-level evaluation policy, considering the following questions: What is the role of evaluation policy at state/provincial levels of governance? How do states vary in the types of evaluation policies adopted and how evaluation functions are conceptualized, structured, and housed? What values might influence state-level evaluation policy-making? Should evaluators play an advocacy role in shaping state-level evaluation policy? How might state-level policy-making influence the quality of evaluation work?
Conceptualizing State-level Evaluation Policy
Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
This presentation will introduce the topic of state-level evaluation policy. A conceptual overview will include discussion on: 1.) the relevance and importance of state-level evaluation policy work; 2.) the political context in which state-level evaluation policies are developed and implemented; 3.) the role of state-level evaluation policy; 4.) the embedded nature of evaluation units and evaluation policies; 5.) the potential of state-level evaluation policy to influence state sponsored programs and services.
Analytic Comparison of State-level Evaluation Policy
Kristin Kaylor Richardson, Western Michigan University, kkayrich@comcast.net
This presentation will build on previously presented conceptual material, discussing specific examples of state-level evaluation policy work. States will be compared on a range of variables reflecting evaluation policy work. Implications of comparative findings will be discussed and will include practical, actionable suggestions to strengthen state-level evaluation policy.
The Administrative Capture of Evaluation
Saville Kushner, University of the West of England, saville.kushner@uwe.ac.uk
This presentation will discuss the shift of control over the evaluation function from those outside the administrative system to the administrative system itself, through stipulation, contracting, and internal functional development. This shift has gathered pace over the past 15 years and has been most marked in the field of international development. I will discuss the implications of this shift for deliberations over public value.

Session Title: ACT II: Exploring the Complexities of Evaluating Infrastructure Development in Cluster, Multi-site, or Multi-level Initiatives
Panel Session 456 to be held in California D on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Discussant(s):
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Abstract: ACT II: As evaluators and researchers of public health systems and policies, we know that systems, environmental, and policy interventions are dynamic, complex, and unpredictable. One task is, through evaluation, to understand the stages of growth and development of a given initiative. Then and only then can optimal alignment of focus, people, and resources for achievement of program goals be actualized. The task is complex and daunting because the line from infrastructure elements to distal outcomes is circuitous at best and does not lend itself to simple evaluation solutions. The complexity of the evaluation is further increased when programs involve multiple sites and/or multiple levels of implementation. A mixture of quantitative, qualitative, and out-of-the-box methods is required. This panel will discuss the complexities of evaluating infrastructure and its link to outcomes across sites and levels of program implementation, as well as the relevance of evaluating infrastructure to the overall logic model.
Assessing Emergency Communication Infrastructure
Keri Lubell, Centers for Disease Control and Prevention, kgl0@cdc.gov
During a public health emergency, CDC's Emergency Communication System (ECS) integrates relevant communication from across CDC and ensures coherent risk communication information reaches the public, affected communities, and partners. Principles of emergency risk communication indicate that messages need to be timely, scientifically accurate, and consistent to decrease emergency-related morbidity and mortality. Carrying messages to the intended audiences requires substantial and properly functioning communication infrastructure: connections between CDC and local and state public information officers, clinicians, and other community based organizations through which risk information can be shared; trust in CDC as a source; and transparency about what is not yet known. Before the impact of CDC's emergency messaging can be assessed, we first need to know the extent to which we have effectively built the communication infrastructure that supports message dissemination during emergency events. We will present qualitative data from a systematic investigation of CDC's emergency communication infrastructure.
Using Qualitative Research to Understand Public Health Infrastructure
Ray Maietta, ResearchTalk, ray@researchtalk.com
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Judith Ottoson, Independent Consultant, jottoson@comcast.net
We have identified five essential elements of public health infrastructure. These elements are tangible, visible, and accessible for study: (1) a strategic plan, (2) effective leadership, (3) active partnerships, (4) managed resources, and (5) engaged data. As we learn more about these elements, we witness less tangible, less visible factors that fuel the system that builds infrastructure. The evolving strategic understanding and tactical action of key actors within any system shape the immediate and long-term action and potential of a public health infrastructure. Qualitative research methods are necessary to understand the interplay of the tangible essential elements and the less visible driving forces of such strategic understanding and tactical action. Maietta's Sort and Sift, Think and Shift method has been successfully applied to examine holistic and evolving case stories of state programs in oral health and in smoking and health. These studies inform the story of understanding evolving public health infrastructure.
Infrastructure: It's More Interesting Than You Think!
Judith Ottoson, Independent Consultant, jottoson@comcast.net
Rene Lavinghouze, Centers for Disease Control and Prevention, shl3@cdc.gov
Ray Maietta, ResearchTalk, ray@researchtalk.com
The overall purpose of this case study was to understand whether and how the infrastructure of state oral health programs impacts progress toward oral health outcomes. Using Yin's approach to case study design, data were collected through meetings and site visits with state oral health personnel and their partners in four states. A dual analytical approach examined data within and across states. The Ecological Model of Oral Health Infrastructure emerged from the data and was used to identify five "essential elements" of oral health infrastructure: the state plan, partnerships, leadership, resources, and engaged data. "Strategic thinking" and "tactical action" are needed to support these elements. Infrastructure enables states and their partners to engage and create oral health opportunities, to offer a maximum response, and to track and sustain results. The case study offers general recommendations about infrastructure support, as well as strategy-specific recommendations tied to the Ecological Model.
He Said / She Said / They Said / It Said: Authenticity and Meaning in Qualitative Data Analysis
Patrick Koeppl, Deloitte Consulting, pkoeppl@deloitte.com
Mixed methods are often the best way to conduct comprehensive evaluation of complex systems and situations. In-depth interviews, focus groups, dyads, triads, participant observation, archival research, literature and policy reviews and analyses, and many other methods each have a place and role in the development of understanding. Determining the validity and reliability of qualitative data collected via mixed methods both poses challenges to and provides opportunities for authentic understanding of complex systems and phenomena. Triangulation, intersubjectivity, and introspection all contribute to study validity and subject/source reliability, but only go so far. This presentation describes the path to authentic understanding of public health infrastructure via a mixed-methods approach. Data stability, reproducibility, and accuracy are considered, as are the pros and cons of face, criterion, construct, and predictive validity as they relate to the application of qualitative data collection and analysis techniques and the quest for "useful" evaluation.

Session Title: Stakeholder Values in Policy Evaluation: Working to Improve Public Health Policy at the Centers for Disease Control and Prevention
Panel Session 457 to be held in Pacific A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Advocacy and Policy Change TIG and the Presidential Strand
Chair(s):
Erika Fulmer, Centers for Disease Control and Prevention, efulmer@cdc.gov
Discussant(s):
Karen Debrot, Centers for Disease Control and Prevention, kdebrot@cdc.gov
Abstract: How do stakeholders value a policy? How does their valuing affect policy implementation, the organization of groups during adoption and enforcement, and the resulting health behaviors of individuals? This panel examines how three Divisions of the U.S. Centers for Disease Control and Prevention (CDC) conduct policy evaluation with stakeholders. The panelists will highlight methods to understand the values placed on policy formation and implementation, as well as how these values influence identification and prioritization of policy outcomes. The session will highlight lessons learned by CDC evaluators within chronic disease and injury prevention programs at the local, state, and national levels. Focused panel discussion will demonstrate how stakeholder engagement throughout policy evaluation can guide advocacy and policy planning, generate promising practices for policy implementation, and improve monitoring of policy outcomes.
Identifying Facilitators and Barriers to Implementing Policy, Environmental, and System Changes: Lessons Learned from Comprehensive Cancer Control Policy Taskforces
Angela Moore, Centers for Disease Control and Prevention, armoore@cdc.gov
Staci Lofton, Centers for Disease Control and Prevention, slofton@cdc.gov
Julie Townsend, Centers for Disease Control and Prevention, jtownsend@cdc.gov
Annette Gardner, Centers for Disease Control and Prevention, akg4@cdc.gov
In 2010, the Division of Cancer Prevention and Control (DCPC) at the Centers for Disease Control and Prevention (CDC) provided additional competitive funding to 13 of the 69 CDC-funded National Comprehensive Cancer Control Programs (NCCCP) to advance their cancer prevention efforts by implementing policy, system, and environmental change strategies for sustainable cancer control. These funded entities will convene taskforces, or enhance existing ones, to develop a policy agenda addressing the three cancers with the highest burden within their respective areas. An environmental scan of existing relevant efforts was conducted to identify both facilitators and barriers that contribute to the taskforces' success in influencing policy, systems, and environmental (PSE) change. The environmental scan methodology includes a review of the literature, an internet search of grey literature, and key informant interviews. The results of the environmental scan will inform the development of an evaluation plan that will assess the processes and outcomes of the NCCCP.
Valuing Stakeholder Values: A Case Study Approach to Evaluating "Return to Play Legislation" in Massachusetts and Washington
Sue Lin Yee, Centers for Disease Control and Prevention, sby9@cdc.gov
Howard Kress, Centers for Disease Control and Prevention, hkress@cdc.gov
David Guthrie, Centers for Disease Control and Prevention, dguthrie@cdc.gov
Elizabeth Zurick, Centers for Disease Control and Prevention, egf3@cdc.gov
Rebecca Greco Kone, Centers for Disease Control and Prevention, ftm1@cdc.gov
Annually, over 3.8 million sports- and recreation-related concussions occur in the United States. Youth athletes represent a large portion of the injured, and some sustain catastrophic injuries or die due to improper evaluation or management. Currently, 9 states have passed laws that require the education of key stakeholders and list requirements for removal from and return to play. To provide practice-based guidance to states considering similar legislation, the CDC National Center for Injury Prevention and Control is conducting an evaluation of the "return to play" laws in Washington and Massachusetts. This presentation will examine how values shape the participation of key stakeholders (athletes, public health practitioners, educators, and coaches) in the implementation of each state's legislation. Discussion will address unintended consequences and barriers to implementation, and offer suggestions for utilizing stakeholder values to promote effective implementation that eventually leads to a reduction in youth concussions.
Advancing Tobacco Control Practice Through Policy Evaluation: Engaging Stakeholders to Set an Agenda for Reducing Tobacco Industry Influence
Erika Fulmer, Centers for Disease Control and Prevention, efulmer@cdc.gov
Kimberly Snyder, Oak Ridge Institute for Science and Education, kmsnyder@cdc.gov
Martha Engstrom, Centers for Disease Control and Prevention, mengstrom@cdc.gov
Shanta Dube, Centers for Disease Control and Prevention, sdube@cdc.gov
CDC's Office on Smoking and Health (OSH) is working with state and national partners to reframe the scope and application of key outcome indicators (KOI) for policy evaluation. Recent changes in the ability of both federal and state governments to regulate tobacco provide new opportunities to limit tobacco industry influences. OSH proactively engaged tobacco control stakeholders to clarify the scope of the problem, offer suggestions for enhancing tobacco industry monitoring, and identify relevant evaluation opportunities and challenges. Using this information, OSH systematically applied a set of assessment criteria to clarify high priority issues and worked to revamp its key outcome indicators to ensure that they remain timely and relevant for current and future policy evaluations. In this presentation, we describe the methods used to capture stakeholder input, the implicit and explicit values applied in selecting high priority issues, and the process for incorporating the information into tobacco policy evaluation.

Session Title: Meet the Pros: Intermediate Consulting Skill Building Self-Help Fair
Skill-Building Workshop 458 to be held in Pacific B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Independent Consulting TIG
Presenter(s):
Robert Hoke, Independent Consultant, robert@roberthoke.com
Discussant(s):
Mariam Azin, PRES Associates, mazin@presassociates.com
Sathi Dasgupta, SONA Consulting Inc, sathi@sonaconsulting.net
Michael Herrick, Herrick Research LLC, herrickresearch@me.com
Norma Martinez-Rubin, Evaluation Focused Consulting, norma@evaluationfocused.com
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@stanfordalumni.org
Judith Kallick Russell, Independent Consultant, jkallickrussell@yahoo.com
Lucy Seabrook, Seabrook Evaluation + Consulting, lucy@seabrookevaluation.com
Dawn Smart, Clegg & Associates, dsmart@cleggassociates.com
Susan Wolfe, Susan Wolfe and Associates, LLC, susan.wolfe@susanwolfeandassociates.net
Abstract: This skill-building workshop features independent evaluation consultants demonstrating and sharing some of their hard-earned lessons on managing a consulting business. The session offers a series of eight topic tables, each led by an experienced table leader prepared to share information about one consulting topic they enjoy and do well. This session uses a speed-dating approach to learning. Every 10-15 minutes participants will circulate to a different topic table with a different table leader. Each table leader will prepare a two-page summary of helpful hints and resources for the participants. Topics include: Streamlining Your Evaluations; Establishing Terms of Service (Project Scope, Payment Schedules, and Deliverables); Networking; Strategies for Surviving Turbulent Times; Alternative to Incorporation: Benefits of an Informal Partnership; Budgeting for Staff and Expertise; Collaborative Consultant-Client Relationships; and Consulting with Government Agencies. Table leaders have more than three years' consulting experience.

Session Title: Quantitative Methods: Theory and Design TIG Business Meeting and Expert Lecture: Evaluating Theory-based Evaluation: Information, Norms, and Adherence
Business Meeting Session 459 to be held in Pacific C on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
TIG Leader(s):
Patrick McKnight, George Mason University, pem725@gmail.com
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Karen Larwin, Youngstown State University, drklarwin@yahoo.com
Dale Berger, Claremont Graduate University, dale.berger@cgu.edu
Chair(s):
Lee Sechrest, University of Arizona, sechrest@u.arizona.edu
Presenter(s):
Aurelio Figueredo, University of Arizona, ajf@u.arizona.edu
Abstract: Programmatic social interventions attempt to produce appropriate social-norm-guided behavior in an open environment. Those efforts will be optimal, however, only if evaluations of those interventions are scientifically sound and cumulative. A marriage of applicable psychological theory, appropriate program evaluation theory, and the outcomes of evaluations of specific social interventions assures the acquisition of cumulative theory and the production of successful social interventions; the marriage permits us to advance knowledge by making use of both successes and failures. We briefly review well-established principles within the field of program evaluation, well-established processes involved in changing social norms and social-norm adherence, and the outcomes of several program evaluations focusing on smoking prevention, pro-environmental behavior, and rape prevention, and, using the principle of learning from our failures, examine why these programs often do not perform as expected. Finally, we discuss the promise of learning from our collective experiences to develop a cumulative science of program evaluation.

Session Title: The Challenges of Evaluating the Scale Up and Replication of Innovative Social Programs
Panel Session 460 to be held in Pacific D on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Gale Berkowitz, MasterCard Foundation, gberkowitz@mastercardfdn.org
Discussant(s):
Gale Berkowitz, MasterCard Foundation, gberkowitz@mastercardfdn.org
Abstract: Policymakers and funders increasingly ask, "What information is needed to facilitate decisions about whether and how to expand promising new programs or policies to other places?" This panel will shed light on the range of issues evaluators can address to inform funders' replication and "scaling up" strategies. The session will discuss such questions as: What types of evidence are needed before replicating or scaling up a program? How might evaluators design evaluations to identify those program attributes that should be adapted to the differing social and political environments of the adopting organizations versus those attributes that form the unvarying core of the program? How can evaluators assess fidelity to the core program model? Finally, once a program has been expanded or replicated, how can evaluators measure its success? This panel will provide suggestions for ways to approach these issues and identify outstanding questions that should be explored in the future.
Scaling, Scale Up, and Replication: A Call for a More Disciplined Discussion
Laura Leviton, The Robert Wood Johnson Foundation, llevito@rwjf.org
Patricia Patrizi, Public Private Ventures, patti@patriziassociates.com
Public and private funders have advanced efforts to "scale up" with little conceptual clarity and without addressing the limits to what we know about the organizational, human, and scientific factors that limit its effectiveness or appropriateness. The discussions have tended to accept scale-up as unquestionably good and, while attending to some issues (the need for sufficient evidence before going to scale and the importance of fidelity to the model), have not sorted through the factors that affect the capacity to reach more people with better services that can produce better outcomes. Some of these factors include: organizational and population variation, the limits of single-model approaches relative to these variations, necessary local adaptation, limits to generalizability, and how scale-up supports or interferes with practice improvement. The presentation will offer ideas about how we think about, deliver, and evaluate efforts to scale up.
Addressing Challenges in Evaluating Innovations Intended to be Scaled
Thomas Kelly, The Annie E. Casey Foundation, tkelly@aecf.org
Why are successfully evaluated and well-evidenced programs not taken to scale? Evaluations of innovations intended to scale need to be designed and conducted better and with more intention, attending not only to our evaluation methods but also to the goal of increasing the utilization and applicability of the evaluation's findings to real practice in the field. Social and human service prevention and intervention programs are implemented not in controlled settings but in varying social and political environments that always require a degree of adaptation. Therefore, funders, policymakers, and nonprofits need more than excellent evidence of impact; they also need detailed implementation guidance, help with knowing what data on quality are important, and contingency plans for responding to real events. This presentation will focus on the necessary elements of evaluations capable of addressing not only the evidence of outcomes but also the data and knowledge needed to make practical decisions during replication.
Replicating Innovative Program Models: What Evidence do we Need to Make it Work?
Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
Beth Stevens, Mathematica Policy Research, bstevens@mathematica-mpr.com
"Scaling up" is partially a result of replication. Can an organization adopt or replicate) a new model? If not, could scaling up be achieved? How can evaluation contribute to answers to this question? What elements of knowledge, strategy, and local conditions need to be present in both the original organization and the organization replicating the model for successful replication to occur? The evaluation of the RWJF Local Funding Partnerships Program included case studies of four pairs of programs - the organizations that had developed innovative program models and the organizations that replicated them. These case studies reveal that the goal of most evaluations - - evidence of effectiveness, is only one of the elements that further the chances of successful replication. Diffusion of knowledge of the innovation, identification of appropriate candidates for replication, and the provision of technical assistance to transplant the innovation are also part of the process.
Measuring the Capacity for Replication and Scale Up
Lance Potter, New Profit Inc, lance_potter@newprofit.com
In order for interventions to effectively replicate and scale, implementing organizations must have the organizational conditions to support growth with fidelity. New Profit, Inc., a social venture fund, has participated in the successful scaling of many notable not-for-profit organizations. New Profit's approach includes use of a Growth Diagnostic Scale, which was developed over a decade of working to scale up successful social interventions. This presentation will describe the Growth Diagnostic Scale and its value for assessing the capacity of an organization to replicate. The scale has applications for both funders and service providers seeking to assess where an organization sits on the continuum from start-up to program maturity, to employ targeted interventions to improve program growth, and to predict future problems for growing organizations based on their current organizational strengths and weaknesses.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Doing Quality Evaluation While Surviving the Funding Crisis
Roundtable Presentation 461 to be held in Conference Room 1 on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
Abstract: How do you identify and develop evaluation projects when funding opportunities are tight? How do you manage evaluation work on a limited budget? How do you produce quality evaluation work with decreasing resources? The Collaborative for Evaluation and Assessment Capacity (CEAC) is a university-based evaluation center that works with clients in the human services and education sectors to design and perform evaluation. We have experienced a number of clients (old and new) who are struggling to fund their operations and programs. As they experience increasing financial constraints, evaluation resources are often the first line to be cut in the budget. How can we work with clients to continue to provide high-quality evaluation with fewer dollars, while sustaining the infrastructure that maintains that quality as an evaluation organization? This session presents some strategies we have considered and implemented and opens the floor for discussion among colleagues. This roundtable will be of interest to independent contractors, evaluation units in academic settings, and any evaluator working with not-for-profit clients.
Roundtable Rotation II: Quality is Job #1: Strategies and Struggles to Ensure Quality in an Evaluation Unit
Roundtable Presentation 461 to be held in Conference Room 1 on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Managers and Supervisors TIG
Presenter(s):
Kristine Chadwick, Edvantia, kristine.chadwick@edvantia.org
Abstract: Ensuring quality evaluation designs, implementations, and reports is a key responsibility of evaluation managers. In this roundtable session, participants will have an opportunity to review the quality assurance system currently being implemented in a small- to mid-sized evaluation services firm. This system, though not perfect, provides processes and checklists for evaluators to use throughout a project's life cycle. The session will include discussion of this quality assurance system, ideas for improvement, and a chance to share strategies and struggles other managers have encountered when attending to evaluation quality.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: What Can Research on Evaluation Do for You? Benefits of Practitioner-based Research on Evaluation (ROE)
Roundtable Presentation 462 to be held in Conference Room 12 on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Presenter(s):
Matthew Galen, Claremont Graduate University, matthew.galen@cgu.edu
Silvana Bialosiewicz, Claremont Graduate University, silvana@cgu.edu
Abstract: Developing a body of research to describe and inform program evaluation practice is vital for the development of our field. This roundtable seeks to explore the ways in which evaluation practitioners can incorporate Research on Evaluation (ROE) into their existing projects in order to contribute to the field's collective knowledge. There are several potential benefits of conducting practitioner-based ROE studies. Practitioner-based ROE studies may: (1) answer research questions which are highly relevant and practical; (2) improve knowledge-sharing of lessons learned in individual evaluation projects; (3) increase evaluators' influence in shaping program and policy decisions; and (4) enhance evaluation's visibility and credibility as a professional field. This roundtable will explore tips, frameworks, and examples for designing and implementing ROE studies within existing evaluation projects. We will also facilitate a conversation about potential issues and challenges when conducting ROE, as well as how to overcome these challenges.
Roundtable Rotation II: Evaluation After the Facts: Tips and Alternative Designs
Roundtable Presentation 462 to be held in Conference Room 12 on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Presenter(s):
Julien Kouame, Western Michigan University, j5kouame@wmich.edu
Fatma Ayyad, Western Michigan University, f4ayyad@wmich.edu
Abstract: This paper provides tips and alternative ways to evaluate in situations where: (1) the evaluator has no baseline data; (2) the only data available come after the program has been completely implemented; or (3) there are no defined criteria according to which individuals are assigned to treatment and comparison groups, yet the client requires a quasi-experimental design.

Session Title: Assessing the Use and Influence of Evaluations: Evidence of Impacts and Predictors of Success
Multipaper Session 465 to be held in Avila A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Lyn Shulha, Queen's University, lyn.shulha@queensu.ca
Discussant(s):
Lyn Shulha, Queen's University, lyn.shulha@queensu.ca
Evaluation of NCLB-mandated Supplemental Educational Services in the Chicago Public Schools: What Predicts a Successful Program?
Presenter(s):
Curtis Jones, University of Wisconsin, Madison, cjjones5@wisc.edu
Abstract: As part of No Child Left Behind (NCLB), low-income students attending failing schools may receive free math and reading tutoring known as Supplemental Educational Services (SES). In this paper, I present my evaluations of SES and how the Chicago Public Schools (CPS) both used and misused their results. My evaluations consisted of several methods. I used value-added multi-level modeling to establish impact. I analyzed student attendance and registration records to document implementation. Surveys of school staff were used to determine which providers worked most respectfully within schools. Finally, I surveyed providers to explore practices and policies that predict effectiveness and school relationships. I discuss how CPS used these results in constructive ways to develop an accountability system. I then discuss how CPS attempted to misuse the results for political gain. Finally, I discuss how, by engaging in a dialogue with multiple stakeholders, I minimized the misuse of my work.
Assessing the Use and Influence of Impact Evaluations: Evidence from Impact Evaluations of the World Bank Group
Presenter(s):
Javier Baez, World Bank, jbaez@worldbank.org
Izlem Yenice, World Bank Group, iyenice@ifc.org
Abstract: There has been a rapid expansion in recent years in the production of impact evaluations (IEs) as a method to assess the impacts of development projects, largely driven by an increasing demand for credible evidence of development results. Much of the development community perceives IE as a tool that provides rigorous and objective estimates of the causal effects of specific interventions. Largely motivated by this, and as part of its results and knowledge agenda, the World Bank Group (WBG) has made important efforts to expand and deepen its IE work. However, little is known about whether IEs have actually influenced resource allocation, project design and implementation, future evaluation, strategy, and policy making. This evaluation looks at the experience of around 300 IEs supported by the WBG to assess their contribution to improving development practices.
Evaluating the Post-Grant Impacts of Evaluation Capacity Building in a K-20 Partnership
Presenter(s):
Edward McLain, University of Alaska, Anchorage, afeam1@uaa.alaska.edu
Susan Tucker, Evaluation & Development Associates LLC, sutucker1@mac.com
Patricia Chesbro, University of Alaska, Anchorage, afprc@uaa.alaska.edu
Abstract: As the US begins new Teacher Quality grants and Math Science Partnerships with increased accountability requirements, reflecting on sustainable evaluation practices from past grants becomes ever more timely. Using a grounded case study of the Alaska Education Innovation Network (AEIN), funded through a USDE teacher quality enhancement grant, this paper focuses on how evaluation capacity building (ECB) efforts in the K-20 network shaped the final two years of the grant and what evaluation use was sustained by this K-20 partnership after six years of federal funding ceased. The piloting of three protocols resulted in the creation of rubrics for monitoring ECB impacts in post-grant decision-making. Where network partners used a collaborative process of cyclical logic modeling over a three-year period, AEIN evaluators noted four shifts among participants and stakeholder leaders regarding: (a) evaluation purpose, (b) evaluation questions, (c) capacity-building strategies, and (d) evaluation use.

Session Title: Evaluation in the Face of Uncertainty: Resolving the Tension Between the Need for Design Integrity and the Need to Adapt Evaluation to Shifting Outcomes
Think Tank Session 466 to be held in Avila B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG
Presenter(s):
Jonathan Morell, Fulcrum Corporation, jamorell@jamorell.com
Discussant(s):
Sanjeev Sridharan, University of Toronto, sridharans@smh.ca
Abstract: Good evaluation usually requires maintaining the integrity of a design over time. As examples, it may be necessary to conduct interviews just before treatment, to maintain control groups, to assure the usefulness of a validated scale, or to nurture relationships with gatekeepers to particular data sets. In these examples, good design means inflexible design. So what to do when a design optimized for one set of outcomes confronts an unexpected set of outcomes? How can a design be made robust in the face of such change, or agile enough to adapt to new circumstances? Solutions exist, but each can induce its own set of problems. We will explore these issues via: 1) a short overview of possible solutions; 2) breakouts for groups to deal with unexpected change in a scenario common to all the groups; and 3) report-back discussions to explore the relative merits of each group's solution.

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Identifying Ways to Increase The Racial/Ethnic Diversity of People Entering the Field of Evaluation
Roundtable Presentation 467 to be held in Balboa A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Nandini Bhowmick, University of Minnesota, Duluth, band0077@d.umn.edu
Abstract: Of the different aspects of diversity, structural and interactional diversity most influence our evaluation process. This proposal frames methods to understand, measure, and improve structural and interactional diversity among evaluators and in the field of evaluation in general. Structural and interactional diversity are considered essential elements of the cultural competence of an organization. Interactional diversity provides dynamic synergies to structural diversity, which is often considered from a static viewpoint. Taken together, these diversity approaches provide necessary explanations of how an individual's cultural frame interacts with his or her cultural worldview. Evaluators need to understand different aspects of cultural diversity, more so at inter- and intra-group levels than at the level of individual understanding.
Roundtable Rotation II: Exploring Evaluation Theory to Promote Diversity in Program Evaluation
Roundtable Presentation 467 to be held in Balboa A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
Dwayne Campbell, University of Rochester, dwayne.campbell@warner.rochester.edu
Nadine Hylton, University of Rochester, nadinedhylton@hotmail.com
Tom Noel, University of Rochester, tnoeljr@gmail.com
Abstract: There is no doubt that, while program evaluation is oftentimes discussed within the realm of education, evaluation is a significant part of many different entities, organizations and branches of both the public and private sector. Given the widespread significance of program evaluation, it becomes interesting that the appeal of program evaluation across different demographics of scholars is not as evident as one would imagine. It is with this in mind that we propose exploring and utilizing the various theories of program evaluation to make the field more inclusive and diverse. This deliberate attempt will not only give all stakeholders in various programs a voice and presence, but it will also promote greater understanding of different constituents and their cultures, while ultimately making the field more attractive to groups that might previously not have demonstrated a strong presence in this growing and necessary discipline.

Session Title: Developing Approaches That Place a Positive Value and Reduce Resistance to the Evaluation Process
Panel Session 468 to be held in Balboa C on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG and the Internal Evaluation TIG
Chair(s):
Stanley Capela, HeartShare Human Services, stan.capela@heartshare.org
Abstract: As program evaluators, whether external or internal, we are often confronted by resistance to the evaluation process. Very often this resistance allows the evaluation process to be manipulated by various factors and ultimately produces a report whose findings do little to identify the program's strengths and challenges. In the end, the evaluation results have no impact on improving the quality of services. This panel will provide several examples of how an evaluator used a variety of techniques that not only reduced the level of resistance but ultimately produced results that provide the organization with information to improve the overall quality of services.
Breaking Down Mythconceptions & Resistance to Evaluation in Funders & Nonprofits
Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
Evaluators who work for and with foundations can face two levels of resistance to evaluation - from the foundation staff as well as the nonprofit. Few funders have dedicated staff for evaluation and those that do spend a significant amount of time breaking down that resistance. The resistance can be either passive or active and can have a disastrous impact on the quality of the evaluation, the value ascribed to the information, and whether the results are used. Some techniques that will be discussed to address resistance include clarification of reasons for evaluation, language, linkage of evaluation with program planning and design, provision of engaging general education, and other stakeholder outreach.
A Recipe for Adding Value While Reducing Resistance to Program Evaluation
Stanley Capela, HeartShare Human Services, stan.capela@heartshare.org
An internal evaluator is often confronted with a high level of resistance when conducting program evaluation. Often this is due to the perception among program management that the purpose of program evaluation is nothing more than an "I got you" mentality. In many instances the culture that permeates an organization can be affected by how program evaluation is perceived by senior management. The purpose of this paper is to provide a recipe, a continuing thought process that was shared during a recent AEA 365 session on resistance to evaluation. The primary focus is to provide techniques for changing the organizational culture so that program evaluation is more acceptable to senior management and produces positive results.
Values, Deep Culture and Resistance
Molly Engle, Oregon State University, molly.engle@oregonstate.edu
Stakeholders come to an evaluation bringing with them their values, their biases, and their expectations, all of which form a resistance to the change that could result from the program delivery. Deep culture encompasses all of these constructs. Understanding how resistance can present itself is the first step in overcoming resistance and ensuring buy-in from stakeholders. Resistance to evaluation, like deep culture, won't go away--evaluators must work within the structure provided to minimize resistance to evaluation.
Meeting Resistance With Transparency: Randomization and Political Pressure in Zambia
Keri Culver, MSI Inc, kericulver@yahoo.com
In an impact evaluation of social cash transfer programming in remote, rural Zambia, using a randomized controlled trial model, stakeholders voiced customary concerns about the ethics of assigning communities to a control group. An added layer of conflict brought significant political resistance as well: Zambian political divisions (communities, wards, districts, provinces) are overlaid by a system of traditional chiefdoms, led by headmen accustomed to trading influence for public goods. Selecting communities for participation was highly contentious and the value of impact evaluation was brought into question publicly. Our team devised transparent, recorded steps to demonstrate the randomization process, involve stakeholders, show how alternate methods introduced political bias, and provide language for social welfare officers' interactions with constituents. The result was political and ministerial buy-in, and the program is being rolled out to households following successful baseline data collection.

Session Title: Valuing Evaluation in Non-Traditional Areas: Lessons for Evaluation Capacity Building
Panel Session 469 to be held in Capistrano A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Robert Lahey, REL Solutions Inc, relahey@rogers.com
Abstract: Identifying key factors that serve as important elements of building evaluation capacity has been the focus of a recent study by the US Government Accountability Office (GAO). This capacity building framework will be presented and serve as a backdrop in examining the situation faced by organizations where formal evaluation has not historically resided. Increasingly, more organizations are being challenged to demonstrate good governance, accountability and, more recently, value for money in what they do and produce. Two cases of building evaluation capacity in non-traditional areas will be examined: (i) a charitable association in the voluntary sector; and (ii) a small public sector organization that is also an independent watchdog of government. The lessons drawn from the two cases will include building an evaluation culture in the context of volunteerism, and 'valuing' evaluation and determining the appropriate capacity building strategy in small organizations.
Developing Evaluation Capacity: Key Elements and Strategies Identified in Five Federal Agencies
Stephanie Shipman, United States Government Accountability Office, shipmans@gao.gov
For two decades, federal agencies have been increasingly expected to focus on achieving results and to report on how program activities help achieve agency goals. Yet GAO has noted limitations in the quality of agency performance and evaluation information and in agency capacity to produce rigorous evaluations of program effectiveness. To assist agency efforts to provide credible information on program effectiveness, GAO 1) reviewed the experiences of five agencies that have demonstrated evaluation capacity, defined as the ability to collect, analyze, and use data on program results, and 2) identified useful capacity-building strategies that other agencies might adopt. In the agencies reviewed, the key elements of evaluation capacity were: an evaluation culture, data quality, analytic expertise, and collaborative partnerships. This paper will describe the forms these elements took and their importance in the cases we reviewed, and the strategies the agencies used to develop and improve their evaluation capacity.
Building Evaluation Capacity in a Context of Volunteerism: Strategies and Challenges
Zita Unger, Independent Consultant, zitau@bigpond.com
This paper will discuss the challenges of building an evaluation culture in a context of volunteerism and the strategies employed in developing evaluation systems. Charitable associations often depend on volunteers to perform important social welfare functions in the community. The mission-based charity discussed here has provided recreation camps for disadvantaged youth and families for 30 years. The organization relies on young volunteer leaders to maintain its characteristic one-to-one ratio with participants on camp, volunteer board members to provide leadership and oversight of the organization, and a small executive team. In recent times, financial support, traditionally underwritten by a religious order, became dramatically reduced with a requirement that the organization move towards self-sustainability. Issues of governance, continuous improvement, accountability and attracting major funding were seen as important, despite little prior experience of evaluation.
Building a Performance Monitoring and Evaluation Capacity in Small Agencies: Challenges and Strategies
Robert Lahey, REL Solutions Inc, relahey@rogers.com
This paper addresses the challenges and lessons learned for evaluation capacity building in a small public sector organization where formalized evaluation has not historically resided. Faced with external pressures to demonstrate good governance, accountability and value for money in their operations, small organizations in the public sector face a variety of challenges, not the least of which is finding the right model that will work for their organization. The particular case examined is intriguing since it deals not only with a small agency, but with one that serves as an independent government watchdog, a role that some felt would be challenged by establishing an internal evaluation function. Despite the hurdles, a formal evaluation capacity has been introduced, and the paper draws general lessons for other small organizations.

Session Title: The Measurement of Issues Affecting the African-American Population: Uncovering the Value
Multipaper Session 470 to be held in Capistrano B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Social Work TIG
Chair(s):
Jenny Jones, Virginia Commonwealth University, jljones2@vcu.edu
Valuing: HIV Prevention Education Evaluation
Presenter(s):
Sarita Davis, Georgia State University, saritadavis@gsu.edu
Abstract: Evaluation generally evolves out of a context, be it professional or personal experience, practice, or a discipline. Despite historical claims that evaluation is objective, we see that the process of HIV Prevention Education evaluation is frequently informed by a narrow set of values. This raises the questions: whose values are (or are not) being considered, and how do these values influence HIV Prevention Education and perceptions of truth?
Evaluation of an Instrument Designed to Identify the Self-Care Practices of Older African Americans with Type 2 Diabetes
Presenter(s):
Gina M McCaskill, University of Alabama, gmmccaskill@crimson.ua.edu
Kathleen Bolland, University of Alabama, kbolland@sw.ua.edu
Abstract: Type 2 Diabetes is a major health issue for older African Americans. Self-care routines are important to the management of diabetes and can affect overall health and well-being. Yet, there is an absence in the research literature of an instrument for evaluating diabetes self-care practices among older African Americans. Existing scales that assess self-care practices among individuals with diabetes have not been developed for, nor assessed with, this population. In this presentation, we will discuss the development and assessment of the Self-care Utility Geriatric African American Rating (SUGAAR), a new instrument for evaluating diabetes self-care practices among older African Americans. We will highlight our approach to scale development, our findings from the rating scale evaluation, and the implications of the results for social workers and health care providers who use the SUGAAR to evaluate the self-care practices of their clients.
Creative Interventions: Use of an Evaluation Team to Move the State of Black Gay America Summit (SBGA) Agenda Forward
Presenter(s):
Karen Anderson, Independent Consultant, kanderson.sw@gmail.com
Abstract: The purpose of my paper, Creative Interventions: Use of an Evaluation Team to Move the State of Black Gay America Summit Agenda Forward, is to explore the utility of evaluation in advocacy and social change events to engage and provide feedback to a range of stakeholders. The lesbian, gay, bisexual, and transgender (LGBT) population has a range of unique advocacy points, and various individual, family, societal, and system concerns that need to be voiced and addressed. In this paper I will discuss how an evaluation team was used as an intervention at the State of Black Gay America (SBGA) Summit to provide data for the organizers, through structured interviews, to assist with strategizing ways to improve knowledge sharing and advocacy efforts, as well as to give definition to their roles as leaders. An evaluation plan was developed with the organizers to ensure that SMART objectives were utilized throughout the evaluation process.

Session Title: Recovery Orientation in Service Programs
Multipaper Session 471 to be held in Carmel on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Trena Anastasia, University of Wyoming, tanastas@uwyo.edu
Understanding Mental Health Recovery and Peer Support among Latinos and People who are Deaf and Hard of Hearing
Presenter(s):
Linda Cabral, University of Massachusetts, linda.cabral@umassmed.edu
Kathy Muhr, University of Massachusetts, kathy.muhr@umassmed.edu
Judy Savageau, University of Massachusetts, judith.savageau@umassmed.edu
Abstract: Offering recovery-oriented mental health and peer support services to various cultural and linguistic groups is challenging. This study sought to better understand how persons with mental health conditions from two cultural groups - Latinos and Deaf and Hard of Hearing (D/HH) - access recovery-based services. Interviews and focus groups were conducted with persons with mental health conditions from both cultural groups. Language barriers posed the biggest challenge in accessing mental health services. There is a lack of qualified professionals who speak Spanish or use American Sign Language. Among both groups, the preference was to work directly with someone in the language they feel most comfortable with and to avoid interpreters. Adequate mental health services, not just recovery-oriented and peer support services, were not widely available for Latinos and persons who are D/HH. Public mental health systems need to adapt and expand services for these and other cultural groups.
Evaluating the Role of the Peer Specialist in the Massachusetts Mental Health System
Presenter(s):
Linda Cabral, University of Massachusetts, linda.cabral@umassmed.edu
Heather Strother, University of Massachusetts, heather.strother@umassmed.edu
Kathy Muhr, University of Massachusetts, kathy.muhr@umassmed.edu
Laura Sefton, University of Massachusetts, laura.sefton@umassmed.edu
Judy Savageau, University of Massachusetts, judith.savageau@umassmed.edu
Abstract: A growing trend nationally in mental health systems is for individuals with mental illness and experience with mental health services to work as Peer Specialists. Training programs have been established to develop this new workforce. A recently completed evaluation assessed factors that both help and hinder Peer Specialists in applying their learning from the Massachusetts training program. We sought feedback from peer specialists, supervisors of peer specialists, as well as people who receive peer specialist services. Interviews and focus groups were conducted separately with all stakeholder groups to better understand their experiences in supervising this new role, receiving this new service or being employed as a peer specialist. This study helped to inform the Massachusetts Department of Mental Health about the importance of having clear job duties for peer specialists as well as the value placed on this service by those working with peer specialists.
The Recovery-Orientation of Mental Health Programs: Valuing Different Perspectives
Presenter(s):
Diana Seybolt, University of Maryland, Baltimore, dseybolt@psych.umaryland.edu
Laura Anderson, University of Maryland, Baltimore, landerso@psych.umaryland.edu
Lachelle Wade-Freeman, University of Maryland, Baltimore, lfreeman@psych.umaryland.edu
Abstract: Recovery has increasingly been recognized in the mental health field as the primary goal of individuals receiving services. As such, it is important to understand the extent to which mental health service programs foster and promote the recovery of mental health consumers. As part of a training initiative sponsored by Maryland's Mental Health Transformation State Infrastructure Grant, this evaluation examined the recovery-orientation of several Psychiatric Rehabilitation Programs. The Recovery Self-Assessment (O'Connell, Tondora, Croog, Evans, & Davidson, 2005) was completed by both program staff and service recipients. The results showed significant differences in the way in which program staff and consumers rated the programs. The results will be discussed in terms of possible reasons for the differences as well as importance and value of including different stakeholder perspectives in the evaluation of mental health service programs.
Families on the Border: Using Evaluation and Program Data to Understand Family Problems and Value Family Strengths for a Recovery-Oriented Model of Care
Presenter(s):
Judith Francis, Pima Prevention Partnership, jfrancis@thepartnership.us
Kara Jones, Pima Prevention Partnership, kjones@thepartnership.us
Abstract: Juvenile justice-involved adolescents entering substance abuse treatment tend to have multi-faceted family problems, and treatment programs often rely on these same families to provide direct sobriety support for their adolescent. Outpatient programs, where three-quarters of these youth are placed, face challenges in engaging families and providing resources to assist them. To learn more about the needs of their Latino and non-Latino participants, evaluators at one Arizona model program analyzed family context variables from pooled Global Appraisal of Individual Needs (GAIN) data for 3,063 youth in similar programs in the four U.S.-Mexico border states. Since GAIN family context data are limited, the evaluators undertook a client record review of rich narrative notes completed by clinical staff following intensive intake interviews with clients and family members. These data on family problems and strengths are matched with treatment outcomes to generate the nuanced understanding of families necessary to develop a recovery-oriented model of care.

Session Title: Elements of Evaluation Training: Developing Evaluator Competencies and Understanding Values in Evaluation Practice
Multipaper Session 472 to be held in Coronado on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Bonnie Stabile, George Mason University, bstabile@gmu.edu
Teaching Program Evaluation in a Master of Public Administration Program: a Search for Commonalities
Presenter(s):
James Newman, Idaho State University, newmjame@isu.edu
Kwame Badu Antwi-Boasiako, Stephen F Austin University, antwibokb@sfasu.edu
Abstract: The purpose of this study is to determine any existing or desired commonalities among program evaluation courses within the curriculum of a Master of Public Administration (MPA) program. The goal in conducting this research is to provide instructors of program evaluation with information about learning outcomes in such courses. It is my hope this information will be helpful to instructors of program evaluation and MPA directors. The study is based upon existing literature and a survey of instructors of program evaluation in MPA programs. The survey received a 25% response rate from instructors of program evaluation, yielding an N of 61. The results indicate several common learning outcomes and a strong desire to create prerequisites, primarily in statistics and research methods.
Competency Acquisition: Linking Education Experiences to Evaluator Self-Efficacy
Presenter(s):
Lisa Dillman, University of California, Los Angeles, ldillman@ucla.edu
Abstract: Recently, conversations surrounding credentialing evaluators in the mold of the Canadian Evaluation Society's Credentialed Evaluator Designation have increased. Significant discussions about teaching evaluation abound; however, little has been said about how novice evaluators learn essential competencies and skills. This paper presents the results of a survey administered to AEA's Graduate Student and New Evaluators Topical Interest Group. Respondents were asked about their training experiences, their confidence in certain skills and knowledge, and the components of their training in evaluation that had the greatest impact on the development of a certain set of competencies. A paired comparison analysis was conducted to assess the differences between the contributions of elements of a training program to the development of each competency. Analysis shows cultivating a well-trained evaluator requires a variety of training program components. However, the training component considered to be most important differed according to which competency was being developed.
Teaching Values in a Program Evaluation Course
Presenter(s):
Kathryn Newcomer, George Washington University, kathryn.newcomer@gmail.com
Burt Barnow, George Washington University, barnow@gwu.edu
Abstract: Program evaluations involve more than using the appropriate statistical and qualitative techniques. Frequently evaluators are confronted with ethical issues that may lead them to decline an opportunity to undertake an evaluation or may alter the way in which the evaluation is conducted. Moreover, ethical behavior in conducting program evaluations is open to varying interpretations: depending on the ethical standards adopted, one can reach quite different conclusions. In teaching program evaluation in our program, we include a class devoted to ethical issues in program evaluation, but we also engage the students in a brief debate in each class on ethical issues that arise in evaluations. This paper discusses the rationale for including the ethics debates and the process used for the debates, then presents examples of the debate topics we have used and the discussions that ensued.

Session Title: Values and Perspectives on Evaluating Clinical and Translational Science
Panel Session 473 to be held in El Capitan A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Cathleen Kane, Weill Cornell Clinical Translational Science Center, cmk42@cornell.edu
Discussant(s):
Donald Yarbrough, University of Iowa, d-yarbrough@uiowa.edu
Abstract: Clinical and translational science is at the forefront of biomedical research and practice in the 21st century. The NIH-funded Clinical and Translational Science Awards (CTSAs) are the largest initiative at NIH and the 55 center grant evaluation teams constitute a unique national field laboratory in the evaluation of biomedical research and practice. The four presentations in this panel address: the value of evaluation to key stakeholders; the values that shape qualitative and mixed methods work in assessing biomedical research and practice; whose values determine which outcomes are assessed and how this plays out in practice; and how different values shape the cross-center national evaluation. The panel will present four different evaluation studies and discuss their implications both practically in the context of the CTSAs and theoretically in terms of the values that influence this work.
Assessing the Perceived Value of Evaluation at a Clinical and Translational Science Award (CTSA) Institution
Christine Weston, Johns Hopkins University, cweston@jhsph.edu
One of the major goals of internal evaluation is to increase the evaluation capacity of the organization. In order to increase capacity, evaluation first needs to be appreciated and valued. Unfortunately, evaluation is often undervalued, misunderstood, or dismissed. As a result, the efforts of internal evaluators are often met with resistance. The purpose of our study is to investigate the degree to which program evaluation is valued at our institution. We aim to address the following questions: 1) To what extent do our evaluation stakeholders understand a) the role of evaluation in the organization, b) the benefits of evaluation to the organization, and c) the value of evaluation to organizational learning? 2) What are the misconceptions about evaluation, and how can they be corrected? As a result of our assessment we aim to develop a targeted intervention to increase the perceived value of program evaluation in our organization.
Beyond Telling Their Stories: The Added Value of Qualitative Research to CTSA Evaluations
Nancy Bates, University of Illinois, Chicago, nbates@uic.edu
Jessica Hyink, University of Illinois, Chicago, jessicah@srl.uic.edu
Timothy Johnson, University of Illinois, Chicago, tjohnson@srl.uic.edu
Mary Feeney, University of Illinois, Chicago, mkfeeney@uic.edu
Megan Haller, University of Illinois, Chicago, mhalle1@uic.edu
Priyanka Nasa, University of Illinois, Chicago, pnasa2@uic.edu
Linda Owens, University of Illinois, Chicago, lindao@srl.uic.edu
Eric Welch, University of Illinois, Chicago, ewwelch@uic.edu
In CTSA evaluations, it is important to tell the success stories of researchers who have transformed their research to clinical and translational work, using CTSA resources. Beyond these stories, however, qualitative methods can bring added value to CTSA evaluations by yielding important findings that cannot be learned through quantitative methods alone. This presentation will introduce qualitative data collection and analysis methods used to understand the process, implementation and outcomes of each CTSA's core group's Specific Aims and Logic Models. Integration with quantitative data will be shown. Examples and lessons learned from the University of Illinois at Chicago Center for Clinical and Translational Science will be reported.
Values and the Selection of Outcomes for Evaluating CTSAs
D Paul Moberg, University of Wisconsin, Madison, dpmoberg@wisc.edu
Janice Hogle, University of Wisconsin, Madison, jhogle@wisc.edu
Christina Hower, University of Wisconsin, Madison, cjhower@wisc.edu
Large, expensive and complex programs, such as the CTSAs, have numerous constituents and stakeholders, resulting in a multiplicity of potential outcomes that could be measured quantitatively or assessed qualitatively in an evaluation. Selection of the outcomes to be given priority reflects both the expressed intent of the funders and the values of the stakeholders. This presentation will explore those valued outcomes for CTSAs, and their implications for what we should be measuring, assessing and documenting. Data from qualitative interviews of key stakeholders in the University of Wisconsin CTSA, informed by interaction with other CTSA evaluators and written documentation, will be used to explicate the local and national range of valued outcomes. Theoretical analysis will conceptually situate valued outcomes within the cultural, professional, political and community contexts of the stakeholders in the CTSA enterprise.
The National Evaluation of the Clinical and Translational Science Awards Initiative: Challenges and Strategies for Addressing Them
Joy Frechtling, Westat, joyfrechtling@westat.com
Meryl Sufian, National Institutes of Health, sufianm@mail.nih.gov
The National Evaluation of the Clinical and Translational Science Awards (CTSA) Initiative is designed to provide initial information on the progress of the program, examining both the accomplishments of the first four CTSA cohorts of academic medical centers and the Consortium overall that they comprise. Using a mixture of surveys, interviews, field visits, bibliometrics, and expert review, the evaluation is designed to gather preliminary data on the impacts of the program on the clinical and translational workforce, the development of collaborations and collaborative research, and the quality of clinical and translational science. The presentation will include an overview of the national evaluation and some of the strategies that the evaluation team and NCRR are using to build an accurate picture of this multi-faceted, multi-level initiative. Lessons for both future evaluations of the CTSA and of other similarly complex programs will be offered.

Session Title: International and Cross-cultural TIG Business Meeting
Business Meeting Session 474 to be held in El Capitan B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
TIG Leader(s):
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Mary Crave, National 4-H Council, mcrave@fourhcouncil.edu

Roundtable: An Assessment Tool to Evaluate Evaluator Cultural Competence: A Report of Progress
Roundtable Presentation 475 to be held in Exec. Board Room on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Arcelia Hernandez, St Edward's University, arceliah@stedwards.edu
Janice Johnson-Dias, John Jay College of Criminal Justice, jjohnson-dias@jjay.cuny.edu
Aurolyn Luykx, University of Texas, El Paso, aluykx@utep.edu
Osman Ozturgut, University of the Incarnate Word, ozturgut@uiwtx.edu
Marcel Sargeant, Southwestern Adventist University, sargeant@swau.edu
Joseph Smith, Clark Atlanta University, professor.josephsmith@gmail.com
Gita Upreti, University of Texas, El Paso, gitaupreti@gmail.com
Clare Weber, California State University Dominguez Hills, cweber@csudh.edu
Guang Zeng, Texas A&M University, Corpus Christi, guang.zeng@tamucc.edu

Session Title: Evaluating Websites and Social Media
Multipaper Session 476 to be held in Huntington A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Stephanie Evergreen, Evergreen Evaluation, stephanie@evergreenevaluation.com
Abstract: Clients, and even evaluators, are increasingly engaging in web-based interactions, but little evaluation takes place to determine the benefit or impact of such engagement. Social media efforts on forums like Facebook and Twitter are often undertaken for no reason other than "we think we should." Even websites, almost a mandatory component for any organization, often go unmonitored. In this panel we propose that the lack of evaluation of web activity results from a lack of knowledge of evaluation tools and strategies. Join us as we discuss methods for the madness, applicable to clients and evaluators alike.
Google Analytics: Goldmine of Free Evaluation Data
Kurt Wilson, Compass Outreach Media, wilson@compass-om.com
Most websites have grown from their roots as simple online brochures to become a primary organizational resource, serving roles related to marketing, education, and engagement for many organizations. Google Analytics is a free tool that can be linked to any website, providing an extensive range of data that could help evaluators with questions such as: How well did our marketing campaign work? How engaged are people with our content? How strong are our partnerships? Which of our resources are most (or least) in demand? Where are our visitors located? By what means are people reaching our site? Additionally, Analytics data can be exported, enabling further analysis, the creation of custom graphs, and inclusion in evaluation reports. This presentation will provide an overview and practical guide for evaluators interested in using Google Analytics for evaluation.
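To make the kind of follow-on analysis mentioned above concrete, here is a minimal, hedged Python sketch (not part of the presentation) that loads a hypothetical CSV export of Analytics data and summarizes visits by traffic source. The file name and column names ("source", "visits", "avg_time_on_page") are assumptions for illustration only.

    # Illustrative sketch only: summarizing a hypothetical Google Analytics CSV export.
    # Column names are assumed; adapt them to the fields in your own export.
    import pandas as pd

    def summarize_export(path: str) -> pd.DataFrame:
        """Aggregate visits and average time on page by traffic source."""
        df = pd.read_csv(path)
        return (
            df.groupby("source", as_index=False)
              .agg(total_visits=("visits", "sum"),
                   mean_time_on_page=("avg_time_on_page", "mean"))
              .sort_values("total_visits", ascending=False)
        )

    if __name__ == "__main__":
        print(summarize_export("analytics_export.csv").head(10))

A summary table like this could then feed the custom graphs and evaluation-report excerpts the presenter describes.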
So What, Social Media?
Stephanie Evergreen, Evergreen Evaluation, stephanie@evergreenevaluation.com
Social media is a new frontier, calling for a different evaluation strategy than that used for traditional organizational communications. In this session, the presenter will review three key steps to planning an evaluation of social media: (1) Examining media outlet-objective match, (2) Selecting appropriate measurement tools, and (3) Benchmarking. The presenter will also share a research-based logic model of social media stakeholder engagement and propose sets of criteria that can be used for evaluating effectiveness in different social media formats or for content analysis of stakeholder interaction. Examples from the presenter's own social media forays will be featured.

Session Title: Infusing Evaluative Thinking in the Public Sector to Advance Valuing of Evaluation in Society
Panel Session 477 to be held in Huntington B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Discussant(s):
Mike Jancik, Ontario Ministry of Education, mike.jancik@ontario.ca
Abstract: Public sector work in the 21st century means adaptation and innovation as the norm in providing services amid the changing realities of society. Developmental Evaluation (DE) supports social innovation and adaptive management. In DE the evaluator is internal, part of a team, a facilitator and learning coach bringing evaluative thinking to the table in support of the organization's goals (Patton, 2010). Evaluators in the Ontario Ministry of Education's Student Achievement Division infuse evaluative thinking to build system evaluation capacity to reach every student. Presenters will discuss how internal evaluators use program theory to build understanding of how initiatives function and to leverage collective and individual knowledge in building effective initiatives while integrating M&E for ongoing improvement and evolution; how collaborative inquiry is used within school and school board team interaction across schools; and how external evaluation is commissioned to provide information that will facilitate decisions about future directions for Ministry-funded professional learning strategies for educators.
Role of the Internal Evaluators in the Developmental Process to Reach Every Student
Keiko Kuji-Shikatani, Ontario Ministry of Education, keiko.kuji-shikatani@ontario.ca
Mike Jancik, Ontario Ministry of Education, mike.jancik@ontario.ca
David Cameron, Ontario Ministry of Education, david.cameron@ontario.ca
Cristine Ilas, Ontario Ministry of Education, cristina.ilas@ontario.ca
Kim Spence, Ontario Ministry of Education, kim.spence@ontario.ca
Realization of vision-and-values-driven social innovation typifies the ideal of the public sector. In DE the evaluator is internal, part of a team, a facilitator and learning coach bringing evaluative thinking to the table in support of the organization's goals (Patton, 2010). Evaluation's role in supporting programs for the betterment of society is valued and pursued by exploring ways of engaging stakeholders and by building the evaluation capacity of both individuals and the system. For example, the Student Achievement Division uses program theory in a Division-wide collaborative project designed to describe the work at the initiative, branch and division levels - to build understanding of how initiatives function and to leverage collective and individual knowledge in building effective initiatives while integrating M&E for ongoing improvement and evolution. Another example is how the Ministry School Support Initiative team is utilizing Developmental Evaluation for timely program development and incremental refinements informed by data collected through implementation in participating schools and boards.
Teacher Inquiry Models of Professional Learning: The Challenge of Fidelity, Evaluation and Scale
Barnabas Emenogu, Ontario Ministry of Education, barnabas.emenogu@ontario.ca
Mike Jancik, Ontario Ministry of Education, mike.jancik@ontario.ca
Rachel Ryerson, Ontario Ministry of Education, rachel.ryerson@ontario.ca
Judi Kokis, Ontario Ministry of Education, judi.kokis@ontario.ca
David Cameron, Ontario Ministry of Education, david.cameron@ontario.ca
The session will use a range of data to examine three examples of collaborative inquiry used within school and school board team interaction across elementary schools. Data will be drawn from 70 participating teams in an early primary collaborative inquiry, 30 participating districts in a collaborative inquiry for learning mathematics, and 50 case studies written by the teacher-researchers. In total, nearly a quarter of Ontario's elementary schools will be represented. Data sources will also include policy documents, the final research reports of all three initiatives, team self-evaluations, individual reflections on the inquiry process, focus groups, action plans, surveys, and participant feedback. Individual case and cross-case analyses will be conducted to determine how program intentions were maintained or adjusted during implementation. The features of success and challenge within teacher collaborative inquiry will be critically discussed.
External Evaluation of the Ontario Ministry of Education's Differentiated Instruction Professional Learning Strategy
Megan Borner, Ontario Ministry of Education, megan.borner@ontario.ca
The Student Success/Learning to 18 Strategy focuses on keeping more young people learning to age 18 or graduation, reducing dropout, improving student achievement and graduation rates, re-engaging youth who have left school without graduating, and providing effective programs to prepare students for their post-secondary pathways. The University of Ottawa began an external evaluation of the Differentiated Instruction Professional Learning Strategy (DIPLS), which focuses on meeting the needs of all students by improving educators' instructional practices. Differentiated instruction is an approach to teaching and learning that is responsive to the learning needs and preferences, interests and readiness of the individual learner; the overall intent is to foster instructional, assessment and evaluation practices that support student engagement, learning and academic achievement. The evaluation seeks to determine the extent to which these outcomes have been achieved, the impact on instructional practice, and the effectiveness of implementation. It should provide information to facilitate decisions about future directions for Ministry-funded professional learning strategies for educators.

Session Title: Evaluating Policy Functions: Emerging Practice in the Canadian Federal Government
Panel Session 478 to be held in Huntington C on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Brian Moo Sang, Treasury Board of Canada Secretariat, brian.moosang@tbs-sct.gc.ca
Abstract: The 2009 Canadian federal Policy on Evaluation expanded the universe of evaluation coverage. As a result, some federal departments/agencies are now looking at ways to evaluate their policy functions. To assist departments in meeting this new evaluation challenge, the Treasury Board of Canada's Centre of Excellence for Evaluation launched an interdepartmental working group to explore approaches and methods for evaluating policy functions. In this panel session, the lessons emerging out of the working group - including approaches and conceptual models for understanding the evaluation of policy functions - will be presented. The challenges and practical application of these lessons will then be explored in presentations by the Public Health Agency of Canada, the Atlantic Canada Opportunities Agency and the Department of Finance which will focus on their pilot evaluations of policy functions, which both informed and were informed by the working group.
Concepts and Approaches in Evaluating Policy Functions
Brian Moo Sang, Treasury Board of Canada Secretariat, brian.moosang@tbs-sct.gc.ca
Anne Routhier, Treasury Board of Canada Secretariat, anne.routhier@tbs-sct.gc.ca
A 'policy function' (also sometimes called a 'policy program') refers to a set of activities undertaken in an organization in which the primary outputs/outcomes relate to: the provision of advice; policy development; and/or support for/monitoring of policy implementation. In 2009, the Treasury Board of Canada's Centre of Excellence for Evaluation (CEE) established an interdepartmental working group to explore concepts, approaches, methods and challenges in measuring the performance of and evaluating policy functions in federal departments/agencies. In this presentation, CEE will provide an overview of the concepts and lessons learned emerging from the working group with an emphasis on the challenges related to determining the various roles of policy functions for evaluation purposes. This session is intended to contextualize the presentations of 'pilot' evaluations of policy functions that will be made by the other panelists.
Preparing for Evaluation of the Policy Function at the Public Health Agency of Canada (PHAC)
Paule-Anny Pierre, Public Health Agency of Canada, paule-anny.pierre@phac-aspc.gc.ca
Mary Frances MacLellan-Wright, Public Health Agency of Canada, mary.frances.maclellan-wright@phac-aspc.gc.ca
Nancy Porteous, Public Health Agency of Canada, nancy.porteous@phac-aspc.gc.ca
Understanding the nature of policy work - the activities, outputs and desired outcomes - is the first step in preparing to evaluate this important government function. In this case study, interviews with policy staff were used to map out the policy function at the Public Health Agency of Canada. This presentation will describe the policy staff perspective on what is important to measure and the implications for evaluation methods and approaches that might be suitable for evaluating the policy function's processes and impacts.
Evaluating the Atlantic Canada Opportunities Agency's Policy, Advocacy and Coordination Function
Tonya Furlong, Atlantic Canada Opportunities Agency, tonya.furlong@acoa-apeca.gc.ca
Natalie Doiron, Atlantic Canada Opportunities Agency, natalie.dorion@acoa-apeca.gc.ca
Julie Nadeau, Atlantic Canada Opportunities Agency, julie.nadeau@acoa-apeca.gc.ca
The Atlantic Canada Opportunities Agency (ACOA) is responsible for the Government of Canada's economic development efforts in Atlantic Canada. Over the last year, ACOA's Evaluation Unit has undertaken its first evaluation of the relevance and performance of the Agency's Policy, Advocacy and Coordination (PAC) function. ACOA's PAC function is expected to result in "policies and programs that strengthen the Atlantic economy" that reflect the economic reality and potential of Atlantic Canada, and in a coordinated and coherent approach to addressing the region's priorities. As a member of the interdepartmental Policy Program/Function Evaluation Working Group (PPEWG), ACOA's evaluation unit has applied the knowledge and understanding generated through this group to the design and implementation of its PAC evaluation. This paper provides an overview of ACOA's PAC function and details the lessons learned through the evaluation process. It also examines the conceptual model developed within the PPEWG through a practical lens.
Evaluating the Policy Function at Finance Canada
Nazish Ahmad, Finance Canada, nazish.ahmad@fin.gc.ca
Christian Kratchanov, Finance Canada, christian.kratchanov@fin.gc.ca
The Department of Finance (Canada) conducted extensive research into the "evaluability" of the policy advice and research function and the degree to which it is possible to assess the contribution of this function to the departmental decision-making process. The research was shared with a working group of federal government representatives as theirs was shared with us. Our presentation outlines the practical application of our research findings to the evaluation of a key government function that conducts policy research and gives policy advice to decision-makers. The approach involved a systematic assessment of the function's relevance and performance by reviewing: alignment of clients' expectations; analysts' perspectives; internal processes; and financial and human resources management. We examined the extent to which key objectives and results were achieved, including the provision of timely, high-quality policy research and advice. The presentation outlines the detailed methodology, challenges encountered, findings, opportunities for improvement and best practices.

Session Title: Is Your Evaluation Tired? Rejuvenate it!
Skill-Building Workshop 479 to be held in La Jolla on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Dominica McBride, The HELP Institute Inc, dmcbride@thehelpinstitute.org
Rita Fierro, Independent Consultant, fierro.evaluation@gmail.com
Pauline Brooks, Independent Consultant, pbrooks_3@hotmail.com
Abstract: Evaluations are riddled with values, preconceived notions (e.g. concerning race, gender, language, etc.), multiple cultural influences, human emotions and habitual behavior. Oftentimes it is difficult to perceive these factors that have the potential to both cloud and provide clarity to evaluation practice, let alone overcome or consciously integrate them into effective professional performance. Using strategies from tai chi, meditation, health psychology, and body awareness work, this workshop introduces skills and practices for strengthening the evaluator in these types of areas, thereby helping to strengthen the quality of our evaluation practice. These techniques are useful in clearing one's mind, enhancing creativity and mental flexibility, and reducing physical barriers (e.g. fatigue) that can impede the quality of our work. Once we are more aware of and hone these dynamics, we improve the quality of our relationships with stakeholders, accuracy in interpreting information, and our originality in making useful recommendations.

Session Title: Evaluation and Research-Practice Integration: What are Our Roles and How Can We Play Them Better?
Think Tank Session 480 to be held in Laguna A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Presenter(s):
Thomas Archibald, Cornell University, tga4@cornell.edu
Discussant(s):
Marissa Burgermaster, Montclair State University, burgermaster@gmail.com
Monica Hargraves, Cornell University, mjh51@cornell.edu
Abstract: The need to effectively and efficiently integrate research and practice is a daunting problem facing most, if not all, scientific endeavors. This is especially true in social scientific inquiry. Traditionally, practitioners focus on particular contexts whereas researchers focus on the production of generalizable knowledge. The fields of biomedicine, education and other social domains have attempted to bridge the research-practice gap (e.g., evidence-based practice and translational research). Often, these efforts have been criticized for their top-down nature. On the other hand, practitioner resistance to research often stymies the impact of research findings. Yet both researchers and practitioners want to focus on "what works" (especially in resource-constrained times). We posit that evaluation can play a crucial role in research-practice integration, but that currently it is insufficiently clear how. In this session we will briefly present the issue and then facilitate brainstorming sessions to generate dialogue among our peers on this topic.

Session Title: Managing Ethical Risk in Evaluation Projects by Using Two Practical Online Ethics Decision-Support Tools
Demonstration Session 481 to be held in Laguna B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Linda Barrett-Smith, Alberta Innovates Health Solutions, linda.barrett-smith@albertainnovates.ca
Birgitta Larsson, BIM Larsson and Associates, birgitta@bimlarsson.ca
Abstract: This session demonstrates the application of two online ethics decision-support tools to help participants integrate an ethical approach in all evaluation projects so that people or their information are protected and respected: 1) The ARECCI Guidelines for Quality Improvement and Evaluation Projects introduce six ethical considerations to assist integration of ethics in project planning or to use as a framework for reviewing evaluation projects. 2) The ARECCI Ethics Screening Tool helps determine the primary purpose of the project (research versus non-research), category of risk to participants, and the level of ethics review required (if any). These tools provide a fast, transparent and consistent way of identifying and managing risks in evaluation projects. They can be shared with team members and work in progress can be stored for future retrieval. The tools are available free of charge at www.ahfmr.ab.ca/arecci

In a 90 minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: Monitoring and Evaluation Online Collaboration and Capacity Building: Challenges and Lessons Learned for International Organizations
Roundtable Presentation 482 to be held in Lido A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG and the Integrating Technology Into Evaluation
Presenter(s):
Gretchen Shanks, Mercy Corps, gshanks@mercycorps.org
Scott Chaplowe, International Federation of Red Cross and Red Crescent Societies, scott.chaplowe@ifrc.org
Abstract: M&E is greatly enhanced when we're able to leverage the experience and expertise of individual field and HQ teams to catalyze organizational learning. However, with increasing environmental concern and responsibility among international organizations to reduce their carbon footprints, along with the need for wise stewardship of resources in difficult economic times, in-person learning and training opportunities for M&E practitioners are limited. This, combined with the potential outreach and reduced costs of the Internet, has led some organizations to explore online learning and collaboration tools. What are the key strategies and best practices in online learning and field-to-field collaboration and technical assistance when applied to M&E capacity building? What are some inherent challenges, and what steps can be taken to mitigate them? This roundtable will examine these and other questions, drawing upon the recent experiences of Mercy Corps and IFRC.
Roundtable Rotation II: The Challenges of Collecting Evaluative Data Across Long Distances Rather Than Face-To-Face
Roundtable Presentation 482 to be held in Lido A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG and the Integrating Technology Into Evaluation
Presenter(s):
Robert Ruhf, Western Michigan University, robert.ruhf@wmich.edu
Abstract: Science and Mathematics Program Improvement (SAMPI), an evaluation center at Western Michigan University, has several national and distant projects for which we are often unable to be present when it is time to collect evaluative data (surveys, questionnaires, pre/post tests, interviews, etc.). Much of the data is instead collected online, by phone, or by other long-distance methods. This can create its own unique set of challenges that are not present when data are collected face-to-face. The presenter will discuss examples of projects for which data need to be collected at a distance, as well as the challenges that go along with that (such as lack of interaction, dependence on others to administer evaluation instruments correctly, etc.). The presenter will then engage roundtable participants in a discussion of similar situations they have encountered in their own evaluation work, as well as how they addressed those situations.

Session Title: Evaluating Programs to Improve Children's Development and Health
Multipaper Session 483 to be held in Lido C on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Jenica Huddleston, Deloitte Consulting, jenicahuddleston@gmail.com
Integrating feedback from distinct perspectives for an evaluation of a statewide childcare provider training on nutrition and physical activity in Delaware
Presenter(s):
Gregory Benjamin, Nemours Health & Prevention Services, gbenjami@nemours.org
Laura Lessard, Nemours Health & Prevention Services, llesard@nemours.org
Stefanie VanStan, Nemours Health & Prevention Services, svanstan@nemours.org
Abstract: Nemours Health and Prevention Services, with support from a USDA Team Nutrition grant, developed an innovative training for Delaware childcare providers on the changing obesity-related regulations for childcare settings. A multi-component, mixed-methods evaluation was used to first create and subsequently improve the training. Input from stakeholders was solicited in multiple ways and resulting changes were made to the training and companion resources along the way. Evaluation methods included a) focus groups with providers to assess needs and gain feedback on training materials and design; b) surveys that assessed satisfaction with the training, provider knowledge on regulations and whether practice changes occurred; and c) additional focus groups with parents to understand their needs related to nutrition and physical activity. This presentation will address the ways in which this project integrated feedback from different perspectives in real time, maximizing the potential for the training to have an impact on Delaware children.
Learning From Evaluations of Complex Programs: The Case of an Early Childhood Development Program in Brazil
Presenter(s):
Eduardo Marino, Fundação Maria Cecilia Souto Vidigal, eduardo.marino@yahoo.com.br
Thomaz Chianca, COMEA Evaluation Ltd, thomaz.chianca@gmail.com
Abstract: Evaluating programs to promote changes in complex realities requires the use of diverse and flexible approaches. Among other things, such approaches need to be sensitive to differences in contexts and in the capacities of implementing teams. The Early Childhood Program of the Fundação Maria Cecilia Souto Vidigal is implemented in six municipalities in São Paulo, Brazil. It aims at developing the capacities of health, education and social services professionals to work more effectively with pregnant women and their families so that they are able to help their kids have adequate physical, cognitive and emotional development. This paper describes the challenges of implementing a mixed-methods approach to evaluate and monitor this initiative. Special focus will be given to: (i) definition of values, criteria and indicators; (ii) establishment of a monitoring system to improve implementation and capture innovation; and (iii) application of the Early Development Index to assess children's development in five domains.
Chronic Health Conditions and School Performance
Presenter(s):
Casey Crump, Stanford University, kccrump@stanford.edu
Diana Austria, Stanford University, daustria@stanford.edu
Rebecca London, Stanford University, rlondon@stanford.edu
Melinda Landau, San Jose Unified School District, 
Bill Erlendson, San Jose Unified School District, bill_erlendson@sjusd.org
Eunice Rodriguez, Stanford University, er23@stanford.edu
Abstract: Chronic health conditions are common and increasing in U.S. children, but their effect on school performance remains unclear. We conducted the largest study to date to examine the association between chronic health conditions and school performance, and to assess whether absenteeism mediates this association. Using a longitudinal cohort design, we followed 22,730 students (grades 2-11) enrolled in the San Jose Unified School District for at least two years during 2007-10, to examine whether parent-reported chronic health conditions are associated with school absenteeism and low performance on standardized English language arts and math examinations. Chronic health conditions were independently associated with absenteeism, and with low performance in English language arts and math after adjusting for absenteeism, across different ethnicities, socioeconomic status, and grade levels. These findings underscore a reciprocal relationship between education and health that begins in early life, and the need for effective interventions to address the resulting disparities.
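As a purely illustrative, hedged sketch of the adjustment step described in the abstract (testing whether an association persists after controlling for absenteeism), the Python code below fits unadjusted and adjusted logistic regressions. The data file and variable names are hypothetical; this is not the study's actual analysis.

    # Illustrative sketch: does the chronic-condition association with low
    # performance persist after adjusting for absenteeism (a possible mediator)?
    # Hypothetical data file and column names; not the study's actual code.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("student_records.csv")

    # Unadjusted: chronic condition (0/1) predicting low performance (0/1).
    unadjusted = smf.logit("low_performance ~ chronic_condition", data=df).fit()

    # Adjusted: add days absent as a covariate.
    adjusted = smf.logit("low_performance ~ chronic_condition + days_absent",
                         data=df).fit()

    # A substantial drop in the chronic_condition coefficient after adjustment
    # would be consistent with partial mediation by absenteeism.
    print(unadjusted.params["chronic_condition"],
          adjusted.params["chronic_condition"])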

Session Title: Multiple Methods for Assessing Societal and Environmental Impacts of Research
Multipaper Session 484 to be held in Malibu on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
George Teather, Performance Management Network, george.teather@pmn.net
Evaluation Tools of Environmental and Welfare Effects in Tekes Funding
Presenter(s):
Jari Hyvarinen, Research Institute of the Finnish Economy, jari.hyvarinen@etla.fi
Abstract: The goal of this paper is to present meta-evaluation results on Tekes funding in Finland. These results describe the socioeconomic effects defined in the new Tekes strategy and focus areas implemented in 2011. From the new strategy, I have selected two main goals for this paper: i) natural resources and a sustainable economy, and ii) the wellbeing of citizens. These goals are embedded in the Tekes impact model, which concentrates on the effects of public R&D financing on the economy and society as a whole. In the paper, I build a framework of longer-term societal welfare effects by drawing on the meta-evaluation results, the Tekes strategic goals for environment and welfare, and the Tekes impact model. My aim is to develop more suitable road maps for how Tekes funding and activities can be grouped into more manageable categories.
The Contribution of Research to Socioeconomic Outcomes: A Case Study
Presenter(s):
George Teather, Performance Management Network, george.teather@pmn.net
Beth MacNeil, Canadian Forest Service, beth.macneil@nrcan-rncan.gc.ca
Ajoy Bista, Canadian Forest Service, ajoy.bista@nrcan-rncan.gc.ca
Abstract: Central agencies in Canada and other countries are challenging research organizations to demonstrate the contributions of their research to the achievement of high-level economic, environmental, and societal outcomes. This paper describes the progress of the Canadian Forest Service (CFS), an agency of Natural Resources Canada, in responding to that challenge. CFS managers have used a modified logic model approach to develop a performance story that begins by demonstrating the relevance of their research to addressing major challenges facing the Canadian forest sector. The next step is to describe the influence of the research on the decisions made by key public and private sector organizations on forest sector policies and practices, and, through those decisions, on changes in the state of the forest sector. The approach identifies the requirements for surveys and other evaluation methods to provide evidence describing the level of influence of the research and associated decision support tools on forest management policies and practices. CFS wildland fire and forest pest research will be used as examples of the application of the approach.
The Relationship Between Environmental and Scientific Performance of Nations: Lessons Learned From a Macro-Level Evaluation Using Scientometric Indicators And An Environmental Performance Index
Presenter(s):
Frederic Bertrand, Science-Metrix, frederic.bertrand@science-metrix.com
David Campbell, Science-Metrix, david.campbell@science-metrix.com
Michelle Picard-Aitken, Science-Metrix, m.picard-aitken@science-metrix.com
Gregoire Cote, Science-Metrix, gregoire.cote@science-metrix.com
Michele-Odile Geoffroy, Independent Consultant, mo_geoffroy@hotmail.com
Abstract: Measuring the contribution of scientific research to national-level outcomes continues to challenge research evaluation. At the same time, several rankings and indices have been developed to investigate the influence of economic and non-economic factors on the environmental outcomes of nations. However, the role of scientific performance as a determinant of national environmental performance has not been fully investigated. This paper aims to 1) apply methods and metrics that improve the multi-criteria analysis of scientific performance by reducing dimensionality and the effect of scale, and 2) expand on previous work exploring the interpretative value of macro-level indicators by better understanding the links between the environmental research and environmental outcomes of nations. Using a composite scientometric index developed by Science-Metrix and a composite, policy outcome-oriented index (the Environmental Performance Index, or EPI), the relationship between the scientific and environmental performance of countries is investigated to support the evaluation of research.

Session Title: Indigenous Peoples in Evaluation TIG Business Meeting
Business Meeting Session 485 to be held in Manhattan on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
TIG Leader(s):
Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Kalyani Rai, University of Wisconsin, Milwaukee, kalyanir@uwm.edu
Joan LaFrance, Mekinak Consulting, lafrancejl@gmail.com

Session Title: Graduate Student and New Evaluators TIG Business Meeting
Business Meeting Session 486 to be held in Monterey on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG
TIG Leader(s):
Nora Gannon, University of Illinois at Urbana-Champaign, ngannon2@illinois.edu
Ayesha Boyce, University of Illinois at Urbana-Champaign, boyce3@illinois.edu
Jason Burkhardt, Western Michigan University, jason.t.burkhardt@wmich.edu

Session Title: Fairness for Participants in Evaluation Studies: An Easy-to-Use Toolkit to Identify Issues for Consideration
Demonstration Session 487 to be held in Oceanside on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Presenter(s):
Ellen Irie, BTW Informing Change, eirie@btw.informingchange.com
Abstract: What does it really mean to treat participants in evaluation studies fairly, equitably, and with the utmost respect? Particularly for those commissioning evaluations without a deep research background, it is vital to understand whether evaluators are taking the appropriate steps to protect human subjects in their work. Does every study require an Institutional Review Board (IRB) process? When is this not necessary? Where do you begin if an IRB process is warranted? The toolkit shared in this demonstration session addresses these issues. Relevant for those responsible for evaluation studies, this go-to resource provides easy-to-follow decision trees, guidelines, and descriptive resources for spotting potential areas of concern and determining next steps to ensure ethically sound evaluation involving individuals.

Session Title: Needs Assessment TIG Business Meeting and Roundtable: Concerns in Assessing Needs
Business Meeting Session 488 to be held in Palisades on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Needs Assessment TIG
TIG Leader(s):
Sue Hamann, National Institutes of Health, sue.hamann@nih.gov
Hsin-Ling Hung, University of North Dakota, sonya.hung@und.edu
Maurya West Meiers, World Bank, mwestmeiers@worldbank.org
Presenter(s):
James W Altschuld, The Ohio State University, altschuld.1@osu.edu
Abstract: Needs assessors may implement methods without enough thought to the hidden methodological and philosophical issues inherent in them. Following a short presentation (perhaps 15 minutes or so), participants will form roundtable groups to formulate additional problems and concerns they perceive or have encountered. The total group will then meet to discuss an expanded laundry list of things to think about when identifying and prioritizing needs and when guiding institutions and agencies in formulating solutions to rectify needs. The session will be primarily participant driven and may even lead to future papers and presentations on this or a related theme at AEA.

Session Title: Social Network Analysis TIG Business Meeting and Presentation: The Application of Multiple Measures in SNA Evaluations
Business Meeting Session 489 to be held in Palos Verdes A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Social Network Analysis TIG
TIG Leader(s):
Maryann Durland, Durland Consulting, mdurland@durlandconsulting.com
Stacey Friedman, Foundation for Advancement of International Medical Education & Research, staceyfmail@gmail.com
Irina Agoulnik, Brigham and Women's Hospital, irina@syscode.med.harvard.edu
Todd Honeycutt, Mathematica Policy Research, thoneycutt@mathematica-mpr.com
Presenter(s):
Maryann Durland, Durland Consulting, mdurland@durlandconsulting.com
Abstract: This expert lecture will illustrate the application of multiple Social Network Analysis (SNA) measures. Multiple SNA measures allow for exploring and explaining the complexity of networks and move analysis away from reliance on one "statistically significant" measure, such as density, when comparing networks. Programs create networks. Defining these, for evaluation purposes, is critical and will form the base for determining measures. Some program-related networks are small and bounded by the program specifics (e.g., a network that includes the participants in a group, participating together over time as a small group). Other program networks may be more loosely defined and bounded by a relationship theory (e.g., support networks, communication networks). Some programs have one specified network and others have multiple parallel networks. In each case, multiple measures provide a means to understand the complexity of networks and to evaluate multiple networks on specific criteria, which can include more traditional statistical significance testing.

Session Title: Building Standards for Health Policy Evaluation in an Academic Medical Setting
Skill-Building Workshop 490 to be held in Palos Verdes B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Presenter(s):
Judy Savageau, University of Massachusetts, judith.savageau@umassmed.edu
Teresa Anderson, University of Massachusetts, terri.anderson@umassmed.edu
Abstract: Since the late 1990s, the University of Massachusetts Medical School's Center for Health Policy and Research (UMMS/CHPR), through its Commonwealth Medicine Division, has served a unique advisory role for the Massachusetts Executive Office of Health and Human Services. Following the 2006 passage of Chapter 58, the Massachusetts Health Care Reform legislation, the government's need for health policy evaluation studies with published reports increased. Seeking to enhance an existing template, Commonwealth Medicine evaluators undertook a formal review of their reporting capacity with the twin goals of aligning with the standards of a private, national grant-making foundation and promoting academic publication. After participating in this workshop, attendees will be able to use the Robert Wood Johnson Foundation Evaluation Criteria together with the American Evaluation Association's Guiding Principles in reviewing and writing an evaluation report.

Session Title: Balancing Values: Examining the Capacity for Interactive, On-Line Extension Planning, Evaluation and Reporting Systems to Address Dispersed Personnel and Stakeholder Values
Panel Session 491 to be held in Redondo on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Kimberly Norris, University of Maryland, knorris1@umd.edu
Abstract: Panelists provide examples of comprehensive, interactive, on-line planning, evaluation, and reporting systems used for Extension groups that encompass different scales of integration (regional, statewide, and programmatic) to provide a springboard for discussion on how to best accommodate local, statewide, and regional personnel and stakeholder values and decision-making goals. The panel's foci are best practices and lessons learned from designing, developing and implementing online accountability systems. Questions addressed are: 1) How integrated can data be for interactive online systems, while maintaining user and administrator friendliness? Are data integration and human-system interactiveness opposing or supporting goals for system development?; 2) How wide an audience can a system target for use and reporting while still maintaining integrity and integration between levels and types of data?; 3) What are the challenges, limits and benefits of using online systems to support organizational goals?; 4) What strategies are critical to effectively address limits and challenges?
Benefits, Challenges, and Lessons Learned as a State Administrator and Four State Collaborator of an Online Planning and Reporting System
Robin Lockerby, University of Vermont, robin.lockerby@uvm.edu
The Logic Model Planning and Reporting System (LMPRS) is the product of a four-state collaborative effort begun in 2005 to develop and maintain an on-line planning and reporting system. LMPRS currently enables six states to customize preferences and structure to best serve their respective needs. The goal of the four state partners, Maine, Massachusetts, New Hampshire, and Vermont, is to enhance program development, evaluation, and reporting capacity for individual staff members and the organization. Early in the development process, partners designed the system to closely integrate planning and reporting, follow a logic model framework, include evaluation planning and results, and reflect impacts. These decisions have contributed to moving the University of Vermont Extension organization, faculty, and program staff onto a path that is positively changing its culture. Challenges, benefits, and lessons learned will be shared to encourage and support other states undertaking similar efforts.
Cost-Benefit Analysis for Local Extension Offices: Showcasing Planning, Outcome Measurement and Stakeholder Communications
Joseph Donaldson, University of Tennessee, jldonaldson@tennessee.edu
Cost-benefit analysis communicates the value of public investments in a way most stakeholders and citizens understand. Cooperative Extension has used multiple perspectives to document monetary benefits of adopting the various practices and behaviors taught by its programs, including: non-market value, savings, reduced costs and increased income. Despite using these various techniques, describing Extension's economic impact on a countywide or statewide basis, across all program areas, has remained tedious, if not impossible. As part of a one-stop reporting effort, the University of Tennessee Extension collapsed 14 different databases into one custom-made, relational software system called "System for University Planning, Evaluation and Reporting," or SUPER. In 2008, UT Extension deployed a cost-benefit analysis tool within SUPER for use by 95 county Extension offices. The panelist will discuss strengths and limitations of this approach.
Taking AIM at Institutional Value: Reflections on the Technical and Human Challenges of Development and Implementation of a State-wide Online Accountability System
Karen Ballard, University of Arkansas, kballard@uaex.edu
The Arkansas Information Management System (AIMS) was developed in response to specific demands from a state stakeholder. The system was developed and rolled out within 120 days and is utilized by 75 counties and over 300 faculty. Initial challenges included limited computer literacy, the user interface, and faculty resentment of the rapid introduction of highly detailed data reporting requirements. The challenge to the institution was further complicated by the lack of widespread commitment to accountability among many middle managers. Issues of value and institutional values have been driving concerns as the system has matured, and training has been employed to address technical skill deficits. Now that AIMS is a mature system, the "value" question remains the core issue in the effective adoption and use of this planning, reporting, and evaluation system.
Realizing Everyone's Dreams (RED) for Data?: Examining the Causes and Effects of Fully Integrating and Sharing Program Processes and Outcomes with an Online System
Kimberly Norris, University of Maryland, knorris1@umd.edu
Maryland Extension's SNAP-Ed program initially developed an on-line Reporting, Evaluation, and Data (RED) System to address reporting needs, including accurate monetary match and process data from more than 200 educators and collaborators around the state. In year one, educator training on collecting process (output) data achieved 100% compliance. In year two, optional entry of success stories was introduced. In year three, optional curriculum post-pre surveys were introduced. With recent improvements and administrative decisions tying the presence of outcome data to personnel reviews, 100% use rates are anticipated. Educators can now create local and state collaborator- and program-specific reports, as well as faculty CV-relevant reports. Data have benefited administration, program development, and Extension-wide needs assessments. By increasing educator, collaborator, and audience utility from the system as administrative needs are addressed, the system has maintained real and perceived value for all stakeholders, ultimately influencing UMD Extension's decision to adopt the system.

Session Title: Methods I: New Approaches to Assessment in Higher Education
Multipaper Session 492 to be held in Salinas on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Guili Zhang, East Carolina University, zhangg@ecu.edu
A 360 Evaluation of Service Learning Programs
Presenter(s):
Guili Zhang, East Carolina University, zhangg@ecu.edu
Abstract: Service-learning programs often involve multiple stakeholders and generate intended main effects as well as unanticipated spillover impacts. While evaluations aimed at a single aspect of impact and focused on a single stage of a service-learning program can be informative and valuable in answering isolated questions, they often fail to provide a complete picture that fully captures the multifaceted effects a service-learning program can generate, both in the process and at the end. This proposal describes an effective 360° evaluation of a service-learning tutoring program in teacher education at a research university, following Stufflebeam's context, input, process, product (CIPP) model. The process and advantages of the 360° evaluation across the context, input, process, and product evaluation stages are described and discussed. The 360° evaluation using the CIPP model is systematic and can help researchers strive toward a more holistic appraisal of service-learning programs.
Using Participatory Evaluation for Program-level Assessment of Student Learning in Higher Education
Presenter(s):
Monica Stitt-Bergh, University of Hawaii, Manoa, bergh@hawaii.edu
Abstract: Regional accreditation agencies require that higher education institutions conduct student-learning assessment, and the U.S. Department of Education is pushing for more transparent accountability. Using the University of Hawai'i at Mānoa (UHM) as an example, I describe how a practical participatory evaluation (P-PE) approach can meet accreditation demands for program-level assessment of student learning and hold the institution responsible for student learning. I explain the factors related to UHM's organizational culture and values that made P-PE an appropriate evaluation approach. This presentation is aimed at those interested in program assessment of student learning at a research university or in factors contributing to P-PE success. Session attendees will leave knowing factors that led to positive reception of P-PE as an evaluation approach; strategies to grow P-PE in an organization; and how P-PE results can be used to meet regional accreditation requirements.
Evaluators and Institution Researchers Working Together to Understand Student Success in Learning Communities
Presenter(s):
Amelia E Maynard, University of Minnesota, mayn0065@umn.edu
Sally Francis, University of Minnesota, fran0465@umn.edu
Abstract: Currently, institutional research (IR) offices in community colleges nationwide are collecting and reporting on extensive data sets. Colleges are especially focused on studying student retention. This paper discusses how external evaluators can work with IR to provide a deeper understanding of program successes and challenges in improving retention. We will present a case study of an evaluation of learning communities (LC) in two community colleges. First, we provide the context of the cases and describe what data the colleges were already collecting. Then, we discuss the evaluation study we designed, which included a retrospective analysis of student level data and student and faculty interviews. The presentation focuses on how the evaluation contributed to a more comprehensive understanding of how LCs affect student success. Lastly, we will discuss how the colleges have applied these findings and some challenges we faced as external evaluators working with the colleges' IR offices.
Talking About Assessment: An Analysis of the Measuring Quality Blog and the Comments it Elicited
Presenter(s):
Gloria Jea, University of Illinois at Urbana-Champaign, gjea2@illinois.edu
Abstract: Assessing student learning outcomes has become an important part of accreditation and of discussions about the quality of higher education (Ewell, 2009; Kuh & Ikenberry, 2009). Interested in the conversations around learning outcomes assessment in higher education, this research examines the values that faculty, institutional researchers, professionals, and other observers carry. The research is a qualitative content analysis of a special blog series, Measuring Stick, that The Chronicle of Higher Education ran during the fall of 2010. The blog explored debates about quality in higher education, addressing two main questions: How should quality in higher education be measured, and are higher education's ostensible quality-control mechanisms functioning well? By analyzing the blog postings and reader comments, this paper proposes blogs as a source of data, discusses the value of these comments, and questions whether blogs offer an arena for constructive conversations about student learning outcomes assessment.

Session Title: Moving Toward Impact: Alternative Approaches to Meeting the Premium Value of Impact Assessment in Environmental Evaluations
Multipaper Session 493 to be held in San Clemente on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Angela Helman, Industrial Economics Inc, ahelman@indecon.com
Discussant(s):
Katherine Dawes, United States Environmental Protection Agency, dawes.katherine@epa.gov
Abstract: Experimental designs are increasingly held up as the gold standard in environmental evaluations; however, environmental evaluators are seldom able to consistently employ methods that enable definitive causal impact claims. This pressure to reach the gold standard has led evaluators to employ diverse approaches that approximate impact estimation. Here, environmental evaluators discuss the use of innovative alternative approaches that allow statistical estimation of impact. One approach employs a comparison of early joiners and late joiners to measure a "dosage effect." Another discusses a quasi-experimental approach that compares the outcomes of two similarly situated states receiving differential levels of compliance assistance. A third approach explores theoretical limitations of experimental designs in light of economic principles that render these approaches untenable. Finally, a feasible and efficacious technique for identifying a counterfactual for comparison to the intervention is presented. These approaches will be discussed in the context of the relative value of experimental versus non-experimental methods.
Measuring Dosage Effects: Using Tenure in Program as a Variable to Assess Likely Environmental Influence on Solid Waste Management Behaviors
Angela Helman, Industrial Economics Inc, ahelman@indecon.com
In a recent evaluation of a waste management program at the United States Environmental Protection Agency, a non-regulatory voluntary partnership program was able to secure a convenience sample of federal partners with different tenure lengths in a program that assists its partners in adopting positive waste management practices. Considerable pressure had been placed on this program to demonstrate causal impact that could be assessed over and above other contributing factors that produce the desired outcomes independent of program influence. By developing a precise survey instrument that measured the extent to which early joiners and late joiners differed on confounding variables, and by ruling out selection threats to validity, a confident assertion could be made about the program's benefits, illustrating the utility of dosage effects as a way of evaluating complex environmental programs.
Non-Equivalent Control Designs and Group Comparison Approach in Assessing the Impact of Compliance Assistance Behaviors in the Autobody Sector of Two States
Tracy Dyke-Redmond, Industrial Economics Inc, tredmond@indecon.com
The evaluators of a compliance assistance program explored the effectiveness of activities to assess the long-term impact of a comprehensive compliance assistance package. A forthcoming environmental statute and pre-statute compliance assistance activities in two similarly situated states presented the opportunity for a group-comparison quasi-experiment. Facilities in one group were offered a full suite of compliance assistance activities; those in a second group were offered reduced or no outreach. In each of the two regions, site visits by qualified personnel to independent random samples of facilities are being used to estimate performance before and after compliance assistance has been provided. The impact of compliance assistance will be assessed by comparing the change in group one's performance with the change in group two's performance. This is referred to in the policy evaluation literature as a "difference-in-differences" approach to assessing policy impacts. The advantages and challenges of this approach will be discussed.
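As a minimal sketch of the difference-in-differences comparison described above, assuming generic pre- and post-assistance performance measures for the two groups (the notation is illustrative and not drawn from the study's data):

$$\hat{\delta}_{DiD} = \left(\bar{Y}_{1,\mathrm{post}} - \bar{Y}_{1,\mathrm{pre}}\right) - \left(\bar{Y}_{2,\mathrm{post}} - \bar{Y}_{2,\mathrm{pre}}\right)$$

where group 1 receives the full suite of compliance assistance, group 2 receives reduced or no outreach, and each term is the average facility performance observed during the pre- or post-assistance site visits.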
Attributing Benefits of Non-Regulatory Programs to Environmental Change: Identifying Concerns and Alternative Solutions
Cynthia Manson, Industrial Economics Inc, cmanson@indecon.com
In this paper, an evaluator working on behalf of the United States Environmental Protection Agency discusses the economic justification for voluntary environmental programs to derive defensible measures of their positive social outcomes. Ideal experimental and statistical designs for detecting and attributing benefits are considered, along with a set of more practical approaches to benefit attribution that take into account the data gaps and statistical challenges that often make more rigorous approaches infeasible. The conclusions from this analysis will be addressed in the presentation.
New Technique for Comparison to an Alternative: The Negotiated Alternative
Andy Rowe, ARCeconomics, andy.rowe@earthlink.net
In many settings, evaluators encounter serious challenges in identifying a suitable comparison. This is particularly true in natural resource and sustainable development settings. Yet comparison to a reasonable alternative is our main approach to judging changes in outcomes of interest attributable to the intervention. This paper describes a new option, the negotiated alternative. It has been used in three different resource and environmental programs in the US: fish and freshwater, environmental enforcement, and off-road vehicle use at national seashores. It is proving to be a feasible technique for generating valid and reliable judgments about the environmental effects of decisions. It offers a new option for evaluators regardless of setting.

Session Title: Population-based Impact Evaluation for Public Health and Social Programs: A Conceptual Framework
Think Tank Session 494 to be held in San Simeon A on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
Huey Chen, Centers for Disease Control and Prevention, hbc2@cdc.gov
Discussant(s):
Thomas Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov
Huey Chen, Centers for Disease Control and Prevention, hbc2@cdc.gov
Abstract: Evaluation often focuses on assessing individual impacts, that is, the effectiveness of an intervention for the individuals who participate in it. More recently, decision makers are asking whether an intervention has desirable effects on an at-risk population in a particular locality such as a community, county, state, or nation. A conceptual framework for population-based impact evaluation would be highly useful for evaluators addressing this challenge. This think tank explores such a framework, which evaluators can use to design and conduct this type of evaluation. Participants are presented with initial thoughts on a conceptual framework developed from the program theory perspective that covers the following issues: the concept and definition of population-based impact evaluation, the scope and components of the evaluation, interventions that are likely to have population impacts, methodological challenges, and data sources and evaluation designs. Participants are asked to probe, critique, and modify the presenter's assertions.

Session Title: Responding to Context: Advances in Evaluation Practices
Multipaper Session 495 to be held in San Simeon B on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu
Discussant(s):
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
Evaluating Large-Scale Grant Initiatives in a District: Helping Districts Create Space for Sustainability
Presenter(s):
Sheila A Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org
Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
Susan Shebby, Mid-continent Research for Education and Learning, sshebby@mcrel.org
Abstract: Using a recently completed evaluation of a grant as a case, evaluators will discuss the successes and challenges associated with school districts' efforts to implement initiatives in line with the intent of the funding. Presenters will briefly discuss an evaluation of an intervention to decrease minority isolation and increase student achievement in Science, Technology, Engineering, and Mathematics. Presenters will then describe a 'case study' of the initiative that enabled examination of the implementation of the district's large-scale grant initiatives. This case is relevant for stakeholders in that it revealed ways that districts can prepare for future grant opportunities and funded initiatives; it is relevant for evaluators in that it can inform how we work with clients as they write and/or begin large-scale grants. Presenters align this advice with the professional practice of evaluation, as indicated in the Program Evaluation Standards.
The Role of Replication in Evaluating Complex Systems in Education
Presenter(s):
Tamara M Walser, University of North Carolina, Wilmington, walsert@uncw.edu
Michele A Parker, University of North Carolina, Wilmington, parkerma@uncw.edu
Emily R Grace, University of North Carolina, Wilmington, gracee@uncw.edu
Dawn M Hodges, The Hill School of Wilmington, hodg68@bellsouth.net
Abstract: Given requirements for the implementation of research-based educational programs that can be implemented effectively in diverse educational settings, replication is increasingly important to educators and program evaluators. Although replication is well-supported in educational research and evaluation literature, there is a lack of research on the potential of replication as an evaluation approach to address complex systems such as educational programs and the educational systems within which they are implemented. The purpose of this presentation is to describe the replication of a theory-driven educational model for improving the reading achievement of struggling readers in K-12 public schools, using this as an example of replication as an evaluation approach that is useful when evaluating complex systems in education.

In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.
Roundtable Rotation I: How Do You Measure Success? Framing the Evaluation Conversation for Programs With Fuzzy Goals
Roundtable Presentation 496 to be held in Santa Barbara on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Internal Evaluation TIG
Presenter(s):
Corrie Whitmore, Southcentral Foundation, cwhitmore@scf.cc
Wendi Kannenberg, Southcentral Foundation, wkannenberg@scf.cc
Abstract: As Albert Einstein noted, 'not everything that can be counted counts, and not everything that counts can be counted.' Evaluators today are challenged to facilitate evaluation for a variety of programs, including those with abstract, undefined aims and limited quantitative data. This round table proposes a framework for beginning the internal evaluation process with programs struggling to identify what to assess and how it can be measured. We will address the importance of defining your evaluation's purpose and goals, describe strategies for using qualitative data, and share tools used to structure conversations with clients before inviting participants to offer feedback and discuss their own experiences. This round table will contribute to the body of knowledge in the field of internal evaluation by offering participants structures and tools, soliciting success stories, and facilitating a conversation about what 'counts' and how it can be 'counted' during evaluation efforts.
Roundtable Rotation II: Advancing Internal Evaluation in a Values-Driven Organization
Roundtable Presentation 496 to be held in Santa Barbara on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Internal Evaluation TIG
Presenter(s):
Wendi Kannenberg, Southcentral Foundation, wkannenberg@scf.cc
Corrie Whitmore, Southcentral Foundation, cwhitmore@scf.cc
Abstract: Internal evaluators often face both challenges and opportunities in establishing their role and advancing evaluation projects within their parent organization. The presenters of this roundtable are experienced evaluators fostering the growth and development of a new internal evaluation department in a values-driven organization. We will share our perspectives on strategically engaging and utilizing an organization's stated values and organizational approach to foster trust, advance internal evaluation, build collaborative partnerships, and maximize impact of evaluation efforts. This roundtable will contribute to the body of knowledge in the field of internal evaluation by offering participants operational perspectives and facilitating the conversation regarding both knowing and utilizing organizational values to advance evaluations. The sharing of success stories and lessons learned in the field will increase understanding of factors affecting internal evaluation in varied organizational contexts.

Session Title: The Effect of Political Values and Expectations on Evaluation: Perspectives From Different Countries
Panel Session 497 to be held in Santa Monica on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Jody Fitzpatrick, University of Colorado at Denver, jody.fitzpatrick@ucdenver.edu
Discussant(s):
Ross Conner, University of California Irvine, rfconner@uci.edu
Abstract: The political nature of evaluation was recognized early in evaluation literature. However, evaluators have not consciously considered the political values of stakeholders: their views of the roles of government in society; their faith, suspicions, and valuing of their government and its leaders; their values concerning government accountability and transparency; their perceptions of the rights of various stakeholder groups; and so on. This panel will focus on the concept of political culture, its dimensions and variations, and its role in the context and practice of evaluation. Panel members working in evaluation in various countries and regions - the United States, Brazil, Burkina Faso in Africa, South Asia, and developing countries - will describe and discuss political values that influence evaluation including citizens' and other stakeholders' perspectives on the necessity and nature of accountability, equity, participation, government, and evaluation itself.
Political Culture: A Source of Values for Evaluation
Jody Fitzpatrick, University of Colorado at Denver, jody.fitzpatrick@ucdenver.edu
The term political culture was created by Daniel Elazar to study differences in political values, institutions, and actions in American states. This session will describe his research on variables that define political culture, as well as research in international public administration that compares countries on political values, actions, and outcomes. The presenter will describe this literature and use its dimensions and constructs to describe and define political issues that influence evaluation practice in different contexts, in particular in different countries. For example, Taylor (2006), in comparing Australia and Hong Kong on the implementation of performance measurement, found context to be vital. Similarly, Radin's comparison of the U.S., New Zealand, and Australia (2003) found political values in each country to have a strong influence on performance management. The literature will be linked to contextual theory in evaluation, including Rog's model of context and its influence on evaluation and Greene's writings on values.
Evaluation Values in Brazil: Control, Legitimacy, Learning and Transparency
Marcia Paterno Joppert, Brazilian Evaluation Agency, marciapaterno@agenciadeavaliacao.org.br
This presentation will describe the connections between the values of the Brazilian democratic process and some of the major characteristics and views of evaluation: the coexistence of external control organizations, focused mainly on accountability; the use of evaluation to legitimate policies; and the view of evaluation as a tool for managing for results, learning, and transparency. Results of recent research concerning the demand for and supply of measurement and evaluation (M&E) services and the development of evaluation practice in Brazil, in both the public and private sectors, will also be presented. The interest of many stakeholders in developing the knowledge and practice of evaluation is growing in Brazil and is giving rise to new kinds of movements and institutions to organize this community and to introduce M&E further into the societal agenda.
Developing Rural Women's Understandings of Social Accountability for Local Governments in Burkina Faso
Issaka Traore, ReBuSe, AfrEA Board, issakatraore@yahoo.com
This presentation will focus on the different, and changing, views of social accountability in one West African nation, Burkina Faso. In the rural areas of Burkina Faso, social accountability is a concept unknown to the majority of citizens. And many in key decision-making circles prefer that. This presentation will discuss the values that influence the different views of social accountability in Burkina Faso and how these values, central to evaluation and to citizen empowerment, differ in urban and rural areas and among educated and illiterate citizens. These views, then, affect citizens' expectations concerning municipal governments. Social accountability can, however, become an empowerment tool. I will describe a program developed by The National Democratic Institute for International Affairs to change non-elected women's understandings of social accountability in 21 rural municipalities and the assessments and analyses around this program that shed light on the meanings of social accountability.
Values and Valuing in Equity-Focused Development Programming and Evaluation
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
There is increasing recognition that assessing the progress of developing countries in terms of monetary indicators, such as the proportion of the population below a poverty line, and the use of average indicators (e.g., the percentage of children suffering from different degrees of malnutrition) can present a misleading picture for the total population. Average scores mean that the situations of vulnerable or remote communities, or even differences between household members, may be overlooked. Many development agencies are moving toward equity-focused planning and evaluation systems that examine the status of vulnerable populations and highlight groups not benefiting from development. Equity analysis often challenges official estimates that poverty is declining and raises sensitive political questions concerning the status of women or ethnic minorities. These approaches raise important issues of personal and political values, how development is valued, and how far development agencies are willing to raise sensitive issues with national governments.
Political Environments in South Asian Countries: An Analysis of Their Effects on Evaluation Policies and Use
Shubh Kumar-Range, Community of Evaluators for South Asia, shubhk.range@gmail.com
This panelist will describe her analysis of political conditions in South Asian countries and how they are changing. Using the World Bank implementable governance indicators, she explores the impact of two political indicators, (a) top-down governance environments (for example, regulatory quality) and (b) bottom-up governance environments, on the types of evaluation, their use, and evaluation policies. She finds that changes in these two types of political environments (improvements or deteriorations) have very different impacts on evaluations and evaluation policies.

Session Title: Process Improvement Techniques for Program Evaluation: Adding New Tools to the Evaluator Tool Box
Skill-Building Workshop 498 to be held in Sunset on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Joyce Miller, KeyStone Research Corporation, joycem@ksrc.biz
Tania Bogatova, KeyStone Research Corporation, taniab@ksrc.biz
Abstract: This session introduces a new and nontraditional conceptual framework and methodology for evaluating organizational processes, aimed not only at documenting the existing state of program processes but also at identifying opportunities for process improvements that enhance organizational effectiveness and efficiency. The methodology is an adaptation of the Lean philosophy, a process improvement framework that focuses on delivering the most value to clients while consuming the fewest resources. It will enable evaluators to guide their clients in improving their programs through improvement of their processes, help them embrace a culture of continuous learning and quality improvement, and survive the constraints of the current fiscal climate of dwindling resources. The session will highlight a systematic and visual way of defining and implementing process improvement initiatives and introduce multiple tools, including Value Stream Mapping (VSM) and Process Flow Mapping, that will assist organizations and evaluators in accomplishing these tasks.

Session Title: Diverse Approaches to the Evaluation of Out-of-School Time Programs
Multipaper Session 499 to be held in Ventura on Thursday, Nov 3, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Stacey Merola, ICF International, smerola@icfi.com
Discussant(s):
Sae Lee, Harder+Company Community Research, slee@harderco.com
High Quality 21st Century Community Learning Centers: Academic Achievement Among Frequent Participants and Non-participants
Presenter(s):
Jenell Holstead, University of Wisconsin, Green Bay, holsteaj@uwgb.edu
Mindy King, Indiana University, minking@indiana.edu
Abstract: This study examined academic differences between students who attended 21st Century Community Learning Center (CCLC) programs frequently (60 or more days) and matched non-attendees during the 2008-2009 school year. Schools included in the study represented only those centers found to be implementing high-quality programming, as measured by a rigorous site visit process in Indiana. Results demonstrated no differences between the groups overall. However, differences were observed when examining frequent attendees who were struggling academically at the beginning of the year. For fifth grade specifically, students who were struggling academically and who attended high-quality 21st CCLC programming appeared to improve their performance in the spring.
An Evaluation of the Dynamical Effects of an Out-of-School Time Program
Presenter(s):
Amy Corron, United Way of Greater Houston, acorron@unitedwayhouston.org
Roger Durand, Durand Research and Marketing Associates, LLC, durand4321@gmail.com
Julie Johnson, Communities-in-Schools, jjohnson@cis-houston.org
Kevin Kebede, Alief YMCA, kevink@ymcahouston.org
Jennifer Key, Alief Independent School District, jennifer.key@aliefisd.ne
Joseph Le, Joint City/County Commission on Children, joseph-mykalhung.le@cityofhouston.net
Linda Lykos, YMCA of Greater Houston, lindal@ymcahouston.org
Cheryl McCallum, Children's Museum of Houston, cdm@cmhouston.org
Katherine von Haefen, United Way of Greater Houston, kvonhaefen@unitedwayhouston.org
Abstract: This paper/poster will present the results of an evaluation of the dynamical effects of an out-of-school-time program. In the aftermath of Hurricanes Katrina and Rita, an out-of-school-time program known as 'Houston's Kids' (HK) was developed and implemented to address the needs of displaced and other at-risk children and youth in a single community. The outcomes evaluation designed to assess HK examined changes in developmental assets and values (www.searchInstitute.org) among the 625 kindergarten through high school children and youth participating in the program. True panels of data that tracked changes in the same individual participants over a school year were employed, as were data on a sample of control or 'comparison' subjects. This design and these data afforded in-depth understanding of the dynamics of assets and values development over time in response to specific HK program elements and among participants with different social characteristics.
The Design and Impact of Support Networks on California Afterschool Science
Presenter(s):
Ann House, SRI International, ann.house@sri.com
Abstract: This paper explores the relationship between the types and quality of science offerings within afterschool programs and these programs' connections to sources of science support, such as science museums and technical assistance providers. It will focus on grantees in three geographic regions of California. SRI surveyed a state-wide random sample of programs funded by the state's Afterschool Education and Safety (ASES) program about their science programming, and their connections to organizations that provide science resources or support. An important innovation of this study is to analyze the social networks of afterschool programs, to explore whether quality science instruction is linked to the number, type, or strength of ties a program has to outside organizations. The three regions will be contrasted, to understand the key factors impacting the site's use of outside science supports, such as physical proximity to major science institutions, urban or rural location, and staff capacity and stability.
Using Web-Based Management Systems for Program Improvement
Presenter(s):
Femi Vance, University of California, Irvine, fvance@uci.edu
Hilda Gaytan, University of California, Irvine, hgaytan@uci.edu
Natalie Kovacs, University of California, Irvine, nkovacs@uci.edu
D'Amore Montgomery, University of California Irvine, montgomd@uci.edu
Abstract: After-school programs are being integrated into the educational experience of children and youth. This new role requires after-school programs to demonstrate their impact on youth. Scholars argue that the key to linking outcomes to participation in after-school programs is 'dosage,' or attendance. Yet many programs do a poor job of measuring attendance. Web-based data management systems offer one solution to the attendance tracking problem. This paper will show how data collected using a web-based management system can be used to inform program improvements. Attendance data are used to identify underserved youth and are linked to youth surveys to understand why youth enroll in the program and whether their perceptions of the program are associated with future attendance. The ways in which these findings contribute to self-evaluations for after-school programs are discussed.
