|
Session Title: Capacity Mapping Initiative: Evaluating Organizational Growth in the Former Soviet Union
|
|
Demonstration Session 340 to be held in Capitol Ballroom Section 1 on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Presenter(s): |
| Robert O'Donovan Jr,
Eurasia Foundation,
rodonovan@eurasia.org
|
| Maria Gutierrez,
CamBia Associates,
mariag@cambiaassociates.net
|
| Abstract:
Building the capacity of nonprofit organizations can be a challenge under ordinary circumstances. When working across geographic and cultural distances, however, tailoring diagnostic tools, assessing growth, and evaluating capacity-building initiatives can be especially demanding. In 2004, the Eurasia Foundation (EF) began creating indigenous foundations to continue EF's work in the countries of the former Soviet Union. To meet the challenge of assessing these foundations' organizational growth, EF launched the Capacity Mapping Initiative (CMI) in 2007. Based upon a maturity model methodology developed by a large, U.S.-based nonprofit, CMI is a diagnostic approach designed to assist EF staff in assessing the foundations' current capacity, collaborating with each foundation to create organizational growth plans, and consistently measuring their capacity gains and infrastructure improvements. Drawing on two rounds of CMI assessments, the consultant who led the CMI design process and EF's evaluation officer will share significant lessons learned from this international evaluative process.
|
|
Session Title: Music to My Ears: Alternative Approaches for Presenting Evaluation Findings
|
|
Expert Lecture Session 342 to be held in Capitol Ballroom Section 3 on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s): |
| Jeremiah Johnson,
University of Illinois Urbana-Champaign,
jeremiahmatthewjohnson@yahoo.com
|
| Jennifer Greene,
University of Illinois Urbana-Champaign,
jcgreene@uiuc.edu
|
| Lizanne DeStefano,
University of Illinois Urbana-Champaign,
destefan@uiuc.edu
|
| Jori Hall,
University of Illinois Urbana-Champaign,
jorihall@uiuc.edu
|
| Ezella McPherson,
University of Illinois Urbana-Champaign,
emcpher2@uiuc.edu
|
| Abstract:
The presentation of evaluation findings is often viewed as a crescendo in the process of program evaluation. Traditionally, this presentation serves as an opportunity for evaluators to explain and describe the findings, judgments, and recommendations that have resulted from their evaluative efforts. Such presentations, however, also provide unique opportunities to foster critical reflection and promote dialogic engagement among various stakeholder groups. As part of a field test of an educative, values-engaged approach to STEM education evaluation, several innovative (non-traditional) approaches for presenting evaluative findings have been explored. This paper will highlight the experiences and reflections of evaluation team members in their efforts to craft and use innovative approaches for presenting evaluative findings in order to promote critical reflection and dialogic engagement among diverse stakeholder groups.
|
|
Session Title: Leadership Recruitment and the Road to the American Evaluation Association Board of Directors
|
|
Think Tank Session 343 to be held in Capitol Ballroom Section 4 on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the AEA Conference Committee
|
| Chair(s): |
| Sarita Davis,
Georgia State University,
skdavis04@yahoo.com
|
| Discussant(s):
|
| Debra Rog,
Westat,
debrarog@westat.com
|
| Karen E Kirkhart,
Syracuse University,
kirkhart@syr.edu
|
| Sarita Davis,
Georgia State University,
skdavis04@yahoo.com
|
| Rakesh Mohan,
Idaho Legislature,
rmohan@ope.idaho.gov
|
| Stanley Capela,
HeartShare Human Services,
stan.capela@heartshare.org
|
| Abstract:
An organization's strength and viability depend on strong leadership. A major component of such leadership resides with the AEA Board and its commitment to identify future leaders and to foster a culture that encourages members to seek nomination to the Board.
One of the major drawbacks is often a lack of knowledge among members about what the key ingredients are for becoming a viable candidate for the Board. Further, there are members who seek nomination but are not selected for the slate, or who run for the Board but do not win.
The purpose of this session is to present several case studies of individuals who have served on the Board over the past few years. They will provide an overview of their experience, including the highs and lows of seeking nomination to the Board. The session will also educate members about the roles of the nominations and elections committee and the leadership task force in assisting members who may seek to run for the Board, or who have run and lost and want additional information on steps they can take to increase the likelihood of winning a seat on the Board.
The overall purpose of this session is to help the organization identify future leaders and, more importantly, to encourage greater member participation in seeking nomination to the Board, thereby further strengthening organizational capacity.
|
|
Session Title: Introduction to Evaluation and Public Policy
|
|
Demonstration Session 344 to be held in Capitol Ballroom Section 5 on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Presidential Strand
|
| Chair(s): |
| Damon Thompson,
American Evaluation Association,
damon@eval.org
|
| Presenter(s): |
| George Grob,
Center for Public Program Evaluation,
georgefgrob@cs.com
|
| Abstract:
Evaluation and public policy are intimately connected. Such connections occur at national, state, and local government levels, and even on the international scene. The interaction moves in two directions: sometimes evaluation affects policies for public programs, and sometimes public policies affect how evaluation is practiced. Either way, the connection is important to evaluators. This session will explain how the public policy process works. It will guide evaluators through the maze of policy processes, such as legislation, regulations, administrative procedures, budgets, re-organizations, and goal setting. It will provide practical advice on how evaluators can become public policy players, how they can influence policies that affect their very own profession, and how to get their evaluations noticed and used in the public arena. There will be opportunities for audience discussion of sensitive topics, such as how evaluators can protect their independence in a world of compromise and deal making.
|
|
Session Title: Behind the Scenes in AEA: The Process of Conducting the AEA Internal Scan
|
|
Demonstration Session 345 to be held in Capitol Ballroom Section 6 on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s): |
| Colleen Manning,
Goodman Research Group Inc,
manning@grginc.com
|
| Elizabeth Bachrach,
Goodman Research Group Inc,
bachrach@grginc.com
|
| Discussant(s): |
| Leslie Goodyear,
Education Development Center Inc,
lgoodyear@edc.org
|
| Abstract:
You've had access to the AEA Internal Scan report on the AEA website, read about it in AEA newsletters, and perhaps you were one of the more than 2,500 members who gave input via the survey, interviews, or online discussions. Come to this session for a step-by-step presentation of how the scan was accomplished and a discussion of the scan's strengths and lessons learned. You'll learn how the scan was conceptualized; how the contractor, the AEA office, and an AEA task force worked together to conduct the scan; how the scan involved AEA Board, committee, and member stakeholders; and how these stakeholder groups responded to the scan. You'll also learn some practical information about conducting a large-scale online survey as well as online discussion groups. Finally, hear about the humor and humility involved in collecting data from thousands of other evaluators!
|
| Roundtable:
Using Coaching Techniques in Logic Model Development to Help Clients Think Clearly |
|
Roundtable Presentation 347 to be held in the Limestone Boardroom on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Program Theory and Theory-driven Evaluation TIG
|
| Presenter(s):
|
| Maggie Miller,
Maggie Miller Consulting,
maggie@maggiemiller.org
|
| Abstract:
In working collaboratively with stakeholders to create logic models, evaluators may notice that the process can be enlightening for the client. In the course of creating the logic model a stakeholder may notice gaps between activities and outcomes, the grandiosity or paucity of impacts, or the mismatch of resources and activities; all of which help him/her revise the program plan. The techniques of coaching – asking reflective questions, using paraphrasing to synthesize ideas, challenging clients to consider unexplored possibilities – are handy skills in this process. At this roundtable, a short presentation and a discussion focused around targeted questions will help us identify coaching skills that we can refine in order to be most helpful to clients when we work with them to create logic models.
|
| Roundtable:
Extension Volunteers as Program Evaluators: Extension Evaluation Capacity |
|
Roundtable Presentation 348 to be held in the Sandstone Boardroom on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Presenter(s):
|
| Nancy Franz,
Virginia Polytechnic Institute and State University,
nfranz@vt.edu
|
| Abstract:
Public and private funders of Extension have increased accountability expectations for program impact, which in turn increases the need to build evaluation capacity throughout the organization. Volunteers, a strength of Cooperative Extension, are a great resource for measuring and sharing the impact of Extension programs. This roundtable provides a curriculum for training Extension volunteers in program evaluation to extend evaluation capacity and thinking in the organization. Participants will be asked to share their thoughts and practices regarding volunteers as evaluators.
|
| Roundtable:
Evaluating Customer Purchasing Behavior of Compact Fluorescent Light Bulbs |
|
Roundtable Presentation 349 to be held in the Marble Boardroom on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Environmental Program Evaluation TIG
|
| Presenter(s):
|
| Jennifer Canseco,
KEMA Inc,
jenna.canseco@us.kema.com
|
| Kevin Price,
KEMA Inc,
kevin.price@kema.com
|
| Daisy Allen,
KEMA Inc,
daisy.allen@kema.com
|
| Abstract:
Compact fluorescent light bulbs (CFLs) can now be installed in nearly every type of lighting fixture, save about $50 over the life of the bulb, and are available in different degrees of color warmth. Despite the wide availability of various CFL bulb types, most consumers choose not to install CFLs in all of their fixtures. Three main evaluation methods have been used to investigate the motivation behind this purchasing behavior. In this presentation, we will draw on our experience to discuss the advantages and limitations of phone surveys, in-depth site visits, and in-store intercepts as evaluation methods for investigating this issue. Following the presentation, we will open the discussion to conference participants.
|
|
Session Title: Measuring the Effects of Organizational Capacity Building in Israeli Social Change Nonprofits
|
|
Think Tank Session 350 to be held in Centennial Section A on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Presenter(s):
|
| Nancy Jill Strichman,
Independent Consultant,
strichman@ie.technion.ac.il
|
| Discussant(s):
|
| William Bickel,
University of Pittsburgh,
bickel@pitt.edu
|
| Abstract:
Capacity building for organizational growth and learning is receiving growing attention in the nonprofit sector (Strichman, Bickel & Marshood, 2007; Blumenthal, 2003). Accordingly, there is a crucial need in the evaluation community to develop practical ways to gauge whether such capacity building efforts are having their intended effects (Light, 2004; Wing, 2004). Organizational assessment tools are increasingly used by funders and evaluators as part of an effort to gain insight into both the capacity building process and nonprofits' organizational culture and its link to organizational performance (Davidson, 2001; Light, 2004; Guthrie & Preston, 2003). This think tank focuses on an evaluation of capacity building in Israel being undertaken by Shatil, the New Israel Fund's Empowerment and Training Center for Social Change Organizations. The work is being evaluated via a number of methods; central to this discussion will be the presentation of a nonprofit self-assessment tool for measuring organizational capacities.
|
|
Session Title: Summative Confidence: The Mathematical Algorithm
|
|
Expert Lecture Session 351 to be held in Centennial Section B on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Brooks Applegate,
Western Michigan University,
brooks.applegate@wmich.edu
|
| Presenter(s): |
| Cristian Gugiu,
Western Michigan University,
crisgugiu@yahoo.com
|
| Abstract:
How can one estimate the precision of an evaluative conclusion? While confidence intervals (CI) are the standard method of examining the precision of a single variable, evaluative conclusions are formulated by synthesizing multiple indicators, measures, and data sources into a composite variable. Unfortunately, the standard method for calculating a CI is inappropriate in such cases. Moreover, no method exists for estimating the combined impact of sampling, measurement error, and design on the precision of an evaluative conclusion. Consequently, evaluators formulate recommendations and decision makers implement program and policy changes without full knowledge of the precision of an evaluative conclusion. This presentation will demonstrate how the Summative Confidence method can be used to estimate the impact of over a dozen factors on the precision of a conclusion. Therefore, evaluators will no longer need to assume that their conclusions are accurate. Instead, they can estimate the measurement error of each conclusion.
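The Summative Confidence algorithm itself is not spelled out in this abstract. Purely as a hedged illustration of the underlying problem it describes, namely propagating sampling variability through a composite built from several indicators, the short Python sketch below computes a bootstrap confidence interval for a composite score. All data and variable names are hypothetical, and the sketch does not reproduce the presenter's method or account for measurement error and design effects.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: 200 program sites rated on three indicators (1-5 scale).
# These are simulated values, not data from the session.
indicators = rng.normal(loc=[3.2, 3.8, 2.9], scale=0.6, size=(200, 3)).clip(1, 5)

# Composite score: unweighted mean of the indicators for each site.
composite = indicators.mean(axis=1)

# Bootstrap CI for the mean composite score: one generic way to carry
# sampling variability through a synthesized measure. It does not capture
# measurement error or design effects, which the Summative Confidence
# method is described as addressing.
boot_means = np.array([
    rng.choice(composite, size=composite.size, replace=True).mean()
    for _ in range(5000)
])
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"Composite mean = {composite.mean():.2f}, 95% bootstrap CI = ({lower:.2f}, {upper:.2f})")
```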
|
|
Session Title: Reflections About New Research on University-based Evaluation Training Programs: Implications for Policy and Practice
|
|
Think Tank Session 352 to be held in Centennial Section C on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Presenter(s):
|
| John LaVelle,
Claremont Graduate University,
john.lavelle@cgu.edu
|
| J Bradley Cousins,
University of Ottawa,
bcousins@uottawa.ca
|
| Discussant(s):
|
| Stewart I Donaldson,
Claremont Graduate University,
stewart.donaldson@cgu.edu
|
| Abstract:
The pre-service preparation of evaluators through university-based training programs (UBTPs) has been the subject of sporadic inquiry by the professional evaluation associations, occasionally culminating in the publication of a UBTP directory. Although the profession of evaluation has developed greatly over the past fourteen years, the last comprehensive directory was published in 1994 (Altschuld, Engle, & Kim, 1994), leaving evaluation practitioners and policy-makers alike unsure of the current state of UBTPs. This has important implications for evaluation policy, because decision makers need to determine fit and competencies when selecting evaluators. Drawing on a combination of Internet research methodologies, curriculum analysis, and qualitative interviewing, this in-depth research will present a current view of UBTPs in the United States, with a focus on informing evaluation policy.
|
|
Session Title: Evaluating in the Face of Uncertainty: Anticipation and Agility to Improve Evaluation Quality
|
|
Expert Lecture Session 353 to be held in Centennial Section D on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Chair(s): |
| Michael Quinn Patton,
Utilization Focused Evaluation,
mqpatton@prodigy.net
|
| Presenter(s): |
| Jonathan A Morell,
TechTeam Government Solutions,
jonny.morell@newvectors.net
|
| Abstract:
When we design evaluations we consider models, theory, stakeholders, information use, roles, methodology, data, funding, deadlines, and logistics. The science, technology, and craft of evaluation would be strengthened if another set of considerations were added to the mix, i.e., the need to deal with unanticipated consequences. Or, put another way, how to contend with surprise. This session will present a framework for dealing with evaluation surprise, and will provide specific methods within that framework. Key elements include: 1) the continuum between 'highly anticipatable' and 'truly unpredictable', 2) the evaluation life cycle, 3) planning and agility in evaluation, and 4) similarities between evaluations and the programs being evaluated. The goals of the presentation are to: 1) improve evaluations that are being done, and 2) further the development of an invisible college of evaluators who are interested in developing this aspect of our field.
|
|
Session Title: Teaching Evaluation: Building Skills in Both Evaluators and Stakeholders
|
|
Multipaper Session 354 to be held in Centennial Section E on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Graduate Student and New Evaluator TIG
|
| Chair(s): |
| Leslie Fierro,
Claremont Graduate University,
leslie.fierro@cgu.edu
|
|
Building Individual and Organizational Capacity through a Pilot Evaluation Fellowship Program: How Collaborative and Constructivist Approaches Enhance Learning and Strengthen Ties Across Sectors, Through the Eyes of a Graduate Student
|
| Presenter(s):
|
| Jeanne F Zimmer,
University of Minnesota,
zimme285@umn.edu
|
| Abstract:
This paper focuses on the first year of an Evaluation Fellowship Program from the perspective of a graduate student involved in the planning, implementation, and evaluation of the program. The pilot program was geared toward bringing together representatives from different sectors to build the capacity of each and increase their ability to work collaboratively. This unique approach combined a year-long process of identifying and engaging in capacity building within a specific topical area with a strong focus on enhancing the understanding and practice of evaluation. The program was designed to go beyond “one more training session” on evaluation: it combined educational, technical assistance, and peer-networking efforts to build individual and organizational evaluation capacity. The paper examines the degree to which the goals of the pilot project were met and explores the experiences of individual and organizational participants. Recommendations are made for program design and evaluation for future cohorts.
|
|
A Collaborative, Internal Evaluation of a Pre-Service Teacher Education Course Guided by Complexity Thinking
|
| Presenter(s):
|
| Michelle Searle,
Queen's University,
michellsearle@yahoo.com
|
| Abstract:
This paper reports on the second year of a collaborative, internal evaluation of PROF 150/155: Concepts in Teacher Education, a course for Bachelor of Education students at Queen's University. Data were collected using multiple methods over two years to target teachers': a) understanding of assessment and evaluation issues in practice; b) readiness to engage in these areas; and c) the impact of the course in contributing to professional needs related to assessment and evaluation. The evaluation was conceptualized using a collaborative and developmental framework that is supportive of ongoing organizational development (Patton, 1999; Wesley, Zimmerman, & Patton, 2006). This paper uses complexity thinking (Denis, Sumara, & Luce-Kapler, 2008) to understand the impact of the course on multiple levels within the organization. Finally, it reports on evaluative practice as an opportunity for learning by examining possibilities both in course design and in learning about evaluation and assessment.
|
| |
|
Session Title: Using Internet-Based Enumeration Surveys to Collect Post-Training Implementation Data
|
|
Expert Lecture Session 356 to be held in Centennial Section G on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Integrating Technology Into Evaluation TIG
|
| Chair(s): |
| Patricia Mueller,
Evergreen Educational Consulting,
eec@gmavt.net
|
| Presenter(s): |
| Jim Frasier,
University of Wisconsin Madison,
jfrasier@education.wisc.edu
|
| Discussant(s): |
| Larry Wexler,
United States Department of Education,
larry.wexler@ed.gov
|
| Abstract:
The US Department of Education Office of Special Education Programs annually awards, on a competitive basis, about 48 million dollars to more than 40 states to train special education and regular education professionals. One of six performance measures requires states to evaluate the practices (e.g., mentoring, coaching, structured guidance, modeling, and continuous inquiry) that are in place and that professionals are using to sustain the learning acquired from this federally funded training initiative. Session attendees will learn how Wisconsin is using low-cost Internet-based enumeration surveys to collect post-training data from multiple individuals within school-level work environments to inform this performance measure. Dr. Frasier is the Lead Evaluator of Wisconsin's annual 1.4 million dollar Office of Special Education Programs training grant (2002 - present). He received his doctorate in training and development, and evaluation at the University of Illinois at Urbana-Champaign.
|
|
Session Title: Informing Policy Actions With Cost/Benefit Evaluation Synthesis
|
|
Expert Lecture Session 357 to be held in Centennial Section H on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
|
| Chair(s): |
| Michael Scriven,
Claremont Graduate University,
scriven@hotmail.com
|
| Presenter(s): |
| Ronald Visscher,
Western Michigan University,
ronald.s.visscher@wmich.edu
|
| Abstract:
A brief overview of traditional approaches and challenges in meta-analysis of cost/benefit studies will first be provided. These techniques and their limitations will be presented in light of their significance within the fields of education and health. Then an integrated approach will be introduced that has been designed to overcome many of the traditional challenges. The presentation will serve to demonstrate how cost/benefit meta-analysis can be used to better inform ex-ante evaluation of policy and program options. Examples applying the approach will show how it works prospectively to improve the synthesis and harmonization of policy and program evaluation studies and results.
|
|
Session Title: Collaborating With Information Technology Personnel: How Technology Can Improve Evaluation Designs
|
|
Demonstration Session 358 to be held in Mineral Hall Section A on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Integrating Technology Into Evaluation TIG
|
| Presenter(s): |
| Robert Lefevre,
University of Wyoming,
blefevre@uwyo.edu
|
| Tiffany Comer Cook,
University of Wyoming,
tcomer@uwyo.edu
|
| Abstract:
How can technology improve your evaluation design? Whether you need a multidimensional database or an interactive graphing site, 21st century technology can efficiently collect, manage, maintain, and distribute evaluation data. This session will demonstrate the multiple ways evaluators at the Wyoming Survey & Analysis Center (WYSAC) have used their Center for Information Systems and Services (CISS) to enhance their evaluation designs. Attendees will be introduced to several technological tools that can inform programming decisions, manage and store data, and creatively present data. Specifically, presenters will provide step-by-step explanations of an agent-based simulation model for measuring outcomes, two data collection and management tools, and two interactive graphing websites.
|
|
Session Title: Foundations for Evaluating Staff Development: Principles into Practice
|
|
Demonstration Session 359 to be held in Mineral Hall Section B on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Human Services Evaluation TIG
and the Pre-K - 12 Educational Evaluation TIG
|
| Presenter(s): |
| James Sass,
Los Angeles Unified School District,
jimsass@earthlink.net
|
| Abstract:
There is much more to evaluating staff development activities than using post-session satisfaction forms. This Demonstration addresses foundational principles and practices for useful evaluation of staff development initiatives. These principles and practices include identifying key aspects of the professional development initiative, measuring outcomes based on a logic model for the initiative, follow-up data collection on such areas as participant learning and implementation, factors outside of the staff development activities that could affect implementation and outcomes, and innovative techniques for assessing various benefits derived from staff development. The session will begin by addressing the inadequacy of the all-too-common post-session satisfaction form. The next step will be the presentation of principles for effective evaluation of staff development. These principles will be followed by discussion of concrete practices and techniques for evaluating staff development. Participants will receive handouts, including a bibliography, to support their implementation of session content and continued learning.
|
|
Session Title: Striving for Alignment: One Funder's Lessons in Supporting Advocacy
|
|
Demonstration Session 360 to be held in Mineral Hall Section C on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Presenter(s): |
| Tanya Beer,
The Colorado Trust,
tanya@coloradotrust.org
|
| Ehren Reed,
Innovation Network Inc,
ereed@innonet.org
|
| Abstract:
How does a nonprofit maintain strategic focus when asked to submit 10 different evaluation reports to 10 different funders and each report requires a different set of measures? What if just two of those funders agreed on a common set of outcomes or a standard reporting format? The concept of alignment is a critical element in the field of evaluation. Alignment not only eases the burden of evaluation reporting for grantees, but can help both nonprofits and funders move closer towards realizing a common vision for success.
This demonstration will highlight the ongoing efforts of The Colorado Trust to incorporate principles of alignment in their new strategic grantmaking. The facilitator will discuss the benefits and challenges of aligning outcomes and reporting requirements across two foundations by providing various tools, examples, and emerging lessons from The Trust's own experiences.
|
|
Session Title: Three Essential C's of Data Collection: Collaboration, Coordination, and Communication in a Multi-Site Evaluation
|
|
Panel Session 361 to be held in Mineral Hall Section D on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Manolya Tanyu,
Learning Point Associates,
manolya.tanyu@learningpt.org
|
| Discussant(s):
|
| Mary Nistler,
Learning Point Associates,
mary.nistler@learningpt.org
|
| Abstract:
In today's public education system, clients expect large, multi-site evaluations to provide guidance for urgent management and policy decisions. The quality of the data largely depends on comprehensive coordination, communication, and collaboration efforts during data collection. In this two-part panel, we discuss the process of implementing a multi-site audit of English Language Arts curricula and instruction in 12 districts comprising 314 schools. The project was managed by one educational consulting organization that subcontracted with two other organizations for their content-area expertise. Panelists represent team members who were closely involved in the planning and implementation phases of the audit. Our presentations will focus on: (1) building and improving the evaluation capacities of a cross-disciplinary staff with an array of evaluation experience, and (2) creating structures and systems to ensure high-quality data. We will discuss strategies that worked and did not work, and how the project has evolved based on lessons learned.
|
|
Building Evaluation Capacity in a Large Scale Project
|
| Katie Dahlke,
Learning Point Associates,
katie.dahlke@learningpt.org
|
| Brenna O'Brien,
Learning Point Associates,
brenna.o'brien@learningpt.org
|
|
The project discussed in this panel involved a partnership of three educational consulting organizations. Thus, the data collection team was composed of cross-disciplinary staff, contractors, and temporary employees. Although the project grew stronger with the contributions of each partner, it was necessary to train all staff to implement a standardized, high-quality data collection process. The presentation describes the procedures that were followed during the planning phase of the evaluation project. These included an all-staff kick-off meeting; training of staff on multiple data collection instruments and methods; creation of a collaborative team structure in which each member had an identified set of roles and expectations; creation of an intercommunication web system; and development of policies, procedures, and guidelines for data collection. The presenters will give an overview of these activities and discuss challenges, strategies that worked, and lessons learned.
|
|
|
Implementing an Open And Dynamic Data Collection System
|
| Manolya Tanyu,
Learning Point Associates,
manolya.tanyu@learningpt.org
|
| Christina Bonney,
Learning Point Associates,
christina.bonney@learningpt.org
|
|
One of the major challenges the project posed was that data collection was carried out by a large and diverse group of staff, stationed in different states around the country and working under different workload arrangements (full-time, part-time, temporary), who traveled long distances to multiple sites for data collection. Additionally, data collection involved working with 11 rural districts, 1 urban district, and 134 schools. Scheduling various methods of data collection (e.g., observation, interviews, document review) by different groups, coordinating a team structure, and using a centralized data management system required extensive communication, coordination, and collaboration within and between internal organizational staff, partnering organizations, and the client. This presentation provides an overview of the team structure, data collection procedures, and the strategies and tools used to monitor the process. The presentation will also include the experiences and lessons learned as implementation went forward, and the improvements that were made based on those lessons.
| |
|
Session Title: Evaluation Managers and Supervisors TIG Business Meeting and Presentation: Managing Evaluation: Mediating Evaluation Policy and Evaluation Practice
|
|
Business Meeting Session 362 to be held in Mineral Hall Section E on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Evaluation Managers and Supervisors TIG
|
| TIG Leader(s): |
|
Ann Maxwell,
United States Department of Health and Human Services,
ann.maxwell@oig.hhs.gov
|
|
Susan Hewitt,
Health District of Northern Larimer County,
shewitt@healthdistrict.org
|
|
Laura Feldman,
University of Wyoming,
lfeldman@uwyo.edu
|
| Presenter(s): |
| Donald Compton,
Centers for Disease Control and Prevention,
dcompton@cdc.gov
|
| Michael Baizerman,
University of Minnesota,
mbaizerman@umn.edu
|
| Michael Schooley,
Centers for Disease Control and Prevention,
mschooley@cdc.gov
|
| Robert Rodosky,
Jefferson County Public Schools,
robert.rodosky@jefferson.kyschools.us
|
| Marco Munoz,
Jefferson County Public Schools,
marco.munoz@jefferson.kyschools.us
|
| Abstract:
The management of evaluation studies, evaluators, and an evaluation unit stands between evaluation policy and evaluation practice. We have submitted a co-edited New Directions for Evaluation volume on managing these three elements, and our paper will preview the text.
We outline the strengths and weaknesses of the two classical answers: insiders (evaluators) and outsiders.
Further, we characterize evaluators as "knowledge workers" and examine this small literature for practical suggestions and concepts for this level of managing. Then we look at managing a unit, homogeneous and heterogeneous, when the manager is an insider or outsider. In addition, we suggest guidelines for educating and training insider and outsider managers. Finally, we present guidelines for managing evaluation based on the literature, three case studies, two essays by federal managers on their practical experiences, and our own professional work.
The result will be a sample of content from the NDE volume, serving as an introduction to our contribution to the understanding and practice of managing evaluation studies, workers, and units.
|
|
Session Title: Pretty Data: Building a Data-Engaged Culture through User-Friendly Electronic Charting Tools
|
|
Demonstration Session 363 to be held in Mineral Hall Section F on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Presenter(s): |
| Cheryl Walter,
California Services for Technical Assistance and Training,
li@sonic.net
|
| Alan Wood,
California Services for Technical Assistance and Training,
alan.wood@calstat.org
|
| Abstract:
The evaluators of the California State Improvement Grant have fostered the use of data in California schools through the development and distribution of simple tools that electronically translate data into visual charts so people can SEE what is going on. This approach builds a data-engaged culture, shortens the turnaround between data collection and usable feedback, and helps leverage the resources of a small technical assistance project in describing program implementation and outcomes. Three data tools will be demonstrated. The TIC (Team Implementation Checklist) is an Excel file containing a survey form and providing instantaneous, longitudinal charts describing the responding sites' degree of program implementation. The CST (California Standards Test) Charting tool, another Excel file, allows users with minimal skills to create sophisticated charts representing the literacy proficiency levels of diverse populations over time. TED (Training Evaluation Database) is a FileMaker Pro event-tracking database that summarizes event evaluations with automatically generated charts.
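The TIC, CST, and TED tools described above are Excel and FileMaker Pro files. Purely to illustrate the general idea of automatically turning collected data into a longitudinal chart, and as a deliberate language swap rather than a depiction of the presenters' tools, the hedged Python/matplotlib sketch below uses invented data and labels.

```python
import matplotlib.pyplot as plt

# Hypothetical checklist data: percentage of implementation items in place
# at one site across four quarterly administrations. These values and labels
# are invented for illustration; they are not from the TIC, CST, or TED tools.
quarters = ["Q1", "Q2", "Q3", "Q4"]
percent_implemented = [35, 50, 70, 85]

fig, ax = plt.subplots()
ax.plot(quarters, percent_implemented, marker="o")
ax.set_ylim(0, 100)
ax.set_ylabel("Percent of checklist items in place")
ax.set_title("Program implementation over time (simulated site)")
fig.savefig("implementation_chart.png")  # immediate, shareable visual feedback
```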
|
|
Session Title: Innovative Approaches to Engaging Residents in all Phases of a Participatory Action Research Project
|
|
Multipaper Session 364 to be held in Mineral Hall Section G on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Qualitative Methods TIG
|
| Chair(s): |
| Pennie Foster-Fishman,
Michigan State University,
fosterfi@msu.edu
|
| Abstract:
Participatory action research (PAR) is an effective venue for promoting resident power and participation within communities, particularly when it incorporates knowledge generation, critical consciousness raising, and action. As Gaventa and Cornwell (2001) note, it is only when PAR involves all three dimensions that empowerment is truly realized. While PAR has become an increasingly popular method used by evaluators, rarely does it involve all three of these elements. In addition, though PAR can involve ordinary citizens in problem identification, analysis, intervention, and feedback, most projects tend not to involve residents in all phases. In this panel presentation, presenters will describe their PAR work with youth and adults and will discuss how they have engaged them in all three participatory processes throughout all stages of PAR. They will pay particular attention to the strategies they have developed to engage residents in data analysis - perhaps the most neglected phase in PAR.
|
|
Engaging Youth and Adults in Designing and Implementing a Neighborhood-Based Participatory Action Research Project
|
| Erin Droege,
Michigan State University,
droegeer@msu.edu
|
|
This session will describe the process used to engage youth and adult residents from eight low-income neighborhoods in a community-based participatory evaluation project. Initial stages of this process served to develop the residents' evaluation capacity, including identifying root causes of community problems, understanding the use of evaluation in the cycle of community change, and gaining skills to interview and analyze evaluation data. The process then provided opportunities for ongoing cycles of investigation and reflection in which residents developed research questions, collected qualitative and geographic data through interviews in the community, analyzed the findings, and used critical reflection to inform the next stages of data collection. The session will incorporate selected capacity-building materials created for the group as well as examples of residents' analyzed data. The session will conclude with a discussion of the challenges and lessons learned through the project.
|
|
Youth ReACT for Social Change
|
| Kristen Law,
Michigan State University,
lawkrist@msu.edu
|
|
Engaging youth in data analysis and social action are essential steps in Youth Participatory Action Research (YPAR). YPAR is an effective method for promoting youth power and participation within communities, particularly when it incorporates youth knowledge generation, critical consciousness raising, and action. To date there is a lack of models and strategies that illustrate how to effectively engage youth in these types of processes. For this panel I will present a series of games designed and effectively used to involve youth in all stages of a qualitative data analysis effort. Panel attendees will have an opportunity to learn how to use these activities to promote critical consciousness and social action.
|
| Roundtable:
Evaluation of the Life Stories Outreach Program |
|
Roundtable Presentation 365 to be held in the Slate Room on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Presenter(s):
|
| Deborah Levy,
SuccessLinks LLC,
debklevy@successlinks.biz
|
| Abstract:
The Life Stories for Incarcerated and At-Risk Youth Program has served more than 650 young people, including newly arrived immigrant girls, severely emotionally and behaviorally challenged teens residing in residential treatment facilities, adolescent sexual offenders, and incarcerated teens. In the eight years that the program has been running, it has struggled to incorporate evaluation activities because of the transient population of participants as well as staff turnover. This past year, however, the Community Foundation of Greater Washington received an anonymous donation to have an outside consultant conduct a thorough impact evaluation of the program.
This roundtable will provide an overview of the strategies the evaluator used to assess the program, as well as the successes and challenges encountered.
|
|
Session Title: Complex Challenges in Evaluating Advocacy: Internal Governance Structures and Public Policy Dispute Resolutions
|
|
Multipaper Session 366 to be held in the Agate Room Section B on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Bonnie Shepard,
Social Sectors Development Strategies Inc,
bshepard@ssds.net
|
|
Key Questions in Evaluation of Governance Issues in Advocacy Coalitions: Insights From a Study of NGO Advocacy Networks in Latin America
|
| Presenter(s):
|
| Bonnie Shepard,
Social Sectors Development Strategies Inc,
bshepard@ssds.net
|
| Abstract:
This study from Latin America on internal governance issues in 13 national and regional advocacy networks points to the importance of systematic analysis of internal governance structures and processes in evaluations of advocacy coalitions. Without such analysis, important factors in a coalition's ability to meet its objectives will not come to light. The study examined membership and leadership structures, decision-making rules, level of chapter autonomy and representation, and the trade-off between increased diversity in membership and the ability to achieve consensus on political actions and statements. The influence of external factors on internal governance is also important to consider. In this study, the financial stability of members and the level of controversy attached to particular advocacy issues strongly affected the internal governance of the networks and, in some cases, resulted in an inability to make joint public statements or take unified action.
|
|
Learning From Your Neighbor: The Value of Public Participation Evaluation for Public Policy Dispute Resolution
|
| Presenter(s):
|
| Maureen Berner,
University of North Carolina Chapel Hill,
mberner@sog.unc.edu
|
| John Stephens,
University of North Carolina Chapel Hill,
stephens@sog.unc.edu
|
| Abstract:
Most evaluations of Public Policy Dispute Resolutions (PPDR) simply assess the existence of public participation rather than truly evaluating its quality, value, or impact. This separation is a detriment to achieving a stronger perspective on PPDR as a whole. Can methods of evaluating public participation be effectively incorporated into evaluating PPDRs? We first compare the two fields, highlighting the ability to borrow strength from each. We then examine the relevant PPDR literature, finding that the evaluation gap in PPDR can be addressed by more explicitly incorporating theory and methods from public participation evaluation. In particular, we find the methods may be more successfully incorporated if one goal of the PPDR process is a public view of legitimacy. Finally, we suggest a specific model that can be used in PPDR evaluations.
|
| |
|
Session Title: Evaluating Community Policing at the Local Level
|
|
Panel Session 368 to be held in the Granite Room Section A on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Crime and Justice TIG
|
| Chair(s): |
| Deborah Spence,
United States Department of Justice,
deborah.spence@usdoj.gov
|
| Discussant(s):
|
| Rebecca Mulvaney,
ICF International,
rmulvaney@icfi.com
|
| Abstract:
This session will provide an overview of evaluation research on the implementation of community policing, with an emphasis on projects that have been sponsored by the U.S. Department of Justice, Office of Community Oriented Policing Services (COPS Office). Focusing on the local level rather than on the evaluation of national programs, the session will also provide a detailed look at the development and testing of the Community Policing Assessment Tool, and discuss its use in agency level evaluations. Finally, the session's discussant will offer a perspective on using the Assessment Tool and how agencies and evaluators can build partnerships that will support evaluation efforts and ensure both partners benefit from the results.
|
|
Overview of the Office of Community Oriented Policing Services and Efforts to Evaluate Community Policing
|
| Deborah Spence,
United States Department of Justice,
deborah.spence@usdoj.gov
|
|
Deborah Spence is a Senior Social Science Analyst in the Program/Policy, Support and Evaluation Division of the COPS Office. She will provide an overview of the COPS Office and its work to encourage the evaluation of community policing at both the local and aggregate level. In doing so she will discuss the strengths and weaknesses of previous work and how the lessons learned helped inform the creation of the Community Policing Assessment Tool.
|
|
|
Creating and Testing a Community Policing Assessment Tool
|
| Robert Chapman,
United States Department of Justice,
robert.chapman@usdoj.gov
|
|
Robert Chapman is the Supervisory Social Science Analyst for the Program/Policy, Support and Evaluation Division at the COPS Office, where he served as the program manager for the development of the Community Policing Assessment Tool. He will discuss the process of developing the Assessment Tool and the lessons learned from the pilot tests. He will also highlight the role that a community-based evaluator can play in the application of the Assessment Tool, and how the results of the tool can assist in the development of an ongoing partnership with an agency focused on outcome and impact assessments.
| |
|
Session Title: Mass Media Health Campaign Evaluations
|
|
Multipaper Session 369 to be held in the Granite Room Section B on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Ruth Mohr,
Mohr Program Evaluation and Planning,
rmohr@pasty.com
|
|
Working with Polling Firms for the Evaluation of Mass Media Campaigns: Lessons from the Trenches
|
| Presenter(s):
|
| Corinne Hodgson,
Corinne S Hodgson and Associates Inc,
corinne@cshodgson.com
|
| Abstract:
Public opinion polling is a common and useful means of evaluating mass media campaigns but has both strengths and weaknesses. Strengths include the technological and manpower resources polling firms can bring to the task, as well as their capacity to help in questionnaire and sampling design and to provide detailed and sophisticated reports for clients. But there are a number of weaknesses or issues that must also be considered, including the high cost of custom surveys, the lack of information on incomplete interviews or refusals, and the use of standard reporting templates. The strengths and weaknesses, as well as ideas for optimizing working relationships with polling firms, will be illustrated with examples from work conducted with the Heart and Stroke Foundation of Ontario for the evaluation of tobacco control and stroke awareness mass media campaigns.
|
|
Evaluating Mass Communication Campaigns: Key Issues and Alternative Approaches
|
| Presenter(s):
|
| Seth Noar,
University of Kentucky,
noar@uky.edu
|
| Abstract:
Mass media campaigns have long been a tool to promote public health. While campaigns have been studied for decades, the poor evaluation of many campaigns has slowed efforts to determine their effects. Unlike other intervention approaches that lend themselves to randomized controlled trials, health communication campaigns often involve an entire country or region, making randomization to conditions difficult if not impossible.
The purpose of the current paper is to discuss the challenges that arise when trying to evaluate campaigns, and to recommend some solutions. The proposed solutions are informed in part by an expert panel that was convened to discuss this issue at the Kentucky Conference on Health Communication in April, 2008. Areas to be addressed in the paper include the strengths and weaknesses of commonly used evaluation designs; alternative evaluation designs; time series designs; use of propensity scoring to enhance evaluation; and feasibility of randomized controlled designs.
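As a hedged sketch of one of the alternatives mentioned above, propensity scoring, the Python example below estimates exposure propensities with scikit-learn and applies inverse-propensity weighting to compare exposed and unexposed survey respondents. The data, variable names, and effect sizes are all simulated for illustration; this is not the expert panel's recommended design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical survey data: two covariates (age, media use), self-reported
# campaign exposure, and a binary health-behavior outcome. Simulated here
# purely to make the sketch runnable.
age = rng.normal(45, 12, n)
media_use = rng.normal(3, 1, n)
exposure = (rng.random(n) < 1 / (1 + np.exp(-(0.03 * (age - 45) + 0.5 * media_use - 1.5)))).astype(int)
outcome = (rng.random(n) < 0.2 + 0.1 * exposure + 0.02 * media_use).astype(int)

X = np.column_stack([age, media_use])

# Step 1: model the probability of exposure (the propensity score).
ps_model = LogisticRegression().fit(X, exposure)
ps = ps_model.predict_proba(X)[:, 1]

# Step 2: inverse-propensity weighting to compare exposed vs. unexposed
# respondents as if exposure had been (approximately) randomized.
w = np.where(exposure == 1, 1 / ps, 1 / (1 - ps))
exposed_mean = np.average(outcome[exposure == 1], weights=w[exposure == 1])
unexposed_mean = np.average(outcome[exposure == 0], weights=w[exposure == 0])
print(f"Weighted exposure effect estimate: {exposed_mean - unexposed_mean:.3f}")
```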
|
| |
|
Session Title: New Tools for Analyzing Satisfaction Surveys
|
|
Demonstration Session 370 to be held in the Granite Room Section C on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s): |
| Gregg Van Ryzin,
Rutgers-Newark State University of New Jersey,
vanryzin@rutgers.newark.edu
|
| Abstract:
Evaluations often involve surveys of customers or clients that ask about their satisfaction with a program and its various services or components. This demonstration will show attendees how to use a set of analytical tools from market research, including key-driver analysis and importance-performance analysis, to extract more insight and evaluative information from a satisfaction survey. Specifically, these techniques reveal what specific aspects of a program matter most to clients and suggest where to focus fine-tuning or program improvement efforts. The tools are general and can be applied in various program areas, including health, education and training, criminal justice, human services, and other areas. The session will demonstrate these tools using real data from a survey of citizen satisfaction with public services in New York City, and questions will be answered about how to apply the techniques to your own satisfaction data.
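The specific key-driver and importance-performance calculations used in the session are not given in the abstract. The minimal Python sketch below, using simulated ratings and hypothetical attribute names, shows one common formulation: derived importance as the correlation of each attribute with overall satisfaction, performance as the attribute's mean rating, and a quadrant rule flagging high-importance, low-performance attributes as candidates for program improvement.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Hypothetical satisfaction survey: ratings (1-5) of three service attributes
# plus an overall satisfaction item. Simulated data for illustration only.
attributes = {
    "timeliness": rng.integers(1, 6, n).astype(float),
    "staff_courtesy": rng.integers(1, 6, n).astype(float),
    "facility_quality": rng.integers(1, 6, n).astype(float),
}
overall = (0.5 * attributes["timeliness"]
           + 0.3 * attributes["staff_courtesy"]
           + 0.2 * attributes["facility_quality"]
           + rng.normal(0, 0.5, n))

# Derived importance = correlation of each attribute with overall satisfaction;
# performance = the attribute's mean rating.
results = {}
for name, ratings in attributes.items():
    importance = np.corrcoef(ratings, overall)[0, 1]
    performance = ratings.mean()
    results[name] = (importance, performance)

# Quadrant logic: high-importance / low-performance attributes are the
# prime candidates for program improvement efforts.
mean_imp = np.mean([v[0] for v in results.values()])
mean_perf = np.mean([v[1] for v in results.values()])
for name, (imp, perf) in results.items():
    flag = "focus here" if imp > mean_imp and perf < mean_perf else ""
    print(f"{name:16s} importance={imp:.2f} performance={perf:.2f} {flag}")
```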
|
|
Session Title: Using Participatory Evaluation to Help Parents Redefine Success for Students and Schools
|
|
Think Tank Session 371 to be held in the Quartz Room Section A on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s):
|
| Elizabeth Kelly,
Independent Consultant,
ekelly.work@gmail.com
|
| Discussant(s):
|
| Brook Hedick,
Rye School District,
canoepad@earthlink.net
|
| Richard Wilde,
Independent Consultant,
rwilde@hotmail.com
|
| Abstract:
Data on test scores, school success, and college rankings have increased dramatically. Information sharing no longer waits for local school board meetings, but happens daily through the Internet. The intention of such data sharing is to help parents make informed decisions about schools and their children's future. However, unintended outcomes have created a culture of achievement and assumptions about school success that affect where we live, the taxes we pay, and the confidence we have in ourselves as parents. How is it that test scores, class size, attendance rates, and college acceptance remain the most prevalent forces behind what we believe about our children? Using participatory evaluation methods, we propose a model for parents and students to join schools in asking new questions about student and school success - ones that speak to families' very real concerns for their children as everyday learners and as citizens of the future.
|
| Roundtable:
Modeling up: The Application of Logic Modeling to Strategic Evaluation Planning |
|
Roundtable Presentation 372 to be held in the Quartz Room Section B on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Presenter(s):
|
| Mimi Doll,
Chicago Public Schools,
medoll@cps.k12.il.us
|
| Jessica Foster,
Chicago Public Schools,
jdfoster2@cps.k12.il.us
|
| Susan Ryerson Espino,
Chicago Public Schools,
sryerson-espino@cps.k12.il.us
|
| Alva Smith,
Chicago Public Schools,
alsmith17@cps.k12.il.us
|
| Abstract:
This roundtable session focuses on strategies to increase organizational coherence and evaluation capacity. Internal evaluators within a large urban school district will present ongoing experiences extending logic modeling to the departmental level during a strategic evaluation planning process with several large central district office departments. The short-term goals of this process include increasing stakeholder knowledge of evaluation techniques and involvement in the definition of key streams of work and outcomes. Longer-term goals include increasing the relevancy of evaluation, the appreciation for and distinctions between evaluation and monitoring tasks, inter- and intradepartmental coherence, internal capacity among the departments to self-monitor their work, and the articulation of evaluation and research agendas. This roundtable will discuss challenges and successes with these efforts and provide a forum for sharing additional tools that session participants have found helpful in their organizational-level work.
|
|
Session Title: Adapting Qualitative Techniques to Enhance Your Quantitative Evaluations of Community Based Health Interventions
|
|
Demonstration Session 373 to be held in Room 102 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s): |
| Julie Reeder,
Oregon Women, Infants, and Children Program,
julie.a.reeder@state.or.us
|
| Abstract:
Evaluations of large scale community health interventions tend to rely on quantitative measures to determine program effectiveness. Yet, a quantitative understanding alone is rarely adequate for creating new health promotion campaigns or revising existing services. In this demonstration, we will explore how qualitative techniques can be adapted to fit large, community based interventions or other situations where sample sizes or available resources limit the ability to conduct multiple, in-person interviews. Specifically, we will see how the principles of Phenomenology and Grounded Theory were modified to help bring greater meaning to a largely quantitative evaluation of breastfeeding support. In addition, we will debate how much a technique can be modified and still be considered true to the method. Finally, we will discuss strategies for increasing stakeholder and funder acceptance of qualitative techniques as well as ways for incorporating this approach into a greater number of health program evaluations.
|
|
Session Title: Using Program Evaluation to Make Curricula Resource Decisions
|
|
Think Tank Session 374 to be held in Room 104 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Rhoda Risner,
United States Army,
rhoda.risner@us.army.mil
|
| Discussant(s):
|
| Thomas Ward II,
United States Army Command and General Staff College,
thomas.wardii@us.army.mil
|
| Abstract:
The United States Army Command and General Staff College (USACGSC) leadership uses a systematic approach to curriculum development that requires curriculum change to be grounded in evidence of the need for change. The Accountable Instructional System (AIS) is the process used to evaluate all USACGSC curricula annually for change, additional prominence, or plunging. The course author presents evidence of analysis and measures of student learning in a program evaluation Post Instruction Conference (PIC). It is during the PIC that leadership determines the quality of the curricula and makes suggestions for expansion and/or changes. The think tank facilitator will make a short presentation about the AIS, including the program evaluation PIC process. That will be followed by a facilitated discussion about what systems other educational institutions use to make curriculum decisions. The facilitated discussion will also include a portion about what leaders need to know from a program evaluation in order to make decisions about educational resources.
|
|
Session Title: Improving the Collection, Analysis, and Reporting of Survey Data
|
|
Skill-Building Workshop 375 to be held in Room 106 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Gary Miron,
Western Michigan University,
gary.miron@wmich.edu
|
| Anne Cullen,
Western Michigan University,
anne.cullen@wmich.edu
|
| Abstract:
This skill-building session will present tools and techniques to help evaluators improve the collection, analysis, and reporting of survey data. The session will cover those tasks and activities that should occur before, during, and after conducting a survey. The presenters will draw upon their combined evaluation experience to present strategies for improving both paper and electronic surveying. Related topics that also will be covered include budgeting time and resources for surveys and strategies for presenting survey findings. A number of tools we have developed to facilitate the collection, sorting, analysis, and reporting of survey data will be demonstrated and shared. The presentation will include the description and discussion of lessons learned from numerous evaluations conducted by The Evaluation Center. Interactive use with a laptop and projector will facilitate the instructional process. Handouts including sample tools for collecting, processing, and reporting survey data also will be shared.
|
|
Session Title: Evaluation in Countries Affected by Conflict
|
|
Multipaper Session 376 to be held in Room 108 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Antoinette Brown,
Independent Consultant,
antoinettebbrown@juno.com
|
|
Standards Under Conflict: International Evaluation Practice in Peace-Precarious Situations
|
| Presenter(s):
|
| Catherine Elkins,
Research Triangle Institute,
celkins@gmail.com
|
| Abstract:
International assistance, reconstruction, and development have begun to occur routinely even in the most conflicted or fragile areas. Not only are such interventions often physically risky and under pressure to produce clear or dramatic humanitarian value, but volatile challenges in the situation also constantly push programs to innovate, sometimes in inconsistent directions. How can we accurately – or adequately – assess impact under these circumstances?
Projects scramble to keep up with their environment and to find ways to "do good" regardless, and evaluators must ensure that hypotheses continue to be tested against evidence so as to build knowledge of how best to help the most vulnerable populations effect self-sustaining improvements in security and quality of life. Drawing on the author's experience working in Iraq and Afghanistan, this paper examines AEA and other relevant principles for evaluation with the aim of developing a model for understanding how standards apply to international evaluation in peace-precarious situations.
|
|
Contributing to Quality Education for Children in Conflict Affected Fragile States: Findings of Four Case Study Evaluations
|
| Presenter(s):
|
| Cynthia Koons,
International Save the Children Alliance,
cynthia@save-children-alliance.org
|
| Abstract:
This paper examines effective ways of delivering quality education to children living in conflict-affected fragile states. In 2008, the International Save the Children Alliance conducted the first phase of a case study evaluation to identify promising practices in delivering quality education to children in four conflict-affected fragile states: Angola, Afghanistan, Nepal, and Southern Sudan. The evaluation took a participatory, child-friendly, mixed-methods approach. The research questions were: 1) How have Save the Children's project-level interventions contributed to quality primary education for children affected by conflict? and 2) What impact have specific project-level interventions had on the quality of education for children affected by conflict? In addition, each country-specific evaluation answered several contextually relevant sub-questions. Section one describes the research questions and methodology; section two describes the findings.
|
| |
|
Session Title: Evaluation Policy: Current Issues, Potential Impacts, and Future Directions
|
|
Expert Lecture Session 377 to be held in Room 110 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Government Evaluation TIG
and the Research, Technology, and Development Evaluation TIG
|
| Presenter(s): |
| Deborah Duran,
National Institutes of Health,
durand@od.nih.gov
|
| Abstract:
Policies often guide organizational management practices and standardize program implementation. Although demand for program accountability is increasing, evaluation and assessment policies are frequently not in place to respond proactively to performance requests. Evaluation policies at the level of a national governing body that address evaluator competence could substantially improve the consistency of practice and external perceptions of evaluation. Creating distinct policies for evaluation and for system assessments would facilitate their appropriate use in government, organization, and program performance assessment. As agencies strive to meet performance reporting requirements proactively, evaluation and assessment policies that prompt appropriate infrastructure should include 1) evaluation of single or similarly focused programs; 2) system assessments of large, complex programs and organizations; 3) credentialed evaluators and system assessors; 4) recommendations accompanied by realistic implementation plans; and 5) strategies to use findings in subsequent planning in order to enhance perceptions of the value added by assessing performance.
|
|
Session Title: Discussing Approaches to Evaluating a Multifocused Suite of Energy Programs at Natural Resources Canada
|
|
Think Tank Session 378 to be held in Room 112 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Presenter(s):
|
| Gavin Lemieux,
Natural Resources Canada,
gavlemie@nrcan.gc.ca
|
| Discussant(s):
|
| Olive Kamanyana,
Natural Resources Canada,
okamanya@nrcan-rncan.gc.ca
|
| Ann Cooper,
Natural Resources Canada,
acooper@nrcan-rncan.gc.ca
|
| Abstract:
The purpose of this think tank is to discuss and debate possible approaches for developing a unifying theme to evaluate multiple energy programs at Natural Resources Canada. These programs have unique goals, objectives, stakeholders, and methods, and each will undergo an individual program evaluation. There is, however, an additional expectation of a 'summative' or meta-evaluation report containing higher-level findings across all of these evaluations. This think tank will be facilitated by members of the NRCan evaluation division to help determine the questions and/or methods that might be used in each of the individual evaluations to facilitate reporting at a higher level, and to discuss how a unifying, meaningful theme that cuts across each evaluation might be developed. These programs are part of Canada's Clean Air Agenda, a major, multibillion-dollar federal investment to reduce greenhouse gases and air contaminants.
|
|
Session Title: Social and Neural Networks
|
|
Multipaper Session 379 to be held in Room 103 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Stephanie M Reich,
University of California Irvine,
smreich@uci.edu
|
|
Computational Modeling: Beyond Regression in Evaluation
|
| Presenter(s):
|
| Nicole Cundiff,
Southern Illinois University at Carbondale,
karim@siu.edu
|
| Nicholas G Hoffman,
Southern Illinois University at Carbondale,
nghoff@siu.edu
|
| Alen Avdic,
Southern Illinois University at Carbondale,
alen@siu.edu
|
| Abstract:
A model comparison will be conducted between multiple regression models and feed-forward backpropagation neural network models, using existing evaluation data, in order to assess the predictive ability of both types of models. The comparison seeks better ways to model predictive relationships in evaluation data that clients could use to support decision making. Regression is currently the most popular way to test prediction; however, it is a strict modeling technique that requires many assumptions to be met, including linearity and homoscedasticity. Feed-forward backpropagation networks are not bound by these parametric restrictions and can therefore yield more accurate and generalizable models from the data. Problems with the interpretation of such models will be discussed, along with information on the use of neural networks in evaluation and a brief discussion of how to explain findings to clients.
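As a rough illustration of the kind of comparison described above, the sketch below fits an ordinary least squares model and a small feed-forward network (trained by backpropagation) to the same data and compares their out-of-sample fit. It is a minimal sketch assuming scikit-learn and synthetic data; the variable names and dataset are illustrative and are not drawn from the authors' evaluation.

```python
# Minimal sketch: comparing a linear regression model with a feed-forward
# neural network on the same (synthetic) dataset. Illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # hypothetical predictors (e.g., survey scales)
# A nonlinear outcome, which violates the linearity assumption of OLS
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ols = LinearRegression().fit(X_train, y_train)
nn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                  random_state=0).fit(X_train, y_train)

print("OLS R^2:", r2_score(y_test, ols.predict(X_test)))
print("NN  R^2:", r2_score(y_test, nn.predict(X_test)))
```

On a nonlinear outcome like this one, the network will typically recover more of the held-out variance, which mirrors the assumption-sensitivity argument made in the abstract.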
|
|
Social Network Analysis for Evaluation: Open and Closed Approaches
|
| Presenter(s):
|
| Carl Hanssen,
Hanssen Consulting LLC,
carlh@hanssenconsulting.com
|
| Abstract:
This paper is based on work from two evaluations of Math Science Partnerships funded by the National Science Foundation (NSF). One goal of the Milwaukee Mathematics Partnership (MMP) is to strengthen school-based networks and build distributive leadership in schools. Similarly, a goal of the Life Sciences for a Global Community (LSGC) teacher institute is to develop a national network of high school teacher leaders.
Both evaluations incorporate social network analysis as a tool for exploring network development. This review contrasts the open and closed approaches used in the MMP and LSGC projects, respectively. An open approach allows participants to identify any individual whom they consider part of their professional network, which makes it possible to monitor the expansion of networks over time. The closed approach limits network participants to a defined list of individuals and enables monitoring of network changes for a specific set of individuals (e.g., institute participants).
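To make the open/closed distinction concrete, the sketch below builds two toy nomination networks with networkx: one restricted to a fixed roster (closed) and one that grows as respondents name new contacts (open). The rosters and nominations are invented for illustration and are not data from the MMP or LSGC evaluations.

```python
# Minimal sketch of open vs. closed network data collection, using networkx.
import networkx as nx

# Closed approach: nominations are restricted to a predefined roster.
roster = {"Ana", "Ben", "Chris", "Dana"}
closed_nominations = [("Ana", "Ben"), ("Ben", "Dana"), ("Chris", "Ana")]
G_closed = nx.DiGraph()
G_closed.add_nodes_from(roster)
G_closed.add_edges_from((a, b) for a, b in closed_nominations
                        if a in roster and b in roster)  # drop out-of-roster ties

# Open approach: respondents may name anyone, so the network can expand.
open_nominations = [("Ana", "Ben"), ("Ana", "Elena"), ("Ben", "Farid")]
G_open = nx.DiGraph()
G_open.add_edges_from(open_nominations)  # new names simply become new nodes

print("Closed network size:", G_closed.number_of_nodes())  # fixed at 4
print("Open network size:", G_open.number_of_nodes())      # grows as names appear
```

The closed design makes repeated measures comparable over time for a known set of individuals, while the open design captures network expansion at the cost of a changing node set.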
|
| |
|
Session Title: Real-Time Evaluation in Real Life
|
|
Panel Session 380 to be held in Room 105 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Gale Berkowitz,
David and Lucile Packard Foundation,
gberkowitz@packard.org
|
| Discussant(s):
|
| Bernadette Sangalang,
David and Lucile Packard Foundation,
bsangalang@packard.org
|
| Abstract:
Real-time evaluations aim to support ongoing learning and strategy development. These evaluations regularly create opportunities for learning and bring evaluation data, in accessible formats, to the table for reflection and use in decision making. They use evaluation processes and data to identify what about a program or strategy is or is not working and to identify midcourse corrections that can ultimately lead to better outcomes. While evaluation to support real-time learning and strategy sounds good in theory, it can be difficult to achieve in practice. This session will examine the experiences of two real-time evaluations designed to support long-term grant making programs within the David and Lucile Packard Foundation. Presenters will describe how their evaluations were designed (including specific examples of evaluation processes, methods, and data) and will share what they have learned, including some mistakes, about using this approach in real life.
|
|
Evaluation as Integral to Program Design
|
| Lande Ajose,
BTW Informing Change,
lajose@btw.informingchange.com
|
|
Initiatives in the early stages of program design and planning can benefit greatly from real-time evaluation because it provides a continuous feedback loop for refining strategy and clarifying objectives. Such has been the case with the David and Lucile Packard Foundation's grant making program focused on increasing the quality of and access to after-school programs in California, a landscape shaped by Proposition 49, a measure mandating that the state set aside $550M annually for after-school programs. Contrary to conventional wisdom, evaluating programs and strategies in the design phase requires evaluators to give up maintaining a 'critical distance' and instead to function as partners in program design and as conveners of others to form a learning community. This session will explore how evaluators can operate as both insiders and outsiders as programs and strategies unfold and still emerge with credible evaluation findings.
|
|
|
Evaluation to Support Advocacy Strategy and Learning
|
| Julia Coffman,
Harvard Family Research Project,
jcoffman@evaluationexchange.org
|
|
Real-time evaluation can be particularly useful for advocacy and policy change efforts that evolve without a predictable script. To make informed decisions, advocates need timely answers to the strategic questions they regularly face, and evaluation can help fill that role. Five years ago, the David and Lucile Packard Foundation established a grant making program to achieve an ambitious policy goal: voluntary, quality preschool for all three- and four-year-olds in California by 2013. Because the Foundation knew from the program's start that the process for achieving this goal would unfold without a predictable script, it invested in an evaluation conducted by Harvard Family Research Project that emphasized continuous feedback and learning. This session will describe how the evaluation was designed to support Foundation learning and strategy development (using innovative methods created specifically for this purpose), and how and why the design has evolved over time.
| |
|
Real-time Evaluation to Inform Strategic Grantmaking
|
| Arron Jiron,
David and Lucile Packard Foundation,
ajiron@packard.org
|
| Bernadette Sangalang,
David and Lucile Packard Foundation,
bsangalang@packard.org
|
|
Philanthropy has no absolute measure of success. The David and Lucile Packard Foundation's approach to evaluation is guided by three main principles: (1) success depends on a willingness to solicit feedback and take corrective action when necessary; (2) improvement should be continuous, and we should learn from our mistakes; and (3) evaluation should be conducted in partnership with those doing the work in order to maximize learning and minimize the burden on grantees. At the Packard Foundation, there has been a general shift from evaluation for proof ("Did the program work?") to evaluation for program improvement ("What did we learn that can help us make the program better?"). This session will describe how evaluation fits into the entire grantmaking cycle and discuss our experience with both evaluations from the Foundation's perspective.
| |
|
Session Title: Issues in Multisite, Multilevel Evaluations of Science, Technology, Engineering, and Mathematics (STEM) Education Programs
|
|
Panel Session 381 to be held in Room 107 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Samuel Held,
Oak Ridge Institute for Science and Education,
sam.held@orau.org
|
| Discussant(s):
|
| Samuel Held,
Oak Ridge Institute for Science and Education,
sam.held@orau.org
|
| Abstract:
A lack of funding in many government agencies has forced program managers to find ways to leverage resources through multilevel and multisite programs. Evaluating these multilevel and/or multisite programs brings a host of challenges involving the coordination of multiple resources. Evaluators charged with carrying out such complicated evaluations must uphold professional standards while resolving these challenges. This panel discussion will cover issues in evaluating multilevel, multisite STEM education programs while adhering to the Program Evaluation Standards set forth by the Joint Committee on Standards for Educational Evaluation. The presenters will outline some of the challenges they have faced in implementing their evaluation plans and discuss how the standards of utility, feasibility, propriety, and accuracy have shaped their handling of these issues.
|
|
Issues in Instrumentation and Data Collection for the Evaluation of a Multisite, Multilevel National Workforce Development Endeavor
|
| Pamela Bishop,
Oak Ridge Institute for Science and Education,
pbaird@utk.edu
|
|
The Department of Energy’s Office of Workforce Development for Teachers and Scientists funds five national experiential learning programs, which the presenter has been charged with evaluating. The scope of the evaluation includes both the five federally funded, multisite science education programs and a metaevaluation of common enterprise outcomes and impacts across all programs. The presenter will discuss her experience developing and implementing a multilevel, multisite evaluation plan for this large-scale program. The presentation will include her experiences developing data collection tools that can be used across programs yet remain sensitive enough to capture information at the local level, as well as the methods used to align resources and goals across levels and sites to ensure that accurate and reliable data collection (and subsequent analysis) takes place in a timely and efficient manner throughout the evaluation.
|
|
|
Multilevel Evaluation of the Newly Established Tennessee Governor's Academy for Mathematics and Science
|
| Amy Sullins,
Tennessee Governor's Academy for Mathematics and Science,
acsullins@gmail.com
|
|
The Tennessee Governor's Academy for Mathematics and Science (TGA), a residential specialty experience for talented juniors and seniors, is in its second year of operation and in the implementation phase of its multilevel evaluation plan. TGA is a component of Governor Phil Bredesen's P-16 education network; numerous resources are being used to educate and nurture this population of talented students, with the goals of graduates choosing STEM majors and, later, contributing to the STEM community. The multilevel plan includes evaluation of individual students, the school, and TGA partners, including Oak Ridge National Laboratory (where TGA students are engaged in scientific investigation with scientist mentors) and the University of Tennessee (where the senior class pursues math and science coursework), for program effects. The TGA multilevel evaluation model will be presented and discussed. Challenges of implementation will also be discussed, including the formation of a new organization, studying a minor population, coordinating data collection, and the evolution of the evaluation plan.
| |
|
Session Title: Leveraging Technology to Support Current Students and Reach Alumni
|
|
Multipaper Session 384 to be held in Room 113 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
|
|
Sponsored by the Integrating Technology Into Evaluation TIG
|
| Chair(s): |
| Marcie Bober-Michel,
San Diego State University,
bober@mail.sdsu.edu
|
|
Current and Lapsed Membership: Evaluating a University Alumni Database
|
| Presenter(s):
|
| Kristin Pankey,
Southern Illinois University at Carbondale,
knpankey@siu.edu
|
| Meghan Lowery,
Southern Illinois University at Carbondale,
mrlowery@siu.edu
|
| Joel Nadler,
Southern Illinois University at Carbondale,
jnadler@siu.edu
|
| Abstract:
Applied Research Consultants (ARC), a student-run consulting firm at Southern Illinois University-Carbondale, was contracted to evaluate a Midwestern university’s alumni population of current and lapsed members. A survey was constructed and administered both to alumni with a current membership and to alumni who had allowed their memberships to lapse. The purpose of the evaluation was to assess the sampled alumni's opinions of the Alumni Association’s communication with members and the perceived value of membership. The evaluators had to overcome difficulties arising from participants’ unfamiliarity with online survey methods; nevertheless, the data provided the client with extensive feedback, including praise for the client’s current system and suggestions for further improvement. This presentation is intended to serve as an example for evaluators at other universities that are able to conduct a similar evaluation using an alumni database.
|
|
Leveraging Technology to Manage Stakeholder Involvement in Participatory Evaluation
|
| Presenter(s):
|
| Christopher DeLuca,
Queen's University,
2cd16@queensu.ca
|
| Laura April McEwen,
Queen's University,
5lam5@queensu.ca
|
| Abstract:
Growing graduate student populations in Canadian post-secondary education have fostered interest among institutional service providers in the support needs of this group. In response, nine service providers at a mid-sized Ontario university secured funding to examine the needs of the graduate student body. Given the diverse group of service providers, considerable effort was invested in collaboratively establishing the evaluation's focus. A participatory evaluation approach was adopted, and innovative uses of technology were leveraged to facilitate stakeholder participation. Discussion will center on how such innovative practice can offset some of the considerable time commitment that enlarged circles of stakeholder participation entail. Guidelines for practice are drawn from research in the area of computer-mediated communication.
|
| |