|
Session Title: Learning to Promote Quality Over Ideology for Methodology
|
|
Panel Session 301 to be held in International Ballroom A on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Presidential Strand
and the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| George Julnes,
Utah State University,
gjulnes@cc.usu.edu
|
| Discussant(s):
|
| Lois-ellin Datta,
Datta Analysis,
datta@ilhawaii.net
|
| Abstract:
Much of the controversy over methodology within AEA over the past several years has been driven by ideology. Even efforts to examine the implications of using different methods become involved in ideological debates. This panel offers perspectives on how we might learn from previous debates and move forward in promoting a commitment to quality in methodology that at least tempers, if not transcends, ideological conflicts. The goal is to strengthen the contribution of evaluation to improving society.
|
|
Missing in Action (MIA) in the Qualitative Versus Quantitative Wars
|
| Henry M Levin,
Columbia University,
hl361@columbia.edu
|
| Douglas Ready,
Columbia University,
ready@exchange.tc.columbia.edu
|
|
The war between advocates of qualitative vs. quantitative methods has led to an excess of vehemence and ideology. This presentation will attempt to deconstruct some of that rhetoric by demonstrating that the combat routinely sacrifices a major victim: quality in both types of studies. An attempt will be made to show the nature and source of the collateral damage to research quality inflicted by the bellicose advocacy of the qualitative-quantitative conflict.
|
|
|
Establishing Criteria for Rigor in Non-Randomized and Qualitative Outcome Designs
|
| Debra Rog,
Westat,
debrarog@westat.org
|
|
Evaluators involved in conducting quantitative outcome studies, especially those incorporating randomized designs, have established criteria for assessing the extent to which a study has maintained rigor and has adequate internal validity. Departures from the randomized study have less agreed-upon criteria for determining whether the studies are sufficiently rigorous to support statements of causality. As Boruch has demonstrated, quasi-experimental designs often do not replicate the findings from randomized studies due to their vulnerability to threats to validity. However, what strategies do we have for determining when a quasi-experiment produces results that approach the validity of a randomized study? In other words, what methodological improvements are sufficient for bolstering a study's validity, and how can we assess or prove that? Similarly, what criteria and standards exist for qualitative studies to ascertain their accuracy and precision? This paper will review what strategies exist for assessing the adequacy of non-randomized outcome designs; discuss work underway to improve our ability to judge the quality and rigor of different designs, as well as strategies for accumulating evidence across nonrandomized designs such as single-subject designs; and outline steps toward establishing bases for judging the validity and rigor of nonrandomized designs.
| |
|
The Renaissance of Quasi-Experimentation
|
| William Shadish,
University of California, Merced,
wshadish@ucmerced.edu
|
|
Quasi-experimentation has long been the stepchild of the experimental literature. Even Donald Campbell, who invented the term quasi-experimentation, said he preferred randomized experiments when they were feasible and ethical. During the last 10 years in particular, the randomized experiment has come to dominate applications of experimental methodology to find out what works. More recently, however, quasi-experimentation has experienced something of a renaissance. That renaissance is due to two primary developments. First is the increasing use of the regression discontinuity design. That design has long been known to provide unbiased estimates of effects under some conditions, but it has languished mostly out of sight and out of mind since its invention 40 years ago. More recently, however, RDD has become popular among economists, who have revitalized both its use and its analysis to provide better estimates of the effects of interventions. The second development is the use of propensity scores in nonrandomized experiments to provide better estimates of effects. For both of these developments, empirical studies suggest that they can provide estimates of effects that are as good as those from randomized experiments. While we still have much to learn about the conditions under which this optimistic conclusion might hold, it seems likely that quasi-experimental methodology and analysis will begin to play a much stronger role in providing evidence about what works than has been the case in the last several decades.
| |
|
Working Towards a Balance of Values in Promoting Methods in Evaluation
|
| George Julnes,
Utah State University,
gjulnes@cc.usu.edu
|
|
Promoting quality in methods is important in evaluation, but it is so because we believe that the use of quality methods will lead to better social outcomes. Such a benevolent linkage, however, demands much with regard to the effective functioning of our evaluation community. In addition to the necessary theoretical and methodological developments, we also need the pragmatic skills to advance our craft. This paper addresses these issues and offers suggestions for promoting methodology in support of social betterment.
| |
|
Session Title: Understanding Culturally and Contextually Responsive Evaluation Through the Experiences of a Multi-year Implementation Project
|
|
Panel Session 302 to be held in International Ballroom B on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Chair(s): |
| Stafford Hood,
Arizona State University,
stafford.hood@asu.edu
|
| Discussant(s):
|
| Jennifer Greene,
University of Illinois at Urbana-Champaign,
jcgreene@uiuc.edu
|
| Abstract:
This panel provides an interactive session devoted to culturally responsive evaluation and the significance of cultural context to evaluation theory and practice. Through a multi-year NSF-funded project, an advisory board, staff, and school-based teams worked together. The goal was to examine and document how culturally responsive evaluation might be utilized in assessing a school program when teams of teachers and a principal were supported in developing and implementing an evaluation plan. Schools involved reflected high percentages of Native American, Hispanic, or African American students and were engaged in developing school improvement plans as required under NCLB. Presentations discuss the meanings of 'responsiveness' in this project and what was learned that informed the development of a conceptual framework for culturally responsive evaluation. Discussion will include capacity building in a K-12 setting, the many meanings of culture, and lessons learned from an effort to engage teachers and empower them through evaluation.
|
|
Conceptual Designs and Practical Issues: Lessons From the Implementation of Culturally and Contextually Responsive Evaluation
|
| Melvin Hall,
Northern Arizona University,
melvin.hall@nau.edu
|
| Jennifer Greene,
University of Illinois at Urbana-Champaign,
jcgreene@uiuc.edu
|
|
This presentation will chronicle the contributions of the RCEI experiences to our understanding of contextually and culturally responsive evaluation. The project was envisioned as an opportunity to develop a major conceptual statement or framework that would be grounded in experiences working in the field. The framework that emerged engages both understandings and unresolved questions; each will be explored in this presentation. For example, in pursuing culturally responsive evaluation:
What did culture and responsive mean in each of the sites?
As evaluators seek to be more engaged and open to cultural context, how should they handle the ethical dilemma regarding when to and how to impact dysfunctional situations?
Does culturally responsive evaluation practice uniformly elevate the impact, efficacy, or validity of the evaluation?
Is there an inherent relationship problem if the culturally responsive evaluator is seen as the 'evaluator' given the impact this label has on power relationships?
|
|
|
Relevance of Culture in Evaluation Institute Lessons Learned: Implementing School-based, Culturally and Contextually Responsive Evaluation Projects
|
| Michael Wallace,
Howard University,
mwallace@capstoneinstitute.org
|
| Stafford Hood,
Arizona State University,
stafford.hood@asu.edu
|
|
Implementation of the RCEI project, while creating a unique set of anticipated and unanticipated challenges, also resulted in numerous lessons being learned by all involved. In fact, a few of the more acute and unanticipated lessons were learned by the principal investigators, evaluation consultants, and members of the advisory board. The experiences of the evaluation consultants who provided direct technical assistance to the RCEI schools in developing their evaluation plans, implementing those plans, and preparing the final reports provided considerable insight into the unique circumstances of implementing a project such as RCEI. The presentation will provide insights resulting from the implementation of the RCEI project in two participating schools. These insights are based on the experiences and interactions of one of the evaluation consultants in providing direct technical assistance to two of the schools over a period of four years.
| |
|
Evaluation Influence and Cultural Context
|
| Karen Kirkhart,
Syracuse University,
kirkhart@syr.edu
|
| Melvin Hall,
Northern Arizona University,
melvin.hall@nau.edu
|
|
What are the implications of Culturally Responsive Evaluation (CRE) for evaluation influence? This presentation has two intents. First, it maps the impact of evaluations undertaken under RCEI against an Integrated Theory of Influence (ITI); i.e., examining both intended and unintended influence resulting from both the CRE process and the evaluative data produced by the project implementation. Second, it reflects on what the RCEI experience teaches us about the nature of influence and how influence intersects with culture. This is a significant contribution to the knowledge base of our profession. Despite advances in linking culture to evaluation theory and method, little explicit attention has been paid to linking culture to evaluation use/influence. This presentation complements the panel presentations on the theory underlying CRE and the implications of the RCEI experience for evaluation practice.
| |
|
Session Title: Accountability, Democracy and Representation in the Global Evaluation Context
|
|
Panel Session 303 to be held in International Ballroom C on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Qualitative Methods TIG
|
| Chair(s): |
| Leslie Goodyear,
Education Development Center Inc,
lgoodyear@edc.org
|
| Discussant(s):
|
| Robert Stake,
University of Illinois at Urbana-Champaign,
stake@uiuc.edu
|
| Abstract:
In this panel discussion, we will present a general analysis that lays out an understanding of accountability, democratic processes and evaluation. Each panelist will then address this framework from his or her own perspective, highlighting questions of evaluator roles and responsibilities, public deliberation, control, accountability structures, representations of programs and people and evaluation's influence on policy. In addition, the panelists will discuss the role qualitative methods can play in evaluations within this framework. The structure for this session is brief presentations followed by panel discussion, including comments from the discussant, and open conversation with the audience.
|
|
Accountability Structures and Evaluator Roles
|
| Lehn Benjamin,
George Mason University,
lbenjami@gmu.edu
|
|
Lehn Benjamin will draw on her research into performance monitoring systems and the concept of risk to address issues of performance measurement frameworks, accountability systems and democratic processes as they relate to domestic governmental and nonprofit programs. She will pose questions regarding the role of evaluators and whether these accountability demands require evaluators to pay attention to new and different issues within evaluation contexts.
|
|
|
Global Accountabilities, the New Public Management and the Millennium Development Goals
|
| Saville Kushner,
University of the West of England,
saville.kushner@uwe.ac.uk
|
|
Saville Kushner brings an international development perspective to his comments, addressing questions of control, citizen deliberation, rights-based approaches to accountability, and the purposes of evaluation within accountability frameworks. He will also describe his ideas regarding a rights-based approach to accountability.
| |
|
The Role of Representation in Democratic Accountability and Evaluation
|
| Leslie Goodyear,
Education Development Center Inc,
lgoodyear@edc.org
|
|
Leslie Goodyear will draw on her work addressing representation in evaluation--how people's lives and experiences are represented in and through evaluations--to highlight the need for forms of representation in evaluation that reflect the complex contexts in which people live and work.
| |
|
Session Title: Organizational Learning and Evaluation Use at the State Level
|
|
Multipaper Session 304 to be held in International Ballroom D on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Evaluation Use TIG
|
| Chair(s): |
| Susan Tucker,
Evaluation and Development Association,
sutucker@sutucker.cnc.net
|
|
Evaluations of High School Exit Examinations: What Have We Learned?
|
| Presenter(s):
|
| Nicki King,
University of California, Davis,
njking@ucdavis.edu
|
| Abstract:
As of June 2006, 23 of the nation's 50 states require that students pass some form of minimum competence test to receive their high school diplomas. Some states have devoted significant resources to evaluating this major new requirement for high school graduation. As a result of these evaluations, schools and citizens have access to information about the impacts and outcomes of these examinations, including the pass rates of various income and racial groups, the number of students who ultimately leave school without diplomas, and the amount of remediation or extra assistance being provided to help students pass the exams. Few, if any, of the evaluations calculate the cost of initiating this policy or follow graduates and non-graduates to determine the ultimate impact of the policy on the subsequent lives of former students. Without this information, it will be very difficult to make an ultimate determination of the value of the exit exam policy as an educational intervention.
|
|
Using Evaluation as a Management Tool: The Experience of the Tennessee State Improvement Grant Evaluation
|
| Presenter(s):
|
| Chithra Perumal,
University of Kentucky,
cperu2@uky.edu
|
| Brent Garrett,
Independent Consultant,
garrett@win.net
|
| Abstract:
State and local programs often view evaluation as an information tool that provides information on efficacy and effectiveness and, at times, supports decision making (Blalock, 1990). But evaluation can also serve as a management tool in these arenas. The process of evaluation can help agencies effectively organize and deliver activities. When evaluation is used as a management tool, evaluators establish working relationships with stakeholders and become key players in program improvement. This is crucial if we want evaluation results to guide program improvement, which is the overall purpose of evaluation. This paper discusses how the evaluation process was used as a management tool to facilitate the implementation of the Tennessee State Improvement Grant initiatives. It also discusses the evaluation steps that served as platforms for enacting program improvement strategies.
|
|
Learning From Local Evaluations: How Math-Partnership Project Evaluations Informed State Policy
|
| Presenter(s):
|
| Helene Jennings,
Macro International Inc,
jennings@macroint.com
|
| Nancy Carey,
Maryland State Department of Education,
ncarey@msde.state.md.us
|
| Abstract:
A significant segment of the education community is focused on improving student performance in math and science. Currently, funding is available for STEM (science, technology, engineering, mathematics) initiatives from a number of sources. The Maryland State Department of Education (MSDE) has been implementing math-science partnership grants through a competitive process to consortia throughout the state, with the requirement that an independent evaluation be incorporated. These evaluations have been conducted by various organizations. Macro International is responsible for evaluating the initiatives of two consortia and will present the important outcomes from this program focused on improving teacher professional development, as well as lessons learned in the evaluation process. The MSDE education specialist overseeing the program will discuss how the findings of the local evaluations have significantly influenced the formulation of the next round of grants competed by the state. The interaction between the evaluation and program planning will be made explicit.
|
| | |
|
Session Title: Critical Reflections: Theory and Practice
|
|
Multipaper Session 305 to be held in International Ballroom E on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Graduate Student and New Evaluator TIG
|
| Chair(s): |
| Bianca Montrosse,
University of North Carolina, Chapel Hill,
montrosse@mail.fpg.unc.edu
|
|
Reflections of Emerging Evaluators: Constructing Evaluation Meaning in Situated Learning Contexts
|
| Presenter(s):
|
| Sallie E Greenberg,
University of Illinois at Urbana-Champaign,
greenberg@isgs.uiuc.edu
|
| A Rae Clementz,
University of Illinois at Urbana-Champaign,
clementz@uiuc.edu
|
| Ana Houseal,
University of Illinois at Urbana-Champaign,
houseal2@uiuc.edu
|
| LaShorage Shaffer,
University of Illinois at Urbana-Champaign,
lshaffe1@uiuc.edu
|
| Abstract:
How did you learn to be an evaluator? This paper presents a dialogic analysis of dimensions of evaluation salient to beginning evaluators as they struggle to make sense of evaluation as a practice. Results from the perspective of an ethnographic researcher studying a graduate practicum course on evaluation methods are presented in dialogue with the reflections, observations, and experiences of a group of students conducting their first “real” evaluation within a situated learning environment. The student narrative reveals their constructions of identity and understandings of “what constitutes a good evaluation,” “how do we start” and “at what point are we ready to join the profession as evaluators.” The researcher's perspective focuses on how these questions are translated into practice and subsequently transformed by practice. The entire discourse is framed around a view of evaluation as moral discourse and practical learning, as applied to learning how to become an evaluator.
|
|
An Investigative Study on Evaluation Theory and Practice Using Conceptualization Method
|
| Presenter(s):
|
| Jie Zhang,
Syracuse University,
jzhang08@syr.edu
|
| Abstract:
Empirical knowledge, as defined by Smith, is experientially based knowledge acquired through formal study (Smith, 1983). This knowledge is constructed into evaluation theories, which explain the nature of evaluation and guide evaluation practice. The link between evaluation theory and practice is still an area of much-needed inquiry (Shadish, Cook, & Leviton, 1991; Smith, 1993). The proposed study answers the calls for more empirical studies on evaluation theory and practice. It intends to explore the dynamic relationships between theory and practice in program evaluation. The assessment methodology, based on mental models, is distinct from that of previous studies, which often adopted survey questionnaires. The study will collect mental models of evaluation theorists and practitioners, and inferences will be drawn on how evaluation theorists and practitioners perceive evaluation differently. The paper concludes with a summary of the study's implications for program evaluation and instructional design.
|
|
The Making of Evaluation: An Inquiry Into the Theory-practice Interaction in Evaluation
|
| Presenter(s):
|
| Jeehae Ahn,
University of Illinois at Urbana-Champaign,
jahn1@uiuc.edu
|
| Abstract:
Building on previous work on the theory-practice relationship in evaluation, a novice evaluator further engages the issue by presenting a narrative in which she recounts her initiation into the world of evaluation through a specific program context, as a way of exploring how the evaluator makes sense of the theory-practice interaction in the field. Positioning the intersection of theory and practice as central to her evaluation, the presenter first delineates important conceptual features of her evaluation theory and then illustrates how they relate to her practical experience in the field, framing her narrative in a way that honors and highlights the complexity and contextuality of the issues that arise from the ever-evolving entanglement of theory and practice.
|
| | |
|
Session Title: What Theory and Research Tell Us About Evaluation Capacity Building
|
|
Multipaper Session 306 to be held in Liberty Ballroom Section A on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| Christina Christie,
Claremont Graduate University,
tina.christie@cgu.edu
|
| Discussant(s): |
| J Bradley Cousins,
University of Ottawa,
bcousins@uottawa.ca
|
|
Navigating Through the Evaluation Capacity Building Literature: A Compass for Future Practice
|
| Presenter(s):
|
| Shanelle Boyle,
Claremont Graduate University,
shanelle.boyle@gmail.com
|
| Hallie Preskill,
Claremont Graduate University,
hallie.preskill@cgu.edu
|
| Abstract:
As a result of the growing interest in evaluation capacity building (ECB) across organizational sectors around the world, the literature in this area has expanded over the last few years. This paper will present a survey of the ECB literature spanning the years 2000-2007 to help attendees develop a better understanding of what is and is not currently known about ECB. The specific populations, sectors, strategies, theoretical frameworks, timing, challenges, and findings that have been proposed, studied, and reported in the ECB literature will be identified, and areas that need further research to advance the field's understanding of how to design and implement effective ECB will be discussed. A handout summarizing the current literature will also be provided.
|
|
Program Evaluations: A Tool to Prevent Organizational Learning Disabilities
|
| Presenter(s):
|
| Bill Thornton,
University of Nevada, Reno,
thorbill@unr.edu
|
| Steve Canavero,
University of Nevada, Reno,
scanavero@gmail.com
|
| Ricky Medina,
Carson City School District,
rmedina@carson.k12.nv.us
|
| Abstract:
Knowledge, organizational learning, and related innovations have become increasingly critical to the successful operation of social organizations. In a complex society, organizational learning is necessary for continuous progress; however, it is often limited by the functions, policies, and structures of the organization itself. Senge (1990) identified learning disabilities that promote continual repetition of common mistakes and/or perpetuation of erroneous thinking. This paper will briefly summarize organizational learning disabilities and discuss how they prevent organizational learning. Specifically, this paper will analyze how systemic approaches, systems thinking, and program evaluations can reduce learning disabilities and promote organizational learning. The interactions among evaluations, existing knowledge, developed knowledge, effective communication, and leadership will be illustrated. Methods by which organizations might develop structures and procedures to promote learning through effective evaluations will be presented, and specific examples will be provided.
|
|
What Organizational Characteristics Facilitate Using Evaluation for Organizational Learning in North Carolina's Nonprofit Sector?
|
| Presenter(s):
|
| Deena Murphy,
National Development and Research Institutes Inc,
murphy@ndri-nc.org
|
| Abstract:
Research on the utilization of evaluation suggests that organizational characteristics significantly influence the extent to which evaluation findings are used to support learning and decision-making. Despite this, little empirical research has looked at the intercorrelations between use of evaluation and organizational characteristics, such as stakeholder engagement, supportive leadership and a learning climate. Using data gathered from 284 nonprofits across North Carolina, this research uses a path analysis model to examine the multiple organizational factors associated with the use of evaluation for organizational learning. How do factors such as leadership and a learning climate impact organizational use of evaluation? How does level of stakeholder engagement in the evaluation process relate to program and organizational use of evaluation? While these questions will not be definitively answered, the goal is to provide a useful foundation for future research into issues of evaluation and organizational learning in the nonprofit sector and beyond.
|
| | |
|
Session Title: Learning From the American Evaluation Association Topical Interest Groups Proposal Review Standards
|
|
Think Tank Session 307 to be held in Liberty Ballroom Section B on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Daniela C Schroeter,
Western Michigan University,
daniela.schroeter@wmich.edu
|
| Discussant(s):
|
| Chris Coryn,
Western Michigan University,
christian.coryn@wmich.edu
|
| Robert Hanson,
Health Canada,
robert_hanson@hc-sc.gc.ca
|
| Ann Maxwell,
United States Department of Health and Human Services,
ann.maxwell@oig.hhs.gov
|
| Martha Ann Carey,
Azusa Pacific University,
mcarey@apu.edu
|
| Janice Noga,
Pathfinder Evaluation and Consulting,
jan.noga@stanfordalumni.org
|
| Rita O'Sullivan,
University of North Carolina, Chapel Hill,
ritao@email.unc.edu
|
| Elmima Johnson,
National Science Foundation,
ejohnson@nsf.gov
|
| Emiel W Owens Jr,
Texas Southern University,
owensew@tsu.edu
|
| Liesel Ritchie,
Western Michigan University,
liesel.ritchie@wmich.edu
|
| Eunice Rodriguez,
Stanford University,
er23@stanford.edu
|
| John Nash,
Open Eye Group,
john@openeyegroup.com
|
| James Sass,
LA's BEST After School Enrichment Program,
jim.sass@lausd.net
|
| Nino Saakashvili,
Horizonti Foundation,
nino.adm@horizonti.org
|
| Ann Zukoski,
Oregon State University,
ann.zukoski@oregonstate.edu
|
| Heather Boyd,
Virginia Tech,
hboyd@vt.edu
|
| Susan Kistler,
American Evaluation Association,
susan@eval.org
|
| Nicole Vicinanza,
American Evaluation Association,
nvicinanza@jbsinternational.com
|
| Howard Mzumara,
Indiana University Purdue University Indianapolis,
hmzumara@iupui.edu
|
| Otto Gustafson,
Western Michigan University,
ottonuke@yahoo.com
|
| Marcie Bober,
San Diego State University,
bober@mail.sdsu.edu
|
| Tom McKlin,
Georgia Institute of Technology,
tom.mcklin@gatech.edu
|
| Emmalou Norland,
Institute for Learning Innovation,
norland@ilinet.org
|
| Neva Nahan,
Wayne State University,
n.nahan@wayne.edu
|
| Denice Cassaro,
Cornell University,
dac11@cornell.edu
|
| Eric Barela,
Los Angeles Unified School District,
eric.barela@lausd.net
|
| Abstract:
Annually, the American Evaluation Association (AEA) calls for submissions to present at its conference. In 2006 alone, more than 1,000 submissions were received and evaluated by AEA's Topical Interest Groups (TIGs). More than two thirds of all proposals submitted to AEA 2006 were reviewed by the TIGs involved in this session. Their review processes reflect on TIG evaluation practices, while posing important questions about standards the professional evaluation community applies to its conference submissions. Reflecting on experiences with reviewing AEA submissions, related challenges and opportunities, expectation of TIGs, and the current AEA program, this Think Tank asks: Do we hold proposals to the same or similar standards across all TIGs? Should we? Are our standards too high or too low? Should reviews be blinded, or not? How do we deal with tensions between inclusion and quality? Do submitters receive constructive feedback? How can reviews be conducted most efficiently?
|
|
Session Title: Putting the Pieces Together: Making Inferences in a Complex Multimodal Evaluation
|
|
Panel Session 308 to be held in Mencken Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Government Evaluation TIG
|
| Chair(s): |
| Sara Speckhard,
United States Citizenship and Immigration Services,
sara.speckhard@dhs.gov
|
| Discussant(s):
|
| Rebecca Gambler,
United States Government Accountability Office,
gamblerr@gao.gov
|
| Abstract:
This session will consist of a panel discussion of how best to collect, integrate, and synthesize information obtained from a complex multi-modal evaluation. The evaluation of the Web-based Basic Pilot program funded by the United States Citizenship and Immigration Services will be used to illustrate issues, such as: how best to use qualitative data collection techniques to frame questions to be used in quantitative data collection; challenges in integrating and synthesizing quantitative and qualitative data collected in site visits; and how to synthesize findings from the various data sources into coherent and practical answers to the questions of interest to the governmental policymakers funding the evaluation. Time will be allowed for audience members to volunteer information about their experiences with other complex multi-modal evaluations.
|
|
Using Focus Groups to Shape Quantitative Data Collection
|
| Denise Glover,
Westat,
gloverd1@westat.com
|
|
Dr. Glover has extensive experience in conducting focus groups and other qualitative research efforts. For the evaluation of the Web Basic Pilot Program, she was the team leader for qualitative data collection efforts, including focus groups. She conducted the focus groups at the start of the evaluation and provided input to the quantitative evaluation team on questions to be asked during the quantitative data collection activities. Her talk will address how to design focus groups to maximize their usefulness and how to provide input to the quantitative team responsible for designing quantitative data collection instruments.
|
|
|
Integrating Qualitative and Quantitative Data Collected During Site Visits
|
| Molly Hershey-Arista,
Westat,
mollyhershey-arista@westat.com
|
|
For the evaluation of the Web Basic Pilot Program, Ms. Hershey-Arista actively participated in all phases of the site visits, including instrument design, conducting pretest site visits, training interviewers, and synthesizing the qualitative and quantitative information from the site visits (which included employer interviews, employee interviews, and abstracting information from record reviews) to provide descriptions of the employers visited. She also prepared a summary of what information was learned from the site visits to assist in answering each of the policy questions raised by the federal government. She will talk about her experiences during this process.
| |
|
Putting the Pieces Together in the Evaluation Report
|
| Carolyn Shettle,
Westat,
carolynshettle@westat.com
|
|
Dr. Shettle has extensive experience in research design and report writing. For the evaluation of the Web Basic Pilot Program, she served as Project Director and had lead responsibility for writing the final report. She will talk about the challenges faced in writing reports that require synthesizing sometimes contradictory findings in order to maximize their usefulness to the government and other policymakers.
| |
|
Session Title: Three Perspectives on Using Evaluation for Alternative Teacher Preparation: Insights From the Evaluator, the Policymaker, and the Program Implementer
|
|
Panel Session 309 to be held in Edgar Allen Poe Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Edith Stevens,
Macro International Inc,
edith.s.stevens@orcmacro.com
|
| Abstract:
This panel will showcase how evaluation was used to influence state policy about alternative teacher preparation and to help design and implement sustainable models. The State of Maryland funded the pilot of eight alternative preparation programs. Each program is managed by a partnership between an institute of higher education and a local school system. The project evaluator, Macro International, collaborated with each partnership and the State to develop a reporting template to capture key inputs and outcomes from each program. Each program also conducted its own evaluation and provided data to the evaluator.
Panelists will address the following:
-How do you collect common data from uniquely structured programs?
-How do you use evaluation findings to improve program design and implementation?
-What does the data say about the strengths and weaknesses of alternative preparation programs?
-How do we use what we learn to design future programs?
|
|
Using Evaluation for Alternative Teacher Preparation: The Evaluator's Perspective
|
| Edith Stevens,
Macro International Inc,
edith.s.stevens@orcmacro.com
|
|
Macro International, Inc. is the external evaluator for the Maryland alternative teacher preparation pilot projects. A critical evaluation task was the collection and dissemination of project data. Macro collaborated with stakeholders to develop an electronic collection instrument to efficiently gather meaningful information from these diverse projects. Findings are shared periodically to inform program implementation. Additionally, at the end of the project, a summative report will be provided showcasing best practices across projects to help stakeholders determine next steps. Macro has been conducting evaluations in the education field for over a decade, particularly focusing on teacher preparation. Specific experience includes: technical assistance to the national Preparing Tomorrow's Teachers to Use Technology (PT3) office; evaluation of several teacher preparation programs funded by the US Department of Education, including a statewide PT3 project led by the Maryland State Department of Education; and evaluation of two mathematics teacher preparation programs in New York City.
|
|
|
Using Evaluation for Alternative Teacher Preparation: The Policymaker's Perspective
|
| Michelle Dunkle,
Maryland State Department of Education,
mdunkle@msde.state.md.us
|
|
Alternative preparation programs play a critical role in addressing the teacher shortage in Maryland. Through the Transition-To-Teaching and Troops-To-Teachers federal grants, Maryland has been able to create new alternative preparation programs and strengthen existing initiatives. The Maryland project is guided by the principle that alternative preparation programs should rise to the level of State Approved Program status when states ensure high-quality program performance, just as they do for traditional programs. Maryland also seeks to promote greater recognition of these programs across the country. Maryland is currently piloting eight alternative preparation programs. Evaluation of these projects is imperative for two reasons: to help improve program implementation and to identify promising practices across programs to direct next steps for the state. Maryland also hopes that evaluation findings will promote dialogue among states and teacher education providers who seek to deepen conversations about alternative program quality.
| |
|
Using Evaluation for Alternative Teacher Preparation: The Program Implementer’s Perspective
|
| Roger Schulman,
The Maryland Practitioner Teacher Program,
rschulman@tntp.org
|
|
The Maryland Practitioner Teacher Program (MPTP) is a partnership between the Baltimore City Public School System (BCPSS) and The New Teacher Project (TNTP). The MPTP targets career changers and recent graduates to become teachers for hard-to-staff schools. As an alternative to traditional methods of teacher preparation, MPTP relies on an intensive pre-service training institute. Once participants begin teaching, they participate in a pedagogy-focused content seminar series to leverage their existing content knowledge and hone their skills in the design and delivery of high-quality, standards-based instruction. Project partners developed an internal evaluation plan to help them answer questions that are critical to determining the success of the program, based on data collected on the following: participant satisfaction with the program, the effectiveness of the teachers who are placed in the classroom, and the retention rates of the new teachers. Partners believe that evaluation findings will be critically important because they will be used to make program adjustments.
| |
|
Session Title: Policy Evaluation: Learning About What, When and For Whom?
|
|
Panel Session 310 to be held in Carroll Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| John Sherman,
Headwaters Group,
jsherman@headwatersgroup.com
|
| Abstract:
This session will offer lessons learned about approaches to, challenges of, and the rationale for undertaking evaluations of policy and advocacy efforts. The three presenters reflect on their hands-on experience and hard-earned lessons. They present new ways they are approaching policy change and advocacy evaluations: ways that overcome some of the challenges, help them better understand which aspects of policy evaluation are most critical for which audiences, and show how the evaluations can provide relevant learnings for each.
|
|
Let's Get Real About Real-Time Reporting
|
| Julia Coffman,
Harvard Family Research Project,
jcoffman@evaluationexchange.org
|
|
In recent years, the term 'real time' has infiltrated the evaluation world. Evaluators use it to describe their reporting approaches, meaning that they report regularly so their work can inform ongoing learning and strategy decisions. Real-time reporting is particularly important for advocacy efforts, which often evolve without a predictable script. To make informed decisions, advocates need timely answers to the strategic questions they regularly face. But while real-time evaluation reporting to inform their responses makes good sense in theory, it can be difficult to implement successfully in practice. Even when regular reporting takes place, given the rapidly-changing policy context, its success in informing advocacy strategy can be hit or miss. This presentation will offer ideas on successful real-time evaluation reporting for advocates and how to create flexible evaluation plans that can adapt to changing learning needs.
|
|
|
Learning During Intense Advocacy Cycles
|
| Ehren Reed,
Innovation Network Inc,
ereed@innonet.org
|
|
Through a multi-year evaluation of a collaborative effort to change national immigration policy, Innovation Network has employed creative methodological approaches to efficiently capture and manage the large amounts of data generated and to effectively synthesize key learnings about how advocates gain access to, build relationships with, and influence policymakers. Innovation Network will discuss the environmental and contextual factors that posed challenges to employing traditional data collection methods, and then describe a specific focus group protocol that we designed to follow the peaks and valleys of the policy advocacy cycle. The resulting Intense Period Debrief Protocol serves to foster learning by eliciting qualitative information from a group of key players shortly after a policy window (and the inevitably corresponding period of intense advocacy activity) occurs.
| |
|
Accountable Learning in Policy Evaluation: Politics and Practice
|
| John Sherman,
Headwaters Group,
jsherman@headwatersgroup.com
|
|
The policy advocacy landscape is full of complexity. The policy focus (legislative, regulatory, and legal), the scale at which the work occurs (local, state, federal, or international), the time period over which it unfolds (months, years, or even decades), the capacity of the groups, and the significant number of unforeseeable factors affecting the work are some of its notable features. With several cluster-level policy evaluations underway or completed, and years of experience as policy advocates, Headwaters offers its observations on effective evaluation approaches in this dynamic landscape, approaches that also help the target audience(s) determine which aspects of the lessons identified in the evaluation are most important for them, and for which they should be accountable.
| |
|
Session Title: Logic Models are Alive and Well: New Applications in the Health Field
|
|
Multipaper Session 311 to be held in Pratt Room, Section A on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Kathryn E Lasch,
Mapi Values,
kathy.lasch@mapivalues.com
|
|
Using Logic Models as Learning Tool: Practical Lessons From Evaluating Health Programs
|
| Presenter(s):
|
| Robert LaChausse,
California State University, San Bernardino,
rlachaus@csusb.edu
|
| Abstract:
A fundamental issue in program evaluation is learning to uncover the links between program activities and program outcomes. Evaluators have been encouraged to involve stakeholders in developing a logic model that links program activities to anticipated results. An innovative approach to developing logic models will be demonstrated that can be used by program staff and evaluators to develop useful logic models in a wide variety of health and human service programs. Logic models can be useful to evaluators in helping to focus evaluation questions, identify programmatic theory, and increase organizational learning. This presentation will show participants how the use of logic models in evaluation can facilitate learning in organizations and increase evaluators' competency in developing and using logic models, using an example from an evaluation of an ethnically diverse community-based program.
|
|
From Research to Practice: Measuring the Impact of Health Information Programs
|
| Presenter(s):
|
| Tara Sullivan,
Johns Hopkins University,
tsulliva@jhsph.edu
|
| Saori Ohkubo,
Johns Hopkins University,
sohkubo@jhsph.edu
|
| Abstract:
Health information programs aim to reach target audiences with relevant, evidence-based information that will inform policy and improve program quality and professional practice. Yet measuring the impact of these types of programs continues to be a challenge, in part, because they have not been guided by a comprehensive logic model that links health information products and services to the achievement of health outcomes. To assess the effectiveness of programs, evaluators need to be able to identify, define and measure key program components. To that end, we present an original logic model that shows how health information inputs, processes and outputs logically link to one another to attain outcomes at multiple levels. Using a common framework, evaluators can systematically measure discrete program components, test and establish causal links between them, and help advance an understanding of how to produce effective information programs that facilitate the uptake of evidence into practice.
|
|
Evaluating at the Cross-project or Initiative Level: The Case of Communities First in California
|
| Presenter(s):
|
| Ross Conner,
University of California, Irvine,
rfconner@uci.edu
|
| Kathy Hebbeler,
SRI International,
kathleen.hebbeler@sri.com
|
| Diane Manuel,
The California Endowment,
dmanuel@calendow.org
|
| Abstract:
Since 1998, The California Endowment's CommunitiesFirst program has awarded over 1,000 grants and hundreds of millions of dollars to diverse California communities to define 'health' broadly, to select issues, and to work on them in ways relevant to each community. We have conducted a cross-project evaluation (which we term a 'strategic review and assessment') of this multi-project initiative and will discuss highlights from it in three areas: methodology, results, and use, highlighting a salient aspect of each. For methodology, we will describe a logic-model development technique that, in keeping with the conference's theme, results in instant learning by the participants. For the results topic, we will share the two most notable findings we identified for successful community-based health promotion. For the use topic, we will highlight the importance of attending to the intended users' context and changes in that context.
|
| | |
|
Session Title: Using Case Studies to Teach the American Evaluation Association Guiding Principles: An Introduction to the Guiding Principles Training Package
|
|
Skill-Building Workshop 312 to be held in Pratt Room, Section B on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Jules M Marquart,
Centerstone Community Mental Health Centers Inc,
jules.marquart@centerstone.org
|
| Dennis Affholter,
Affholter and Associates,
thedpa@yahoo.com
|
| Scott Rosas,
Nemours Health and Prevention Services,
srosas@nemours.org
|
| Abstract:
This session is a Train the Trainers workshop on how to use the AEA Guiding Principles training package developed by the AEA Ethics Committee. The package helps evaluators understand and use the revised Guiding Principles for Evaluators (GP) in the ethical practice of evaluation. We will demonstrate how to conduct an actual training workshop using a case study approach. Several case studies, based on actual evaluations and representing different types of evaluations, will be discussed to elicit ethical issues or dilemmas faced in the evaluation. The workshop uses case analysis, presentations, and small and large group discussion to introduce participants to the components of the GP training package. Participants will receive and be trained in the use of the materials that comprise the package (i.e., PowerPoint presentation and notes pages, case studies, case study worksheets, Facilitator's Guide, supplemental reading on ethics, evaluation form, etc.) that are available on the AEA website. This workshop is useful to any evaluator but especially to anyone who might conduct a training workshop on the revised AEA Guiding Principles, including faculty teaching evaluation, Local Affiliate program chairs, foundation staff, and others.
|
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Developing Frameworks for Evaluating Knowledge Management Initiatives |
|
Roundtable Presentation 313 to be held in Douglas Boardroom on Thursday, November 8, 9:35 AM to 11:05 AM
|
| Presenter(s):
|
| Thomas E Ward,
United States Army Command and General Staff College,
tewardii@aol.com
|
| Abstract:
Evaluating the effectiveness of knowledge management initiatives in a variety of organizations requires a framework for identifying and defining the different perspectives various stakeholders have on “Knowledge Management.” A three-tier “Knowledge Management Domain Model” provides just such a framework, enabling different aspects of knowledge management to be considered and evaluated from appropriate perspectives and with applicable tools. Using a three-tier domain model allows consideration of an infrastructure layer with familiar tools like Quality of Service (QOS) parameters. It also enables the consideration of an information management layer in which quantitative measurements are most appropriate. Finally, consideration of a true knowledge management layer requires a mix of quantitative and qualitative methods. Integration of these three perspectives enables identification of the areas where KM initiatives are meeting, exceeding, or falling short of expectations, and what to do to reinforce success and apply corrective measures to shortfalls.
|
| Roundtable Rotation II:
The Role of Evaluation in Business Intelligence |
|
Roundtable Presentation 313 to be held in Douglas Boardroom on Thursday, November 8, 9:35 AM to 11:05 AM
|
| Presenter(s):
|
| Wes Martz,
Western Michigan University,
wes.martz@wmich.edu
|
| Abstract:
The transdisciplinary nature of evaluation allows for its logic and methodology to be expanded beyond traditional social science applications to corporate settings and other complex environments. In results-oriented and performance driven organizations, strategic evaluation can enhance current business performance management systems with its ability to consider a program's intermediate and longer-term outcomes, measure program implementation, measure unintended outcomes, synthesize measurements, assess the cost-effectiveness of current strategies, attribute causation, and potentially include recommendations on how to improve performance. This roundtable discussion explores the roles evaluation can play in business environments with respect to improving decision-making and business intelligence systems. At the conclusion of the discussion, participants will have a deeper understanding of evaluation's roles in business strategy and decision-making, and insight into how evaluation can add value to contemporary management systems.
|
|
Session Title: There's More Than One Way to Skin a Cat: Cost Effective Online Surveying and Evaluation
|
|
Demonstration Session 314 to be held in Hopkins Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Integrating Technology Into Evaluation TIG
|
| Presenter(s): |
| Cheryl Cook,
United States Department of State,
cookcl@state.gov
|
| Abstract:
E-GOALS, the Exchange Grantee Outcome Assessment Linkage System, is the Bureau of Educational and Cultural Affairs' online performance measurement system. It provides critical data and analysis to the Bureau's partner organizations, State Department program managers, Educational and Cultural Affairs leadership, OMB, Congress, and the American people. The Office of Policy and Evaluation works with internal clients to discuss the mission, goals, and objectives of programs and then designs surveys that measure effectiveness in reaching these goals. Exchange participants are able to access E-GOALS surveys directly through a web link and enter their own responses. The Evaluation Division analyzes the data for the program office and delivers comprehensive Key Findings reports and results-driven Rapid Reports. E-GOALS enables the Evaluation Division to produce customized web-based surveys, access survey data from a centralized repository, and provide customized reports that save money for our partner organizations.
|
|
Session Title: Evaluation of Various Educational Programs in Different Countries of the Globe
|
|
Multipaper Session 315 to be held in Peale Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Norma Fleischman,
United States Department of State,
fleischmanns@state.gov
|
|
Evaluation of High School Graduates in Brazil: A Decade of Learning
|
| Presenter(s):
|
| Ana Carolina Letichevsky,
Cesgranrio Foundation,
anacarolina@cesgranrio.org.br
|
| Abstract:
This paper presents the history of standardized evaluation of High School students in Brazil over the last ten years. Conducted annually by the Federal Department of Education, it collects students' data by means of a survey and an exam. This evaluation's several editions generated ample learning, such as: (a) the skills and abilities better developed throughout High School, (b) ways to obtain substantial student adherence within a voluntary evaluation process, (c) ways of mobilizing teachers and technical-pedagogical staff to use evaluation results for High School improvement, (d) how communities at large incorporate these evaluations and their results, (e) the profile of students in the entry and exit High School grades, and (f) what can be done in order to improve the evaluative process. In sum, this paper presents and discusses not only what we have learned, but also how to learn through evaluation.
|
|
Educational Evaluation Across Nations: Methodological and Conceptual Issues Confronting a Cross-country Delphi Study
|
| Presenter(s):
|
| Hsin-Ling Hung,
National Taiwan Normal University,
hsonya@gmail.com
|
| Yi-Fang Lee,
National Chi Nan University,
ivanalee@ncnu.edu.tw
|
| James W Altschuld,
The Ohio State University,
altschuld.1@osu.edu
|
| Abstract:
The impact of globalization suggests that our knowledge of other parts of the world needs to be regularly reviewed. Consequently, there is rising interest in understanding the state of educational evaluation, especially in Asia-Pacific. A cross-country study provides evaluators with international interests an opportunity to learn the state of educational evaluation in the region.
The Delphi technique has been widely employed in various disciplines. Despite different opinions about the Delphi method, generally, it is a good vehicle for collecting expert opinion from a group whose members cannot meet effectively face to face. The modified electronic Delphi study is especially appropriate for a cross-country study involving experts residing in different time zones.
This presentation intends to describe the methodological and conceptual issues (the recruitment of participants, instrument development, language, and cultural diversity) in this cross-country Delphi study. Additionally, problems encountered in cross-country collaboration and strategies for resolving problems will be covered.
|
|
Challenges and Good Practices in Evaluating Anti-child Labor and Basic Education Programs Worldwide
|
| Presenter(s):
|
| Katharine Wheatley,
Macro International Inc,
katharine.a.wheatley@orcmacro.com
|
| Lisa Slifer-Mbacke,
Macro International Inc,
lisa.c.slifer-mbacke@macrointernational.com
|
| Abstract:
This presentation explores the evaluation of anti-child labor and basic education projects in developing countries worldwide, with an emphasis on the appropriate selection and application of evaluation methodologies, as well as quality control. It is based upon a comparative study conducted for the U.S. Department of Labor of more than 20 mid-term and final evaluations of projects in Africa, Asia, and Latin America during 2005-2007, as well as evaluators' reflections upon the experience. The evaluations used multiple methods, including document reviews, comparative data tables, key informant interviews, group interviews, site visits, and classroom observation. In some cases, participatory evaluation methodologies were employed. The evaluations were conducted under often difficult environmental conditions, and by various evaluators, presenting quality control challenges. The paper presents lessons learned and good practices in evaluating anti-child labor and basic education programs worldwide.
|
|
Undergraduate Education in Vietnam: Insights Gained From an Evaluation of Vietnam's Postsecondary Education From a Cross-national Perspective
|
| Presenter(s):
|
| Peter J Gray,
United States Naval Academy,
pgray@usna.edu
|
| Lynne McNamara,
Vietnam Education Foundation,
lynnemcnamara@vef.gov
|
| Phuong Nguyen,
Vietnam Education Foundation,
phuongnguyen@vef.gov
|
| Abstract:
This paper discusses the Vietnam Education Foundation Initiative on the Status of Undergraduate Education in Vietnam. The purposes of the Initiative are (1) to assess current conditions of teaching and learning in computer science, electrical engineering, and physics at four select Vietnamese universities; (2) to identify opportunities for improvement; (3) to assist in implementing changes in Vietnamese higher education; and (4) to produce models that can be adopted across academic fields and institutions in both Vietnam and elsewhere. The Initiative used a cross-national perspective to conduct qualitative case study research at the four universities. Five critical areas in need of reform were identified: undergraduate teaching and learning, undergraduate curriculum and courses, instructors, graduate education and research, and assessment of student learning outcomes and institutional effectiveness. Opportunities for improvement and scenarios for change are offered in relationship to the issues identified in each of the areas.
|
| | | |
| In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
This is Not a Test: Building Instruments to Measure Course Outcomes Beyond Knowledge |
|
Roundtable Presentation 317 to be held in Jefferson Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
| Presenter(s):
|
| Kelly Fischbein,
American Red Cross,
fischbeink@usa.redcross.org
|
| Thearis Osuji,
Macro International Inc,
thearis.a.osuji@orcmacro.com
|
| Abstract:
Determining the quality of a training program can be a learning experience for evaluators. In 2006, a nonprofit organization revised its program teaching first aid for lay rescuers. Student achievement in the course is traditionally measured by a written knowledge exam. However, responding to an emergency situation is arguably more about eliciting behavior than recalling knowledge. An internal evaluation team agreed that a more appropriate determinant of the new program's success would be the extent to which students leave the class willing and able to perform the course skills, in addition to understanding the content. Literature from multiple disciplines suggests that knowledge is only a minor predictor of response behavior. The presentation will focus on how a simple instrument revision process became a revision of the conceptual model, and will discuss challenges evaluators may face in revising the criteria of merit they measure.
|
| Roundtable Rotation II:
Experiences With an Online Student Rating System |
|
Roundtable Presentation 317 to be held in Jefferson Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
| Presenter(s):
|
| John Ory,
University of Illinois at Urbana-Champaign,
ory@uiuc.edu
|
| Christopher Migotsky,
University of Illinois at Urbana-Champaign,
migotsky@express.cites.uiuc.edu
|
| Abstract:
After 30 years of using a paper-based student rating system for the evaluation of courses and professors, our university is in the process of converting to an online system. The presentation will share our experiences in deciding why to change, developing the system, marketing it to students and professors, and comparing online and paper-based results. It is hoped that participants at the roundtable will share their experiences with their own student rating systems.
|
|
Session Title: The Safe Start Demonstration Project: Design, Approaches and Outcomes of Evaluating a Systems Change Continuum of Care for Children Exposed to Violence
|
|
Panel Session 318 to be held in Washington Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| David Chavis,
Association for the Study and Development of Community,
dchavis@capablecommunity.com
|
| Discussant(s):
|
| Kristen Kracke,
United States Department of Justice,
kristen.kracke@usdoj.gov
|
| Abstract:
Between 2000 and 2005, the Safe Start Demonstration Project was implemented in 11 sites located in diverse settings (e.g., urban, rural and tribal communities) throughout the United States. During this time more than 15,000 children exposed to violence and their families were involved in a continuum of care addressing their multiple needs.
The evaluation of the Safe Start Demonstration Project provides a model for evaluating a continuum of care for children exposed to violence. This panel will provide an overview of the methods and results of the evaluation, focusing on innovative approaches to evaluating this type of initiative. A discussion of the application of a case study methodology to help stakeholders understand the outcomes and impact of a continuum of care for children exposed to violence will be included. The session will also discuss the use of an innovative process to develop site-level research incubators in order to strengthen evaluation learning.
|
|
Overview of the Design of the National Evaluation of the Safe Start Demonstration Project
|
| David Chavis,
Association for the Study and Development of Community,
dchavis@capablecommunity.com
|
|
As part of the national evaluation for the Safe Start Demonstration Project, the National Evaluation Team (NET) was expected to conduct a cross-case analysis and generate a report that highlighted patterns across the 11 grantees' efforts. This presentation will provide an overview of the Safe Start Program and its evaluation design. It will also include a discussion of the development of the research incubator as a means to strengthen the understanding of site-based issues.
|
|
|
Applying a Theory of Change Approach to the Evaluation of the Safe Start Demonstration Project
|
| Mary Hyde,
Association for the Study and Development of Community,
mhyde@capablecommunity.com
|
| David Chavis,
Association for the Study and Development of Community,
dchavis@capablecommunity.com
|
|
Between 2000 and 2005, the Safe Start Demonstration Project was implemented in 11 sites located in diverse settings (e.g., urban, rural and tribal communities) throughout the United States. During this time more than 15,000 children exposed to violence and their families were identified and, when appropriate, provided with mental health treatment and services to address their multiple needs. This presentation will discuss the evaluation of the national initiative, which utilized a process of testing a theory of change for the development of a continuum of care for children exposed to violence. It will also discuss how critical evaluation questions regarding whether and how the initiative worked were documented using a cross-site case study methodology grounded in the theory of change.
| |
|
Using Process Evaluation Findings and Grantee Level Activities to Generate an Understanding of Systems Change Strategies in a Continuum of Care for Children Exposed to Violence
|
| Mary Hyde,
Association for the Study and Development of Community,
mhyde@capablecommunity.com
|
| David Chavis,
Association for the Study and Development of Community,
dchavis@capablecommunity.com
|
|
During the implementation of the Safe Start Demonstration Project (2000-2006), the Safe Start National Evaluation Team used several evaluation activities to discover and understand the impact of the project on children exposed to violence and their families, the systems (e.g., human services, mental health) with which they interacted, and the communities in which they lived. Findings from two of the evaluation activities (annual process evaluations and analyses of grantee-level activities identified as promising practices) have been combined in order to map the systems change strategies identified through the process evaluations to examples of the promising practices that supported them. Together, these two evaluation processes yield valuable information for practitioners on how to engage families, systems and communities to create more responsive systems capable of meeting the needs of children exposed to violence. These findings and examples also provide useful strategies and practices for future efforts focused on children exposed to violence.
| |
|
Maximizing Data Collection for Children Exposed to Violence
|
| S Sonia Arteaga,
Association for the Study and Development of Community,
sarteaga@capablecommunity.com
|
| Joie Acosta,
Association for the Study and Development of Community,
jacosta@capablecommunity.com
|
|
The Safe Start Demonstration Project sought to bring about systems change and, in the process, address practice and research related to exposure to violence among young children (six years and younger). Proper measurement and data collection are essential for developing effective programs and interventions addressing children's exposure to violence. Based upon discussions with practitioners and researchers, a review of the literature, and a review of practices by Safe Start Demonstration grantees, this presentation will describe the factors that influence how practitioners and researchers choose measures, commonalities in their criteria for measures of children's exposure to violence, the process by which practitioners and researchers jointly select and implement measures, and promising data collection practices. The common criteria include measures that are brief, non-intrusive, psychometrically sound, and clinically useful. Practices for engaging and retaining families and service providers in data collection, maximizing and managing data collection, and data-based decision-making will be discussed.
| |
|
Session Title: Evaluating Volunteering in Low-income Communities: A Participatory Approach
|
|
Panel Session 319 to be held in D'Alesandro Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Deborah Levy,
Points of Light Foundation,
dlevy@pointsoflight.org
|
| Abstract:
The Points of Light Foundation practices a family strengthening approach that is rooted in the opportunities that volunteering presents to strengthen vulnerable families and communities. One strategy for advancing family strengthening is to engage residents of low-income communities as volunteers. Referred to as "neighboring," this strategy focuses on enabling and empowering residents of low-income communities to contribute their time and talent to address critical needs in their community.
Over the years, POLF has distributed funds to Volunteer Centers across the country to engage in a variety of "Neighboring" projects. POLF has evaluated the process and its impact on the communities served; however, in the past two years it has moved to a participatory approach in which all grantees are required to collect their own data and submit them to POLF for analysis. Simultaneously, POLF conducts an overall process and outcomes evaluation using its own data collection methods.
|
|
Overview of the Neighboring Concept and the Family Strengthening and Neighborhood Transformation Grant
|
| Polina Mackievsky,
Points of Light Foundation,
pmackievsky@pointsoflight.org
|
|
This presentation will provide the audience with a description of the concept of Neighboring, the overall project that the Points of Light Foundation is funded to conduct and for which it funds the Volunteer Centers, as well as the selection of sites and the challenges that arose along the way.
Additionally, the presenter will provide the history of the project and explain why she believes this new evaluation process has proved successful.
|
|
|
Evaluating Multi-site Grant Funded Projects, A Participatory Approach
|
| Deborah Levy,
Points of Light Foundation,
dlevy@pointsoflight.org
|
|
The lead evaluator will present the entire process, from start to finish, of engaging the grantees to create their own evaluation plans and collect their own data. In addition, she will present her own evaluation plan for the grant and how all of the data feed into one evaluation system.
Successes and challenges along the way will be presented and used for a discussion with the audience.
| |
|
A Grantee Perspective
|
| Deborah Levy,
Points of Light Foundation,
dlevy@pointsoflight.org
|
|
The third participant will describe the project that was conducted as well as the evaluation plan that was created. Additionally, information regarding the successes and challenges of the evaluation process will be provided, along with a comparison with other grant projects and their differing evaluation-related expectations. Finally, preliminary findings related to the individual site will be provided.
| |
|
Session Title: Introduction to You Get What You Measure™
|
|
Demonstration Session 320 to be held in Calhoun Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Presenter(s): |
| Shanna Ratner,
Yellow Wood Associates Inc,
shanna@yellowwood.org
|
| Kim Norris,
Independent Consultant,
jknorris@highstream.net
|
| Abstract:
In development and use for over ten years, You Get What You Measure™ provides a framework for helping organizations direct action toward measurable goals. By recognizing the importance of values in group work, You Get What You Measure™ creates a culture of value-driven group learning. Through You Get What You Measure™, participants explore in detail the connections between goals and indicators, learn how to identify key leverage indicators, examine key assumptions, design measures, and use measurement plans to get to action. By implementing measurement plans and revisiting measurement results on a regular basis, organizations can test key assumptions about how or whether specific actions affect progress towards their goals. You Get What You Measure™ incorporates strategic planning and evaluation in one effective and efficient process that incorporates systems thinking and emphasizes learning through measurement.
|
|
Session Title: What Have We Learned From/What Do We Still Need to Learn About Developing Evaluation Organizations?
|
|
Think Tank Session 322 to be held in Preston Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Presenter(s):
|
| Carol Fendt,
University of Illinois, Chicago,
crfendt@hotmail.com
|
| Cindy Shuman,
Kansas State University,
cshuman@ksu.edu
|
| Bret Feranchak,
Chicago Public Schools,
bferanchak@cps.k12.il.us
|
| Stacy Wenzel,
University of Illinois, Chicago,
swenzel@uic.edu
|
| Discussant(s):
|
| Lisa Raphael,
University of Illinois, Chicago,
lisamraphael@yahoo.com
|
| Meghan Burke,
University of Illinois, Chicago,
meghanbm@gmail.com
|
| Abstract:
In creating evaluation organizations, what type of capacity building and role definition is necessary to develop, staff, and manage these groups? Here we bring together a think tank of evaluators interested in developing their organizations. To start the conversation, representatives from three relatively new evaluation organizations discuss their stages of development. They consider, for example, how they have created a base understanding of the need for formative evaluation, developed clients, managed workloads, and found, hired, trained, and retained qualified staff. The three cases are 1) the department of program evaluation within the third largest public school district in the USA, 2) a service-oriented office housed within a college at a land grant university, and 3) an evaluation group within a research university that has moved between colleges and is not tied to any regular staff or faculty. This will be an interactive session; all participants will be encouraged to engage in the discussion.
|
|
Session Title: The Centrality of Learning to Evaluation Practice and Theory
|
|
Multipaper Session 323 to be held in Schaefer Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Theories of Evaluation TIG
|
| Chair(s): |
| Sheila Arens,
Mid-continent Research for Education and Learning,
sarens@mcrel.org
|
|
Evaluation as/of Learning
|
| Presenter(s):
|
| Janice Fournillier,
Georgia State University,
jfournillier@gsu.edu
|
| Cecile Cachaper,
Independent Consultant,
cecile.dietrich@verizon.net
|
| Abstract:
The goal of this paper is to reconstruct the nature of evaluation practice. We posit that in order to do rigorous evaluation of learning programs one must conceptualize evaluation as learning. Using the concepts of transaction (Dewey, 1991; Garrison, 2001) and inter-subjectivity (Mead, 1938), we argue that evaluation is a natural outgrowth of quality educational practice. Furthermore, we reconstruct the roles of stakeholders and evaluators as co-learners within the inter-subjective experience of evaluation. As such, we surmise that one standard used in evaluation should emphasize the mutuality of the learning that takes place within the evaluation process itself. Mutuality of learning should be assessed by the utility of the evaluation (Joint Committee Standards for Educational Evaluation, 1994) and the extent to which the evaluation methodology remained flexible and emergent throughout the evaluation. Using these criteria, we discuss an example of evaluator-stakeholder relationships within a meta-evaluation of an NCLB program.
|
|
Evaluator as Learner: Rethinking Roles and Relationships
|
| Presenter(s):
|
| Tysza Gandha,
University of Illinois at Urbana-Champaign,
tgandha2@uiuc.edu
|
| Abstract:
This paper explores the nature of the connection between evaluation and learning, particularly as manifested in theories of evaluator roles and relationships. Evaluation activities are often understood as pedagogical, with evaluators cast as educators (Cronbach and Associates, 1980; Patton, 1997; Weiss, 1999). This paper extends that view by rethinking the evaluator as both teacher and learner. Drawing from the practical hermeneutics tradition (Schwandt, 2002) and the action research literature, I envision evaluators as learning partners in communities of inquiry (Cochran-Smith and Lytle, 1999; Noffke, 1997; Anderson, Herr and Nihlen, 2007). I argue that without genuine relationships characterized by mutual openness and communication, evaluation's potential for contributing to learning and to social betterment is undermined. Understanding evaluators as learners is one critical response to the increasing co-option of evaluation by management and government in the current era of accountability and performance measurement (Ryan, 2007; Schwandt, 2007).
|
|
How Can Our Society Learn Through Contextualized Evaluation? A Renewed Appreciation of Generalization in Evaluation
|
| Presenter(s):
|
| Wonsuk Lee,
University of Illinois, Urbana,
wlee17@uiuc.edu
|
| Abstract:
The importance of context has come to be regarded as a point of consensus in the field of evaluation. However, this consensus has inevitably led to a new controversy about external validity, that is, generalization. The core of the controversy is whether generalization is possible in context-based evaluations and, if so, how evaluators can pursue it. This paper addresses this seemingly contradictory position. To that end, the relationship between the consideration of context and generalization, a new interpretation of generalization, and the importance of generalization in contextualized evaluation are discussed. A way of pursuing generalization is also introduced, based on Cronbach's ideas on evaluation. Finally, an evaluation of the Comprehensive Child Development Program (CCDP) is examined from this perspective on pursuing generalization.
|
|
Standards-based, Competency-based and Appreciative Inquiry: Using Program Theory for Assessing Program Quality and Promoting Organizational Learning
|
| Presenter(s):
|
| Edith J Cisneros-Cohernour,
Universidad Autonoma de Yucatan,
cchacon@uady.mx
|
| Thomas E Grayson,
University of Illinois at Urbana-Champaign,
tgrayson@uiuc.edu
|
| Abstract:
This paper presents the strengths and limitations of three theoretical approaches for determining the benefits and contributions of the program Escuelas de Calidad (Schools for Quality), particularly the quality of school-based management and its relationship with student learning in special education schools in southeastern Mexico. The paper examines how standards-based, competency-based, and Appreciative Inquiry approaches were used to examine the quality of the program. In addition, the paper analyzes the implications for program improvement and organizational learning. The authors also examine how the choice of theoretical framework could increase our understanding of the program and its quality and raise issues of equity and fairness.
|
| | | |
|
Session Title: Disaster and Emergency Management Evaluation TIG Business Meeting and Panel: Evaluation of the National Case Management Consortium Katrina Aid Today: What Have We Learned?
|
|
Business Meeting with Panel Session 324 to be held in Calvert Ballroom Salon B on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Disaster and Emergency Management Evaluation TIG
|
| TIG Leader(s):
|
|
Liesel Ritchie,
Western Michigan University,
liesel.ritchie@wmich.edu
|
|
Scott Chaplowe,
American Red Cross,
schaplowe@amcrossasia.org
|
|
Mary Davis,
University of North Carolina, Chapel Hill,
mvdavis@email.unc.edu
|
| Chair(s): |
| Cindy Roberts-Gray,
Third Coast Research and Development Inc,
croberts@thirdcoastresearch.com
|
| Discussant(s):
|
| Celine Carbullido,
United Methodist Committee on Relief-Katrina Aid Today,
celine.carbullido@katrinaaidtoday.org
|
| Abstract:
Katrina Aid Today (KAT), the first consortium of its kind, brings together nine national case management partners to facilitate the recovery of 100,000 households affected by Hurricane Katrina. Our panel comprises the in-house evaluation team at the United Methodist Committee on Relief-KAT; the Federal Emergency Management Agency (FEMA), which sponsors KAT; the Coordinated Assistance Network (CAN), responsible for information sharing; The Salvation Army (TSA), in its role as a member of the consortium; and Third Coast Research & Development Inc, the external mid-term formative evaluator. Following brief presentations, panel members will join session attendees in small group discussion to identify lessons learned and issues for continued attention. The discussant will review the groups' responses and outline a plan for using lessons learned to 1) facilitate long-term recovery of disaster survivors and 2) improve disaster and emergency management evaluation.
|
|
United Methodist Committee on Relief: Katrina Aid Today - The Coordinator's Role in the Evaluation
|
| Amanda Janis,
United Methodist Committee on Relief-Katrina Aid Today,
amanda.janis@katrinaaidtoday.org
|
|
Katrina Aid Today (KAT) was created by the United Methodist Committee on Relief in response to the Federal Emergency Management Agency and the Department of Homeland Security's call for a national consortium of case management agencies following Hurricane Katrina. KAT oversees and coordinates all programmatic operations of its nine partners, including a standardized monitoring and evaluation system for reporting the results and impact of the consortium's case management operations. KAT's monitoring and evaluation system grew out of the original program design and has remained flexible enough to complement the organizational frameworks of the individual national partners as well as the evolution of the program and services. KAT's results framework includes use of and reporting from the Coordinated Assistance Network (CAN) database, implementing-partner site visits, monitoring and evaluative reporting, support for partner evaluations, client focus groups, external and internal mid-term formative evaluations, and final summative evaluations.
|
|
|
Federal Emergency Management Agency: The Sponsor's Role in the Evaluation
|
| Liz Monahan-Gibson,
Federal Emergency Management Agency,
liz.gibson@dhs.gov
|
|
Eight weeks after Hurricane Katrina devastated the Gulf States and displaced their residents, the Federal Emergency Management Agency (FEMA) announced its sponsorship of a national, nine-partner consortium of case management agencies called Katrina Aid Today (KAT). The first of its kind, Katrina Aid Today, with guidance from FEMA, established a framework for program evaluation in order to measure client outcomes through specified indicators as well as to monitor and evaluate the consortium's combined efforts. FEMA's Voluntary Agency Liaisons stepped forward in their traditional role to connect and coordinate KAT consortium partners with the larger network of disaster responders. This role has both informed the work of KAT and connected the larger federal agency to the lessons of the program. FEMA is now authorized under the Stafford Act to implement case management services in future disasters and looks to KAT as a model for future use.
| |
|
Coordinated Assistance Network (CAN): The Role of Technology in Information Sharing
|
| Noah Simon,
Coordinated Assistance Network,
noah@can.org
|
|
In the recovery phase of a natural or man-made disaster, aid organizations work around the clock to bring vital services to those who are suffering. This work has often been hampered by the inability of disaster relief organizations to quickly and effectively communicate client needs and services offered among a continuum of agencies providing services. In order to collect client information, coordinate services, and prevent duplication of services by disaster relief organizations, a secure, web-based database, the Coordinated Assistance Network (CAN), was created for information sharing. As part of the agreement between the United Methodist Committee on Relief/Katrina Aid Today and consortium members, agencies use the CAN technology platform, at no cost, as their common data sharing information system. CAN has proven an effective data collection tool for standardized reporting and monitoring of case management information as well as coordinated provision of referrals and services.
| |
|
The Salvation Army - Southern Territory: A Partner's Role in the Evaluation
|
| Terry Hammond,
Salvation Army,
terry_hammond@uss.salvationarmy.org
|
|
The Salvation Army is one of the nine Katrina Aid Today (KAT) national partners providing long-term case management to persons and families affected by Hurricane Katrina. In an effort to ensure quality service to clients as well as compliance with the federal grant, The Salvation Army (TSA) has implemented independent evaluation strategies complementary to the consortium's overall evaluation framework. Central to TSA's KAT proposal and implementation of disaster recovery case management is improving the wellbeing of the disaster-impacted households they serve. In order to assess this, all TSA case managers administer the 'General Contentment Scale' with client households at the beginning, midpoint, and closure of the case management process. TSA is the only KAT partner to focus solely on client outcomes as part of its independent evaluation. The Salvation Army-Southern Territory division oversees the Army's national implementation of Katrina Aid Today.
| |
|
External Formative Mid-term Evaluation: The Role of the External Evaluator at Mid-term
|
| Mary Sondgeroth,
Third Coast Research and Development Inc,
sondg@austin.rr.com
|
|
The independent external mid-term evaluation of the innovative National Case Management Consortium, Katrina Aid Today (KAT), conducted in late fall 2006, consisted of five studies providing five "voices" in response to the formative evaluation questions. The studies included focus groups with KAT clients (Katrina survivors); focus groups with case managers (many of whom are also Katrina survivors); telephone interviews with program managers representing the nine national case management partners, the Federal Emergency Management Agency (FEMA), and the Coordinated Assistance Network (CAN); secondary quantitative analyses of data recorded in CAN to document assessed needs, disaster recovery plans, use of resources including Long-Term Recovery Committees, and client outcomes; and qualitative analyses of the program's many written records and reports. Implications for assistance to survivors of major disasters in the future will be discussed, and learning from this process and these results will be enhanced through dialogue with conference attendees.
| |
|
Session Title: An Overview of Proven Customer Service Practices for Independent Evaluation Consultants
|
|
Panel Session 325 to be held in Calvert Ballroom Salon C on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Independent Consulting TIG
|
| Chair(s): |
| Carol Haden,
Magnolia Consulting LLC,
carol@magnoliaconsulting.org
|
| Abstract:
The success of independent evaluation consulting depends, in part, upon a consultant's ability to provide exceptional service to clients. This session will feature a discussion of best practices in customer service relevant for independent evaluation consultants. The session will provide an overview of some of the principal issues facing evaluation consultants seeking to meet clients' needs and will highlight ways to successfully navigate the customer service aspect of independent evaluation consulting. Specific topics to be addressed in this session include: 1) developing needs-based, comprehensive, and straightforward proposals; 2) communicating the responsibilities and expectations of all involved parties; 3) utilizing effective and responsive feedback and communication processes and procedures; 4) providing clients with relevant, practical, and timely results; and 5) sustaining customer service beyond project conclusion.
|
|
Serving Clients Through Collaborative Planning and Shared Understanding
|
| Stephanie Wilkerson,
Magnolia Consulting LLC,
stephanie@magnoliaconsulting.org
|
|
This presentation will address how customer service begins before the initial contact with a client and flourishes throughout the evaluation planning process. Presentation attendees will gain a better understanding of how high quality customer service is evidenced during the beginning stages of evaluation work, including evaluation design and proposal development. Stephanie's presentation will focus on important elements of customer service that are reflected in the planning and development process. Stephanie has over 10 years of experience working with clients from the onset of evaluations through their completion. Her customer service skills and the strong relationships she built with clients are what enabled her to leave a large company and start her own independent consulting firm. She strongly believes that it is the care with which Magnolia Consulting works with clients and study participants that yields successful evaluations, sustains ongoing work with clients, and leads to new work through client referrals.
|
|
|
Serving Clients through Effective and Responsive Communication
|
| Tracy Herman,
Magnolia Consulting LLC,
tracy@magnoliaconsulting.org
|
|
This presentation will address how evaluators can use effective and responsive feedback and communication processes and procedures. Participants will gain a better understanding of how to
1) respond to clients' specific and ongoing needs in a timely and feasible manner,
2) create feedback loops among stakeholders,
3) staff a project in a way that promotes accessibility to evaluators, and
4) establish internal protocols for communication with stakeholders.
Tracy shares the team responsibility of serving as a site lead for school districts participating in studies. Tracy has insight into the importance of maintaining close contact with study participants to guide them through the timeline and responsibilities associated with each study. She also ensures that study participants receive timely responses to their questions and concerns. In addition to maintaining contact with study participants, Tracy communicates with the study's Principal Investigator, the client, and other relevant stakeholders, as needed.
| |
|
Serving Clients Through Useful and Timely Reporting
|
| Lisa Shannon,
Magnolia Consulting LLC,
lisa@magnoliaconsulting.org
|
|
This presentation will address how independent evaluation consultants can maintain high customer service standards by providing clients with relevant, practical, and timely results and by sustaining customer satisfaction after a project has ended. Participants in this session will expand their knowledge about the importance of customizing data presentations for specific audiences. Participants will also learn practical tips and techniques for communicating evaluation results to diverse audiences so their clients will relate to and understand the project's findings. This session will conclude with an overview of the importance of nurturing relationships with clients beyond the duration of a project and will discuss strategies that evaluation consultants can use, such as seeking feedback on completed projects, responding to requests for additional information, and maintaining ongoing communication. Lisa has over nine years of experience serving customers during evaluations of community, parenting, and school-based programs as well as through managing curriculum efficacy studies.
| |
|
Session Title: Building Evaluation Capacity in Extension Systems
|
|
Panel Session 326 to be held in Calvert Ballroom Salon E on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Chair(s): |
| William Trochim,
Cornell University,
wmt1@cornell.edu
|
| Discussant(s):
|
| Michael Duttweiler,
Cornell University,
mwd1@cornell.edu
|
| Donald Tobias,
Cornell University,
djt3@cornell.edu
|
| Abstract:
Systems approaches to designing evaluation systems for large multi-program organizations like extension require a balance of standardization and customization. At Cornell Cooperative Extension in New York State, a new approach to systems evaluation has been developed and is being implemented in seven counties. This approach encompasses several key concepts: stakeholder incentive analysis; program life-cycles and their relation to evaluation methods; program pathway models, their interconnections, and their relation to the research evidence base; and a decentralized, bottom-up, networked approach to evaluation capacity building. The approach incorporates several new tools: an evaluation planning protocol, a standardized evaluation planning format, and a web-based system for managing information for evaluation system planning. This panel presents the systems evaluation approach, discusses management and implementation challenges, and describes a resulting evaluation special project that uses a switching-replications randomized experimental design to evaluate a mature, well-established program in nutrition education.
|
|
Protocols, Plans and Networks: The Nuts-and-Bolts of Systems Evaluation
|
| William Trochim,
Cornell University,
wmt1@cornell.edu
|
|
This presentation describes a protocol that is being utilized in seven extension associations throughout New York State to help programs develop evaluation capacity, program models, and evaluation plans. The protocol is a series of steps implemented over the course of approximately nine months that includes: describing relevant stakeholders and their motivations and incentives for evaluation; developing a program theory in the form of a pathway logic model that articulates assumptions, contextual issues, and inputs and describes expected causal connections between activities, outputs, and outcomes; classifying programs along a program life-cycle that signals the types and level of evaluation that would be appropriate; developing an evaluation plan for each program; building evaluation capacity through the development of an evaluation network; and using a web-based Netway (networked pathway) system for entering and managing all information relevant to program models, evaluation plans, and the relevant research evidence base.
|
|
|
Motivation and Management in Evaluation
|
| Cath Kane,
Cornell University,
cmk42@cornell.edu
|
|
This presentation addresses several key themes surrounding evaluation incentives and management, focusing on year one of an Evaluation Planning Partnership project with Cornell University Cooperative Extension New York City (CUCE NYC). Understanding the motivations and incentives of staff and participants is a critical component of evaluation planning. Issues include: 1) funding: managers increasingly view evaluation systems as a matter of survival; 2) parallel mandates: incentive analysis can identify synergies with outside mandates that can be used to improve evaluation implementation and quality; and, 3) staff participation: identifying internal staff incentives can create unique opportunities for evaluation design. Several management strategies will be reviewed: the development of an effective Memorandum of Understanding; the use of logic models and evaluation plans; the clarification of the merits of descriptive demographics versus outcome measurement; and, the implementation of systems evaluation in a dynamic environment. Examples of these issues are provided from real-world project contexts.
| |
|
Incorporating Experimental Design into Extension Evaluation: The Switching Replications Waiting List Design
|
| Sarah Hertzog,
Cornell University,
smh77@cornell.edu
|
|
For extension programs that are relatively mature (implemented consistently, with well-established, high-quality outcome measurement in place), it is useful to undertake evaluations that demonstrate effectiveness with controlled comparative designs. A switching-replications randomized experimental design is appropriate in waiting list situations where there are more eligible participants than can receive the program at one time. After obtaining informed consent, participants are randomly assigned to early or later program sessions. All participants are measured on three waves of outcomes: prior to the early session, between sessions, and after the later session. This presentation describes implementation and data analysis challenges posed by such a design, and considers the advantages and disadvantages of its use in extension evaluation contexts.
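As a purely illustrative sketch, not drawn from the Cornell project itself, the following Python fragment shows the core logic of such a design: consenting participants are randomly split into early and later sessions, and every participant is then measured at three waves. The participant list and group sizes are hypothetical.

import random

def assign_switching_replications(participant_ids, seed=2007):
    # Randomly split consenting participants into early and later program sessions
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"early": ids[:half], "later": ids[half:]}

# Hypothetical waiting list of 40 participants
groups = assign_switching_replications(range(1, 41))

# All participants are then measured at three waves:
#   wave 1: before the early session (pretest for both groups)
#   wave 2: between sessions (posttest for "early", second pretest for "later")
#   wave 3: after the later session (follow-up for "early", posttest for "later")
print(len(groups["early"]), "assigned to early;", len(groups["later"]), "assigned to later")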
| |
|
Session Title: Integrating Research-based Information into the Educational Practices of School Workers: What We've Learned so far From a Strategy Involving 200 High Schools
|
|
Panel Session 327 to be held in Fairmont Suite on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Michel Janosz,
University of Montreal,
michel.janosz@umontreal.ca
|
| Abstract:
The New Approach New Solutions (NANS) Strategy is the latest plan of the Québec government to increase school success among adolescents living in disadvantaged areas. The most disadvantaged high schools of the province are invited to develop themselves as learning organizations and to engage in a rigorous problem-solving process leading to the elaboration and implementation of action plans adjusted to the specific needs of their community. The Strategy promotes interventions based on 1) best-practice principles and 2) regular feedback from the evaluation team. In this panel session, these two kinds of information are designated as Research-Based Information (RBI). The five presentations proposed will depict the work done by our team in the NANS evaluation to measure RBI utilization and to identify the strategies most likely to bring school practitioners (teachers, professionals, and decision-makers at different levels) to use RBI in their practices.
|
|
The New Approaches New Solutions (NANS) Strategy: Overview of the Evaluation and the Use of Results for Decision Making
|
| Michel Janosz,
University of Montreal,
michel.janosz@umontreal.ca
|
| Jonathan Levesque,
University of Montreal,
jonathan.levesque@umontreal.ca
|
| Jean L Belanger,
University of Montreal,
belanger.j@uqam.ca
|
|
The NANS strategy (2002-2008) pursues the goal of increasing school success among adolescents from very disadvantaged communities. Following the principles of successful prevention programs, 200 high schools were supported in engaging in a rigorous process of program planning, implementation, and evaluation. Data gathered for the longitudinal evaluation of NANS (70 sampled high schools, 30,000 students, 2,500 teachers and principals) were also used to give individualized feedback to schools about their evolving situation (student and school characteristics). In this presentation, we will first describe the NANS program and its evaluation design. Second, we will present cross-sectional (Spring 2006) results gathered with self-reported questionnaires administered to 100 principals and 1,925 teachers showing that (1) principals had more positive views about the utility of research-based information and (2) they were much greater users of the feedback given by the evaluation team. Implications for knowledge transfer in education will be further discussed.
|
|
|
Formative Evaluation of CIDA, a Ministry Task Team in Charge of Facilitating the Use of Research-based Information in New Approach New Solutions (NANS) Schools
|
| Frederic Nault-Brière,
University of Montreal,
fred_briere@yahoo.ca
|
| Christian Dagenais,
University of Montreal,
christian.dagenais@umontreal.ca
|
| Didier Dupont,
University of Montreal,
didierdupont@fastmail.net
|
| Julie Dutil,
University of Montreal,
julie.dutil@clipp.ca
|
| Alexandre Chabot,
University of Montreal,
alexandrechabot@fastmail.fm
|
| Michel Janosz,
University of Montreal,
michel.janosz@umontreal.ca
|
|
This presentation reports on the formative evaluation of a knowledge-transfer task team, the Coordination of Interventions in Disadvantaged Areas (CIDA), that was specifically created by the Ministry of Education, Leisure and Sports (MELS) in order to facilitate and promote the use of research-based information in schools involved in the NANS project. The objective of the evaluation was to identify and understand the factors that hinder or facilitate CIDA activities. The analytic material was composed of relevant sections from qualitative semi-structured interviews administered to the six CIDA members and other NANS participants for the 2006 global evaluation of NANS support activities. Data were analyzed using a technique of thematic analysis inspired by Grounded Theory (Glaser & Strauss, 1967).
| |
|
Factors Influencing the Role of School Boards in Supporting the Use of Research Based Information
|
| Christian Dagenais,
University of Montreal,
christian.dagenais@umontreal.ca
|
| Didier Dupont,
University of Montreal,
didierdupont@fastmail.net
|
| Frederic Nault-Brière,
University of Montreal,
fred_briere@yahoo.ca
|
| Julie Dutil,
University of Montreal,
julie.dutil@clipp.ca
|
| Alexandre Chabot,
University of Montreal,
alexandrechabot@fastmail.fm
|
| Michel Janosz,
University of Montreal,
michel.janosz@umontreal.ca
|
|
In the New Approach New Solutions (NANS) Strategy, school boards are responsible for providing support to their schools. An important part of that mandate involves supporting the use of research-based information (RBI). In order to identify the factors influencing the use of RBI, we carried out case studies in seven regions, using the same methodology described in the previous presentation (cf. Grounded Theory). The analysis allows for distinctions to be made among the various regions by taking into account the triangulation of four different perspectives: those of the Ministry, regional offices, school boards, and NANS schools. Results show correspondence among the points of view of the different stakeholders. The discussion will focus on the role of school boards as contributors to the effective use of RBI by school practitioners.
| |
|
The Development and Validation of a Behavior and Attitude Questionnaire to Measure Utilization of Research-based Information by School Practitioners
|
| Philip Abrami,
Concordia University,
abrami@concordia.ca
|
| Christian Dagenais,
University of Montreal,
christian.dagenais@umontreal.ca
|
| Michel Janosz,
University of Montreal,
michel.janosz@umontreal.ca
|
| Robert Bernard,
Concordia University,
bernard@concordia.ca
|
| Larysa Lysenco,
Concordia University,
y_lysenk@education.concordia.ca
|
| Marie Pigeon,
University of Montreal,
marie_pigeon@yahoo.ca
|
| Jonathan Levesque,
University of Montreal,
jonathan.levesque@umontreal.ca
|
|
Analysis of existing questionnaires on research utilization by school practitioners revealed a number of flaws in these instruments. Most were designed for descriptive purposes and lack established psychometric properties. Some reduce research utilization to instrumental use; others inquire only about practitioners' attitudes towards research or focus on isolated factors affecting research utilization. We therefore developed and validated a behavior and attitude questionnaire designed to measure educators' use of research-based information in school practice and to examine what factors predict this use. We derived the questionnaire from a comprehensive review of existing research, expert focus groups, and interviews. We refined and validated its four-factor structure through factor analysis of responses from school practitioners participating in the pilot study. The overall questionnaire as well as each factor has high internal consistency reliability (Cronbach's alpha > .73). All four factors contributed to explaining variance in the research utilization construct.
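As a hypothetical illustration, not the authors' instrument or code, the following Python fragment shows the standard computation of Cronbach's alpha for one factor of such a questionnaire, given a respondents-by-items response matrix. The responses below are invented.

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: rows = respondents, columns = items belonging to one factor
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                         # number of items
    item_vars = X.var(axis=0, ddof=1)      # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 5 practitioners rating 4 items on a 1-5 scale
responses = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 5]]
print(round(cronbach_alpha(responses), 2))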
| |
|
Beyond Numbers: Qualitative Evaluation of the Factors Influencing the Transfer of Research-based Knowledge Based on Perceptions of Professionals Working in New Approach New Solutions (NANS) Schools
|
| Alexandre Chabot,
University of Montreal,
alexandrechabot@fastmail.fm
|
| Christian Dagenais,
University of Montreal,
christian.dagenais@umontreal.ca
|
| Michel Janosz,
University of Montreal,
michel.janosz@umontreal.ca
|
| Philip Abrami,
Concordia University,
abrami@concordia.ca
|
| Robert Bernard,
Concordia University,
bernard@concordia.ca
|
|
This presentation communicates the results of a qualitative evaluation which sought to identify factors facilitating the transfer of research-based knowledge (RBK) in NANS high schools. The evaluation was designed to foster the emergence of different staff members' perceptions of issues concerning the use of RBK by teachers in their school. Four schools were sampled based on their scores on scales of RBK consultation, RBK use, and perception of/openness to RBK, as measured by a quantitative questionnaire. Subjects were selected using snowball sampling (Patton, 1990) in order to identify individuals concerned with research. Semi-structured interviews were administered to 19 respondents. Data analysis was inspired by Grounded Theory (Strauss & Corbin, 1998). Five major themes will be presented and discussed: 1) teachers' interest in and attitudes toward research, 2) the role of the school management team, 3) communication inside the school, 4) teachers' workload, and 5) current/ideal characteristics of RBK.
| |
|
Session Title: Supporting Partnerships to Assure Ready Kids (SPARK): Cultural Connections to Ready Schools, Native Hawaiians, Spanish Speaking Immigrants, and Refugee Children and Families
|
|
Panel Session 328 to be held in Federal Hill Suite on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Anthony Berkley,
W K Kellogg Foundation,
tb2@wkkf.org
|
| Abstract:
The purpose of the proposed panel is to describe evaluation results from Supporting Partnerships to Assure Ready Kids (SPARK). SPARK is a nation-wide school readiness initiative funded by the W.K. Kellogg Foundation. Panel members include the Initiative Level Evaluation (ILE) Team and local evaluators from Hawaii, Florida, and Georgia. The panel will describe key lessons learned pertaining to cultural connections between schools that serve vulnerable children and their communities. These lessons include evaluator experiences in working with Native Hawaiian children, Spanish-speaking immigrants in Miami-Dade County, and refugee children and families from Africa, Eastern Europe, and the Middle East now living in suburbs near Atlanta.
|
|
Pathways to Ready Schools: Cultural Connections between Schools That Serve Vulnerable Children and Their Communities
|
| Patrick Curtis,
Walter R McDonald & Associates Inc,
pcurtis@wrma.com
|
| Kate Simons,
Walter R McDonald & Associates Inc,
ksimons@wrma.com
|
|
The W. K. Kellogg Foundation (WKKF) launched a nationwide initiative emphasizing community-based collaboration and the development of strategic infrastructures to support early care and education and school readiness among vulnerable children. As part of the project, the Initiative Level Evaluation Team developed a definition of ready schools called Pathways to Ready Schools. The pathways were included in a survey of 252 elementary school principals in the State of Ohio. Respondents rated the importance of the pathways to a ready school as well as the need for improvement in their own schools. The authors analyzed the survey data, comparing the ratings of achievement pathways to those of cultural connections, and found that principals from schools with 50 percent or more economically disadvantaged students rated the need for improvement in cultural connections in their own schools 11 percent higher than principals from schools with less than 50 percent economically disadvantaged students.
|
|
|
Three Methods for Assessing Pre-K Programs and Elementary Schools in Hawai'i
|
| Morris Lai,
University of Hawaii,
lai@hawaii.edu
|
| Susan York,
University of Hawaii,
yorks@hawaii.edu
|
|
The SPARK Hawai'i evaluators will compare three methods for assessing pre-K programs and elementary schools in Hawai'i: the SPARK-developed Pathways to Ready Schools, the Culturally Healthy and Responsive Learning Environments (CHARLE), developed by Hawaiian educators, and the state-developed Hawaii Preschool Content Standards (HPCS). The evaluation team chose to analyze the connections, overlap, and gaps among the various domains. CHARLE consists of 16 principles that are further delineated into Schools and Institutions, Family, Community, Teacher, Learner, and Organizations. By comparing each domain to the SPARK Pathways and the HPCS, the team was able to determine a 'strength rating' for the bridges linking the groups of principles. The results raised as many questions as they answered, but gave SPARK Hawai'i a basis for discussions on how to align the three assessments as an aid to facilitate children's transition to formal education.
| |
|
Supporting Partnerships to Assure Ready Kids (SPARK) Florida: Impact on Spanish Speaking Immigrant Children and Families
|
| Charles Bleiker,
Florida International University,
bleikerc@fiu.edu
|
|
In a four-year evaluation of the SPARK Florida project, one group of children and their families stood out as a special population: Spanish-speaking immigrants. The two neighborhoods chosen for the SPARK project, Homestead/Liberty City and Allapattah/Model City, have high concentrations of recent immigrant families. One purpose of the study was to evaluate the performance of immigrant children in order to identify factors that predict success or failure in elementary school. The evaluators will present the results from a general developmental assessment (LAP-D) and a social and emotional battery (DECA) administered at the beginning of the project, compared to a recent measure of children's school readiness (ESI-K and DIBBELS). Within-group differences will be analyzed by country of origin, as well as between-group differences comparing the immigrant population with a matched comparison group of non-immigrant Spanish-speaking children.
| |
|
The Impact of Supporting Partnerships to Assure Ready Kids (SPARK) Georgia on Refugee Children and Families
|
| Kevin Baldwin,
Wellsys Corporation,
kbaldwin@wellsyscorp.com
|
|
SPARK Georgia uses a hub-based approach to service delivery in the community, with eight hubs located in two counties. One hub serves refugee families exclusively in a community that has experienced a large influx of refugees from Africa, Eastern Europe, and the Middle East. The purpose of this sub-study was to compare the developmental and reading readiness of refugee children with that of groups of Spanish- and English-speaking children. The evaluation team found that while the refugee and English-speaking groups performed at average levels on both the Ages and Stages Questionnaire (ASQ) and the GRTR! (a reading readiness screener), the Spanish-speaking children scored below average on some subscales of the ASQ. All groups performed at average levels or better on the GRTR!, an encouraging finding given the linguistic challenges faced by these children.
| |
|
Session Title: Hard Cases: Measuring and Facilitating Interdisciplinarity and Inter-Organizational Interactions
|
|
Multipaper Session 329 to be held in Royale Board Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Erik Arnold,
Technopolis,
erik.arnold@technopolis-group.com
|
|
University-Industry Collaboration: An Issue for Ireland as an Economy With a High Dependence on Academic Research
|
| Presenter(s):
|
| James Ryan,
CIRCA Group Europe Ltd,
jim.ryan@circa.ie
|
| Abstract:
In the last 7 years, Ireland has invested heavily in the concept of a knowledge economy. A strong emphasis has been placed on the development of the university sector as a national research base in which commercial technologies and expertise will develop. Recent concerns about the state of University-Industry collaboration resulted in a major study of this topic. The study also sought new models and initiatives to improve the level of collaboration. The study was conducted by CIRCA with US and other EU partners and involved consultation with industry, researchers, and academic management. The presentation will outline the findings.
|
|
Measuring the Interdisciplinarity of a Body of Research
|
| Presenter(s):
|
| David Roessner,
SRI International,
david.roessner@sri.com
|
| Alan Porter,
Georgia Institute of Technology,
alan.porter@isye.gatech.edu
|
| Anne Heberger,
National Academies,
aheberger@nas.edu
|
| Alex Cohen,
The National Academies,
ascohen@nas.edu
|
| Marty Perreault,
National Academies,
mperreault@nas.edu
|
| Abstract:
This paper describes a methodology developed by a team charged with evaluating the National Academies Keck Futures Initiative, a 15-year $40 million program to facilitate interdisciplinary research and teaching in the US. Over the past three years, the team has developed and tested promising quantitative measures of the integration (I) and specialization (S) of research outputs, the former essential to evaluating the impact of several Futures Initiative programs. Both measures are based on Thomson-ISI Web of Knowledge Subject Categories (SCs). “I” measures the cognitive distance (dispersion) among the SCs of journals cited in a body of research. “S” measures the spread of SCs in which a body of research is published. Pilot results for samples of researchers drawn from 22 diverse SCs show a surprisingly high level of interdisciplinarity. Correlations between integration and the degree of co-authorship of selected bodies of research show a low degree of association.
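To make the dispersion idea concrete, here is a purely illustrative sketch (not the authors' implementation) of one common way an integration score can be operationalized: given the proportions of a body of research's cited references falling in each Subject Category and a pairwise SC similarity matrix, I = 1 - sum_ij p_i p_j s_ij, so that citations spread across cognitively distant SCs yield higher values. The proportions and similarities below are invented.

import numpy as np

def integration_score(sc_proportions, sc_similarity):
    # I = 1 - p' S p: higher when citations are spread over dissimilar Subject Categories
    p = np.asarray(sc_proportions, dtype=float)
    p = p / p.sum()                          # normalize to proportions
    S = np.asarray(sc_similarity, dtype=float)
    return 1.0 - float(p @ S @ p)

# Invented example: three SCs, where SC1 and SC2 are closely related and SC3 is distant
p = [0.5, 0.3, 0.2]
S = [[1.0, 0.8, 0.1],
     [0.8, 1.0, 0.1],
     [0.1, 0.1, 1.0]]
print(round(integration_score(p, S), 2))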
|
|
Wikis in Evaluation: Evaluating Wikis for Theory Development in a Multi-disciplinary Center
|
| Presenter(s):
|
| P Craig Boardman,
Science and Technology Policy Institute,
pboardma@ida.org
|
| Nathaniel Deshmukh Towery,
Science and Technology Policy Institute,
ndtowery@ida.org
|
| Brian Zuckerman,
Science and Technology Policy Institute,
bzuckerm@ida.org
|
| Abstract:
New, multidisciplinary fields of inquiry are usually inspired by a collective goal to solve a practical problem. In these cases, theory development can prove challenging because of competing epistemological norms across disciplines and because of the focus on the practical application of knowledge to the problem at hand. In this manuscript, we present a case study of an NSF-funded Center charged with developing a theory of learning. This case presents a new opportunity for the evaluation of theory development by exploring the use of “wiki” technology in theory development efforts. Implications for the use of wikis as a data source for evaluating theory development and other scientific outcomes are discussed.
|
| | |
|
Session Title: Evaluation in the Era of Evidence-based Prevention
|
|
Multipaper Session 330 to be held in Royale Conference Foyer on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
|
| Chair(s): |
| Nikki Bellamy,
United States Department of Health and Human Services,
nikki.bellamy@samsa.hhs.gov
|
| Abstract:
This session presents prepared papers that explore three important areas in which prevention evaluation is evolving. The prevention field has seen a proliferation of approaches for participants at different risk levels, policies addressing environmental contexts such as family and community, and strategies for different age levels. This proliferation of approaches and evaluation-based knowledge means that evaluation must become more useful for comparing disparate interventions, and for monitoring and improving the implementation of evidence-based practice. The first paper elaborates the Institute of Medicine framework of universal, selective, and indicated prevention as a framework for inter-relating evaluation findings to identify relative effectiveness and efficiency in meeting common objectives. The second paper uses national surveillance data to create information on the differential severity of consequences related to different substances, and the implications for evaluating different prevention strategies. The third paper elaborates performance evaluation methods designed to ensure strong implementation of evidence-based programs and practice.
|
|
The Institute of Medicine Framework as a Meta-construct for Organizing and Using Evaluation Studies
|
| J Fred Springer,
EMT Associates Inc,
fred@emt.org
|
|
The Institute of Medicine (IOM) categorization of prevention into universal, selective, and indicated populations has been widely adopted in the prevention field, yet the terms are not precisely defined, systematically used to guide evaluation, or uniformly applied in practice. In this paper, the strong potential for the IOM categories to bring a unifying framework to currently fragmented strategies and practices in prevention is articulated and applied. The underlying implications of the IOM categories for identifying and recruiting participants, selecting effective interventions, anticipating attainable positive outcomes, and avoiding potential unintended influences are explicated. The ways in which the framework can help organize and compare evaluation findings of disparate interventions are highlighted, and implications for evaluation design within each category are discussed. Systematically applied, the IOM framework can be a valuable tool for creating a conceptually unified and evidence-based continuum of prevention services.
|
|
A Measure of Severity of Consequences for Evaluating Prevention Policy
|
| Steve Shamblen,
Pacific Institute for Research and Evaluation,
sshamblen@pire.org
|
|
The focus of substance abuse prevention policy is to prevent the harmful health, legal, social and psychological consequences of abuse, yet there is an absence of systematic, comparative research examining the negative consequences that are experienced as a result of using specific substances. Further, techniques typically used for needs assessment (i.e., prevalence proportions) do not take into account the probability of experiencing a negative consequence as a result of using specific substances. An approximated severity index is proposed that estimates the probability of experiencing negative consequences as a result of using specific substances, and is comparable across substances. Data from national surveillance surveys (NSDUH, ADSS) are used to demonstrate these techniques. The findings suggest that substances typically considered priorities based on prevalence proportions are not the same substances that have a high probability of causing negative consequences. The rich policy implications of these findings are discussed.
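The abstract does not give the formula for the approximated severity index, but the contrast it draws, prevalence proportions versus the conditional probability of a negative consequence given use of a substance, can be shown in a minimal sketch. The figures and names below are hypothetical illustrations, not estimates drawn from NSDUH or ADSS.

```python
# Minimal illustration of the distinction the paper draws: a substance can rank
# high on prevalence (share of the population using it) yet carry a relatively
# low conditional probability of negative consequences given use, and vice versa.
# All figures are hypothetical, not NSDUH/ADSS estimates.

def prevalence_proportion(users, population):
    """Share of the population that used the substance."""
    return users / population

def consequence_probability(users_with_consequence, users):
    """Approximate P(negative consequence | use of the substance)."""
    return users_with_consequence / users

population = 100_000
toy_data = {
    # substance: (users, users reporting a negative consequence)
    "substance_A": (30_000, 1_500),   # common, relatively low-consequence
    "substance_B": (2_000, 600),      # rare, relatively high-consequence
}

for name, (users, harmed) in toy_data.items():
    print(name,
          f"prevalence={prevalence_proportion(users, population):.3f}",
          f"P(consequence|use)={consequence_probability(harmed, users):.3f}")
# substance_A: prevalence 0.300, P(consequence|use) 0.050
# substance_B: prevalence 0.020, P(consequence|use) 0.300
```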
|
|
Evaluation Techniques for Effectively Implementing and Adapting Evidence-based Programs and Practice
|
| Elizabeth Harris,
EMT Associates Inc,
eharris@emt.org
|
|
Traditional evaluation designs emphasize generation of knowledge concerning whether interventions work. They focus on measuring outcomes and attributing cause. In an era of evidence-based practice, the emphasis of program evaluation should shift to generating information on program implementation, fidelity to design intentions, and need for adaptation. This paper contrasts the design of evaluation research for knowledge generation with a framework for performance evaluation for program improvement. The author presents the concepts, tools, and products that she has used in specific studies. The major components of the approach are a logic model designed to articulate the elements of an evidence-based approach, the logical organization and analysis of quantitative measures at the core of the performance evaluation system, the important uses of qualitative information to interpret the quantitative data, products that are important to planning and decisions for quality improvement, and ways of working effectively with program staff.
|
|
Session Title: Strengthening Communities Through the Use of Evaluation: Issues and Perspectives
|
|
Multipaper Session 331 to be held in Hanover Suite B on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Kristin Huff,
Independent Consultant,
khuff@iyi.org
|
|
Using Community Indicators for Assessing Progress and Learning From Community Development
|
| Presenter(s):
|
| Jemimah Njuki,
International Centre for Tropical Agriculture,
j.njuki@cgiar.org
|
| Susan Kaaria,
International Centre for Tropical Agriculture,
s.kaaria@cgiar.org
|
| Tennyson Magombo,
Consultative Group on International Agricultural Research,
t.magombo@cgiar.org
|
| Abstract:
Community-Driven Monitoring and Evaluation is an approach that focuses on the development of a system managed and supported by local communities for their own purposes. With facilitation, community members identify their objectives and initiate activities to achieve them. They develop indicators for measuring progress toward those objectives, and they collect, analyze, and use data to assess progress and make decisions. The indicators are primarily local, based on the experiences, perceptions, and knowledge of the local people. This paper describes a process undertaken with six farming communities in Malawi to develop indicators within a project called Enabling Rural Innovation. The community indicators were then aggregated and used to analyze changes across the six communities at a meta-level, using a Likert scale and case studies. The process helps communities assess and learn from their progress and enables facilitators to learn more about community priorities.
|
|
Understanding the Power of Homelessness Prevention: A Look at the Experiences of Those at Risk
|
| Presenter(s):
|
| Mandira Kala,
University of Massachusetts, Boston,
mandira.kala@umb.edu
|
| Jennifer Raymond,
University of Massachusetts, Boston,
jennifer.raymond@umb.edu
|
| Abstract:
Evaluation research is traditionally done by trained researchers rather than by the research subjects themselves. However, in researching the impact of homelessness prevention services through the evaluation of the Homeless Prevention Initiative (HPI), the Center for Social Policy (CSP) opted to fully engage the perspectives of constituents who had experienced homelessness. Through the HPI, $3 million in homelessness prevention funds was invested in nineteen non-profit organizations throughout Massachusetts. In collaboration with an advocacy agency, Homes for Families (HFF), the CSP engaged constituents in the evaluation of homelessness prevention services, since constituents are in a unique position to provide a deeper understanding of the causes and prevention of homelessness. The paper highlights some of the important benefits and challenges of this collaboration, including the ways the different perspectives enriched the analysis; the potential of collaboration and focus groups to empower constituents; and future opportunities for constituent involvement in policy research and evaluation.
|
|
Assessing the Role of Community-driven Evaluation Approaches in Strengthening Community Learning, Social Capital, and Internal Accountability: A Synthesis of Lessons From Kenya and Colombia
|
| Presenter(s):
|
| Susan Kaaria,
Consultative Group on International Agricultural Research,
s.kaaria@cgiar.org
|
| Jemimah Njuki,
Consultative Group on International Agricultural Research,
j.njuki@cgiar.org
|
| Noel Sangole,
International Center for Tropical Agriculture,
2622268@uwc.ac.za
|
| Kenga Kadenge Lewa,
Kenya Agricultural Research Institute,
lewakk@yahoo.com
|
| Luis Alfredo Hernandez,
Consultative Group on International Agricultural Research,
l.hernandez@cgiar.org
|
| Elias Claros,
Consultative Group on International Agricultural Research,
e.claros@cgiar.org
|
| Abstract:
This paper presents results of a study conducted to assess the benefits of community-driven participatory monitoring and evaluation (CD-PM&E) systems to farmer research groups in Kenya and Colombia. The study addressed several questions: Does CD-PM&E enhance group functioning processes, participation, and empowerment? Does it enable communities to have more control over external interventions and community initiatives? Does it improve the execution of community projects? Results comparing groups with and without CD-PM&E found that: (1) participation in group meetings and activities was more consistent among groups with CD-PM&E than among those without; (2) groups with CD-PM&E demonstrated higher levels of trust and joint decision-making; (3) groups with CD-PM&E were more aware of their group objectives and were more involved in implementing and managing their group projects; and (4) neither set of groups showed significant improvement in financial accountability or transparency in the management of group funds.
|
| | |
|
Session Title: Creating a Culture of Process Improvement in the Human Services: An Application of Lean Philosophy
|
|
Demonstration Session 333 to be held in International Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Human Services Evaluation TIG
|
| Presenter(s): |
| Joyce A Miller,
KeyStone Research Corporation,
joycem@ksrc.biz
|
| Tania Bogatova,
KeyStone Research Corporation,
research@ksrc.biz
|
| Bruce Carnohan,
KeyStone Research Corporation,
brucec@kbsc.biz
|
| Abstract:
This workshop offers practical process improvement tools for human service personnel and the consultants working with them. It builds their capacity by offering a roadmap for applying lean concepts and methods, in which current-state processes are mapped and analyzed to identify areas of waste with respect to time, expense, and material. Once areas of waste and undesirable effects of the current operating processes are identified, solutions are developed and a future state is designed with processes that eliminate waste while remaining effective in meeting program goals and objectives. This approach provides a way to rethink evaluation methods, and it offers practical tools to improve the implementation of processes that are robust and ensure the achievement of intended outcomes.
|
|
Session Title: Performance Measurement: Getting to Yes With Grantees and Partners
|
|
Panel Session 335 to be held in Versailles Room on Thursday, November 8, 9:35 AM to 11:05 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Thomas Chapel,
Centers for Disease Control and Prevention,
tchapel@cdc.gov
|
| Abstract:
A number of federal programs are implemented by networks of grantees and frontline participants. Developing and implementing performance measures is a formidable process because evaluation skills and the availability of data sources vary from grantee to grantee, as does willingness to divert time and resources to performance measurement. This panel presents three CDC programs that have trod the performance measurement path and are encountering and solving problems of developing and implementing measures with their partners and grantees. Presenters will discuss their programs, the involvement of their grantees and partners in developing performance measurement approaches, and the need for indicators. The process for developing and implementing indicators, decisions on where to impose uniformity or grant autonomy in indicators and data collection, and the most effective methods of obtaining grantee participation will be discussed. Transferable lessons from CDC experience will be identified.
|
|
How Do You Keep It Going? Steps That One Centers for Disease Control and Prevention Program Takes to Keep Performance Measures Relevant
|
| Betty Apt,
Centers for Disease Control and Prevention,
bapt@cdc.gov
|
| Dayne Collins,
Centers for Disease Control and Prevention,
dcollins@cdc.gov
|
|
In 2004, CDC's Division of STD Prevention implemented performance measures for the 65 state and local health departments it funds. Among the challenges CDC has faced since implementation are (1) verifying the validity of the data project areas submit, (2) keeping the measures relevant as programs and disease epidemiology change, and (3) determining the best way to help project areas improve their performance. This presentation will describe techniques CDC uses to address these issues, such as internal data analysis to identify trends, 'Learning Tours' to obtain qualitative input from project areas and to validate data collection techniques, regularly scheduled consultations with project area representatives, the application of specific criteria to assess the merit of each measure, and dissemination of 'lessons learned' and best practices.
|
|
|
Getting From War Stories to Science: Developing Performance Measures in Public Health Emergency Preparedness
|
| Sue Lin Yee,
Centers for Disease Control and Prevention,
sby9@cdc.gov
|
|
Increasingly, government programs are utilizing performance measures to demonstrate fiscal and programmatic accountability. In public health emergency preparedness, developing valid and reliable performance measures that generate accurate and comparable data for reporting at the grantee and aggregate levels is made more difficult by the field's expanding evidence base and divergent expert opinion on promising practices. Since 2004, CDC's Public Health Emergency Preparedness cooperative agreement, which funds 62 grantees to build capacity and capability in responding to public health emergencies, has worked in collaboration with federal agencies, national partners, and grantees to develop and implement such performance measures. The presenter will discuss the process taken to develop useful and feasible measures and provide lessons learned for other programs seeking to travel the performance measurement road. Disclaimer: The findings and conclusions in this abstract are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention.
| |
|
Performance Measurement in Centers for Disease Control and Prevention's Division of Diabetes Translation: Some Early Lessons Learned
|
| Kristina Ernst,
Centers for Disease Control and Prevention,
kernst1@cdc.gov
|
| David Guthrie,
Centers for Disease Control and Prevention,
dguthrie@cdc.gov
|
| Richard Hoffman,
Centers for Disease Control and Prevention,
rhoffman@cdc.gov
|
| Wayne Millington,
Centers for Disease Control and Prevention,
wmillington@cdc.gov
|
| Clay Cooksey,
Centers for Disease Control and Prevention,
ccooksey@cdc.gov
|
|
The Division of Diabetes Translation's (DDT) mission is to eliminate the preventable burden of diabetes through leadership, research, programs, and policies that translate science into practice. We describe performance measurement efforts in DDT's Program Development Branch, along with key challenges and next steps, emphasizing implications for our state Diabetes Prevention and Control Programs and our attempts to assess the contributions of their efforts. We will describe lessons learned and recommendations for others considering developing a public health performance measurement system.
| |