|
Session Title: Mixed Methods Evaluation of Programs in Schools
|
|
Multipaper Session 201 to be held in Avalon A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Mixed Methods Evaluation TIG
and the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Linda Meyer,
Claremont Graduate University, linda.meyer@cgu.edu
|
|
A Mixed Methods Approach to Evaluate a School Assembly Designed to Reduce Bullying among Elementary School Students
|
| Presenter(s):
|
| Michele Mouttapa, California State University, Fullerton, mmouttapa@fullerton.edu
|
| Leslie Grier, California State University, Fullerton, lgrier@fullerton.edu
|
| Abstract:
Bullying at elementary schools may be more effectively prevented and/or reduced by school-wide interventions. The purpose of this study was to evaluate 'Bully for You,' a 50-minute show with actors and music, designed to prevent bullying, increase anger management, and promote tolerance towards others. Fourth and fifth grade students attending two elementary schools in Southern California watched the 'Bully for You' performance at their school. The next day, participants completed a self-report survey that assessed their demographic characteristics, opinions about and recall of program components, and their experiences with bullying, victimization, and peer relations. A subgroup of the students (n=81) also participated in a classroom discussion about their thoughts regarding bullying and the show. Follow-up surveys were conducted two months after the initial survey. This presentation will highlight findings from this study and suggest implications for future implementation.
|
|
Discussing the Set of Values and the Importance of the Mixed Methods Evaluation of the Mathematics Multicourse Continuing Education Program
|
| Presenter(s):
|
| Eliane Birman, Roberto Marinho Foundation, eliane.birman@frm.org.br
|
| Abstract:
The purpose of this paper is to demonstrate the evaluation of the Mathematics Multicourse Continuing Education Program, as well as to discuss the set of values (regarding the indicators, means of verification, etc.) considered when designing the evaluation project, and the importance of the evaluation itself both to society and to improving the educational program.
The ultimate goal of the Mathematics Multicourse Continuing Education Program, which was launched in 2008 in a partnership between the Department of Education of the state of Espírito Santo and Roberto Marinho Foundation, is to train teachers and managers from public high schools in order to improve the Mathematics learning-teaching process.
The evaluation of this program focused on identifying its contributions to the cognitive performance in Mathematics of the students from public high schools, and to the employment of new educational practices by the teachers.
|
|
A Case Study of a Mixed Methods Program Evaluation Engaged in Integrated Data Analysis
|
| Presenter(s):
|
| Daniela Schiazza, Loyola University, Chicago, dschiaz@luc.edu
|
| Leanne Kallemeyn, Loyola University, Chicago, lkallemeyn@luc.edu
|
| Ann Marie Ryan, Loyola University Chicago, aryan3@luc.edu
|
| Abstract:
This paper presents findings from a case study of a mixed methods program evaluation that engaged in integrated data analysis. The sample for this case study is a program evaluation of the U.S. Department of Education Teaching American History grant, the American Dreams Project. This paper will discuss: (1) how different study components (e.g., mixed methods purposes, research questions, design, etc.) influenced integrated data analysis, (2) why specific integrated data analysis techniques produced or did not produce useful results, and (3) how the values, and by extension the mental models, of evaluation team members shaped the development of the evaluation and its integrated analysis.
|
|
Methods and Findings from a Comprehensive Mixed-Method Evaluation of Communities In Schools
|
| Presenter(s):
|
| Allan Porowski, ICF International, aporowski@icfi.com
|
| Heather Clawson, ICF International, hclawson@icfi.com
|
| Abstract:
The Communities In Schools National Evaluation is a multi-level, multi-method study that was designed to identify the most successful strategies for preventing students from dropping out of school. The five-year study includes secondary data analyses, a quasi-experimental study, eight case studies, a 'natural variation' study, an external comparison study, and three randomized controlled trials. At the conclusion of the five-year evaluation, all of the findings were compiled so that the overall impact of the CIS model of integrated student services could be analyzed and replicated. In this presentation, we present both methods and findings from the comprehensive five-year evaluation, and demonstrate how the multiple components of a comprehensive evaluation design can be brought together to inform both policy and practice.
|
|
An Evaluation of the Black Male Leadership Development Institute (BMLDI): An Open Systems Approach
|
| Presenter(s):
|
| Victoria Hill, Numeritics, tori.hill@numeritics.com
|
| Tayo Fabusuyi, Numeritics, tayo.fabusuyi@numeritics.com
|
| Abstract:
We present an evaluation framework for a year-long program designed to expose African American males in grades 9-12 in the greater Pittsburgh, PA area to leadership and mentoring opportunities. The Black Male Leadership Development Institute (BMLDI) provides a supportive learning environment and offers a challenging curriculum with a view towards positively impacting participants' values and perspectives. While evaluation frameworks for leadership development programs exist, few if any have been applied to minority youth leadership development programs. Our evaluation employs a modified form of the EvaluLEAD framework developed by Grove et al. (2005), and is informed by the program goals and the manner in which program events are sequenced. This approach allows us to move beyond a sole focus on program outputs to a more comprehensive framework that takes into consideration the broader influences that often affect the outcomes of programs of this nature.
|
|
Session Title: Data Feedback Loops in Educational Settings: Intervention Process and Student Progress Monitoring Data Visualization Tools and Procedures
|
|
Multipaper Session 203 to be held in California A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Charles Reichardt, University of Denver, creichar@du.edu
|
| Discussant(s): |
| Charles Reichardt, University of Denver, creichar@du.edu
|
| Abstract:
This session demonstrates the management, implementation, and evaluation of a promising high school intervention ("graduation coaching") for students at risk of academic failure that has the potential to be implemented widely by the national GEAR UP community and by other schools and school systems to promote retention and graduation. The discussion illustrates how careful project planning strategies and well-designed, innovative formative evaluation tools can aid and enhance implementation success and sustainability. We discuss intervention models; data gathering, analysis, and utilization techniques; and new visualization tools for monitoring student educational progress in middle and high school. Among these are site planning and continuous improvement process (CIP) models; intervention tools such as data-based student diagnostic and referral methods, computerized daily activity and intervention logs, and case notes; and dashboards utilizing readily available school student information system data for visualizing student performance histories and recent progress in core and academic subject areas and diagnosing strengths and weaknesses.
|
|
The Continuous Improvement Process (CIP) Model Applied to Educational Intervention Projects
|
| Shelly Carpenter, Western Michigan University, shelly.carpenter@wmich.edu
|
| Pamela Zeller, Western Michigan University, pamela.zeller@wmich.edu
|
|
The Continuous Improvement Process (CIP) model is based on three major components: legal, financial, and compliance aspects; intervention implementation and program activities; and formative and summative evaluation. Applied to large or small educational intervention projects, these components are not discrete elements; rather, they function interdependently at all phases of a project and provide the framework for overall project management. Key to the success of the model is feedback from formative evaluation of the process itself and input from all stakeholder groups, allowing the three components to be assessed constantly and the feedback used for improvements to ensure reliability, efficiency, and project effectiveness. Five assessment tools are particularly useful as aids in managing these components: the Site Plan, Technical Reports, Activity and Intervention Reports, Daily Logs, and Case Notes. This presentation demonstrates these and other assessment tools used in CIP, illustrating how they provide structure and guidance for strategic management and successful evaluation.
|
|
Formative Evaluation Tools for Assessing School Intervention Processes
|
| Pamela Zeller, Western Michigan University, pamela.zeller@wmich.edu
|
| Shelly Carpenter, Western Michigan University, shelly.carpenter@wmich.edu
|
|
Formative evaluation tools developed for the GEAR UP/RTI project to determine the effectiveness of graduation coaching in a rural high school are presented and generalized for utility in assessing typical school-based interventions. Goals included identifying at-risk students in need of coaching; improving academic performance; lowering the drop-out rate; and promoting graduation. Evaluation instruments included: Student Referral Forms; Student Plans assessing each student's academic history, strengths, and weaknesses; Daily Activity Log databases recording student intervention information; the coach's Student Case Notes; and Dashboard computer visualizations of student academic progress. These provided valuable information regarding the impact of the services on students and were useful for continuous formative guidance of the project. Particular attention was given to factors related to urban versus rural implementation of the graduation coaching program. Bi-monthly meetings conducted with the project team, including the coach, identified developing problems and barriers and fine-tuned the coaching intervention in this rural setting.
|
|
Modeling and Visualizing Student Performance Data: Academics and Behaviors
|
| Warren Lacefield, Western Michigan University, warren.lacefield@wmich.edu
|
| Brooks Applegate, Western Michigan University, brooks.applegate@wmich.edu
|
|
This study describes new data visualization tools for data-driven decision-making in educational environments. The methodology involves a well-defined, data-based diagnostic identification and selection procedure for choosing students at risk of academic failure for appropriate academic support services. Dashboards displaying longitudinal performance trajectories covering the middle and high school years, disaggregated by subject and combined with behavior, attendance, and other information, can serve diagnostic functions by displaying history and progress not only in aggregate core studies but also in math, science, language arts, and history/social studies subject areas. They also provide formative and summative evaluation functions, allowing predictions to be compared directly to outcomes. We present these dashboards in the context of the graduation coaching intervention, where students classified as at-risk formed an initial caseload and outcome projections were updated and provided to the coach on a quarterly basis, together with weekly electronic gradebook data, allowing individualized intervention adjustment and caseload refocusing activities.
|
|
Analytical Basis for Modeling of Student Performance Data: Validity, Automation, Updating, and Interactive Evaluation Processes
|
| Brooks Applegate, Western Michigan University, brooks.applegate@wmich.edu
|
| Warren Lacefield, Western Michigan University, warren.lacefield@wmich.edu
|
|
New visualization tools and algorithmic procedures use readily available school information system data and "dashboards" for visualizing student performance histories and recent progress to identify students who appear at substantial academic risk, what some of those risks are, and who appears likely to benefit from specific academic support service interventions. This paper examines the theoretic and analytic basis for employing these procedures. Key research questions include: Can humans make valid predictions about high school performance based on visualizations of middle school data? Can pattern-matching algorithms match human judgments? Can humans discriminate and diagnose well enough to guide individualized intervention planning? Can this process be automated? Most importantly, do such models have strong predictive validity and growth measurement and evaluation potential when applied to actual outcome data with and without intervention efforts (e.g., using continuous or discontinuous regression models)?
|
|
Session Title: Reflections on the Canadian Evaluation Society's Professional Designation Program: One Year Into the Program
|
|
Panel Session 204 to be held in California B on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the AEA Conference Committee
|
| Chair(s): |
| James W Altschuld, The Ohio State University, altschuld.1@osu.edu
|
| Abstract:
CES has created a self-directed process among program evaluators and users of program evaluation that allows us to better define what we do and how it should be done. It may seem extraordinarily ambitious for CES to establish competencies, standards, and ethical rules for a field as vast as program evaluation, and to set a specific bar for someone to be recognized as a Credentialed Evaluator (CE), but we embarked on the Professional Designation Program (PDP) knowing full well that its future depended on the reaction of both program evaluators and users of program evaluation. The PDP is now well underway and there are a number of CEs. We have started to see Requests for Proposals being issued in which the lead evaluator must be credentialed by CES. We will visit some of the operational issues, present the perspectives of Credentialing Board members, and discuss responses from both Canadian and international stakeholders.
|
|
Canadian Evaluation Society (CES) Professional Designations Program in Operation
|
| Keiko Kuji-Shikatani, Ontario Ministry of Education, keiko.kuji-shikatani@ontario.ca
|
|
The Credentialed Evaluator (CE) designation is designed to define, recognize, and promote the practice of ethical, high quality, and competent evaluation in Canada through a program for professional designations. Keiko Kuji-Shikatani is the first Vice President of the CES Professional Designations Program and has been instrumental in establishing the CE designation for the Canadian Evaluation Society (CES). Launched in May 2010, it is the world's first professional designation for evaluators and has generated worldwide interest. The CE is about providing a path for new evaluators and a clearer direction for more established evaluators in their ongoing development. It is about allowing CEs to be recognized for what they have achieved to date and for their ongoing commitment to learn and improve program evaluation. The program is part of an effort to create an environment in which CES can spell out what good evaluation entails and help CES members get there.
|
|
|
CES Professional Designations Program's Credentialing Board
|
| Heather Buchanan, Jua, Management Consulting Services, hbuchanan@jua.ca
|
|
The work of the Credentialing Board (CB) has been a vital piece in moving the Credentialed Evaluator designation from theory to practice. While the CE designation was defined by the CES National Council, the application of that definition has been a 'challenging work in progress.' A 24-member CB has responsibility for reviewing applications and making decisions on the CE. Heather Buchanan, a CB member, will speak about the decision process, how it is undertaken, and the challenges encountered in this newly created decision-maker role. Developing good judgment is very much a function of experience, and with one year of experience the CB is finding its footing in this important function. Rich exchange and discussion are contributing to building a common understanding and consistent application of the CE designation, but thresholds for acceptable applications are not formulaic. Opportunities and challenges encountered in this first year of operation will be shared.
|
The Canadian Response to the CES Professional Designations Program
|
| Martha McGuire, Cathexis Consulting Inc, martha@cathexisconsulting.ca
|
|
Martha McGuire, CES President, will look at who has become a CE in Canada: where do they work, and why did they decide to apply? She will discuss the response from Canadian evaluators and from organizations that use evaluators, drawing on communication she has received as president as well as on key informant interviews. She will include the perspectives of those who are still questioning the wisdom of CES going in this direction. The presentation will also provide the perspective of the Credentialed Evaluators themselves. What difference does the designation make in the way CEs approach their practice? What difference does it make to those who engage them to do evaluations? And for those who have not yet applied, what are the inhibitors and what are the draws?
|
The International Response to the CES Professional Designations Program
|
| Jean A King, University of Minnesota, kingx004@umn.edu
|
|
The CES Professional Designation process is the first program established by a professional association anywhere in the world to credential evaluators, an important next step in evaluation's continuing development as a profession. Jean King, one of the authors of the Essential Competencies for Program Evaluators upon which the CES credentialing is based, will reflect on this development. First, she will ground the work in the context of the historical development of credentials, tracing the origin of licensure in three other professions (law, medicine, and accounting) where the paths to licensure and accreditation of training programs provide a counterpoint to current CES work. Second, she will reflect on the international implications of the CES Professional Designation work, including reactions from South Africa, where there is governmental pressure to move toward licensure, and the United States, where the American Evaluation Association, after a gap of over a decade, is again discussing the possibility of an evaluator credential.
|
Session Title: Metaevaluation as a Values Framework to Enhance Evaluation Use: An International Example
|
|
Panel Session 205 to be held in California C on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Evaluation Use TIG
|
| Chair(s): |
| Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
|
| Abstract:
This session will present an international example of metaevaluation. The Paris Declaration (PD), endorsed in 2005 by international leaders, aims to improve the quality and impact of development aid. An independent, multi-phased, cross-country and synthesis evaluation of the PD was completed in mid-2011, as was an independent metaevaluation. This session reports on that metaevaluation. The Development Assistance Committee (DAC) of the OECD established standards for evaluation in 2010. Those standards were the basis for the metaevaluation. The PD metaevaluation offers an opportunity to compare the DAC standards and the Joint Committee standards as alternative frameworks for valuing and evaluating evaluations. In this session, the person who led the metaevaluation will describe the processes and methods used to evaluate the Paris Declaration evaluation and discuss the issues raised based on international values and standards, with reactions from those who commissioned the metaevaluation and from the evaluation leader whose evaluation was evaluated.
|
|
Metaevaluation of The Paris Declaration Evaluation: Processes and Methods in Support of Credibility and Utility
|
| Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
|
|
The Paris Declaration (PD), endorsed in 2005 by international leaders, aims to improve the quality and impact of development aid. An independent, multi-phased and cross-country evaluation of the PD was completed in mid-2011, as was an independent metaevaluation. This session reports on that metaevaluation. The Development Assistance Committee (DAC) of the OECD established standards for evaluation in 2010. The DAC standards informed the evaluation of the PD and were the basis for the metaevaluation of that evaluation. The Paris Declaration metaevaluation offers an opportunity to compare and contrast the DAC standards and the Joint Committee standards as alternative value frameworks for evaluating evaluations, especially with regard to utility. This session will describe the processes and methods used to evaluate the Paris Declaration evaluation, and the issues raised about undertaking a metaevaluation based on international values and standards.
|
|
|
Rationale and Purposes for Evaluating the Evaluation of the Implementation of the Paris Declaration
|
| Niels Dabelstein, Danish Institute for International Studies, nda@diis.dk
|
| Ted Kliest, Netherlands Ministry of Foreign Affairs, tj.kliest@minbuza.nl
|
|
An International Reference Group of key stakeholders was established to provide strategic guidance to the evaluation of the Paris Declaration (PD). The International Reference Group was composed of members of the DAC Network on Development Evaluation, representatives from partner countries, and representatives from civil society. The Reference Group appointed a small Management Group tasked with day-to-day management of the evaluation. The Reference Group and Management Group were supported by a small Evaluation Secretariat headquartered in Copenhagen. In this session, the leaders of the Management Group and the Evaluation Secretariat will discuss and explain their decision to commission an independent evaluation of the PD evaluation, including their views on the process used and the utility of the metaevaluation.
|
The Experience of and Perspectives on Having One's Evaluation Independently Evaluated (Metaevaluation)
|
| Bernard Wood, International Development & Strategies, bwood@magma.ca
|
|
The second phase of the evaluation of the implementation of the Paris Declaration comprised 28 evaluations of donors and developing countries. It was conducted during 2010-2011, with the Synthesis Report published in June 2011. The focus of phase 2 was on the effects of the Paris Declaration on aid effectiveness and development results. In this presentation, the leader of the independent evaluation team will reflect on what it was like to have a metaevaluation conducted in parallel with the evaluation itself, and on reactions to the findings of the evaluation of the evaluation.
|
Session Title: The Agent, the Provocateur, the Activist: Creative Evaluators, Surfacing Undercover Values in a Rubric Revolution
|
|
Multipaper Session 206 to be held in Pacific A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Presidential Strand
|
| Chair(s): |
| Jennifer C Greene, University of Illinois at Urbana Champaign, jcgreene@illinois.edu
|
| Discussant(s): |
| Jane Davidson, Real Evaluation, jane@realevaluation.com
|
| Abstract:
What happens when evaluation practice meets creativity? Sweeping the practice of evaluation in many parts of the world is a rubrics revolution. The application of this evaluation-specific methodology in many diverse contexts has been the impetus and vehicle for evaluators to acknowledge, appreciate, and use creative approaches to surface undercover or implicit values, often previously hidden or neglected by more traditional evaluation approaches, tools, and practice.
In this session, presenters will demonstrate some of the ways they have drawn on their own creativity, as well as the creative expressions of their evaluands, to manage and moderate the tensions between competing and differing values and priorities. This blend of evaluative rubrics and creativity is helping to create evaluation frameworks, rubrics, and judgments that are accepted, appreciated, and considered legitimate in many diverse contexts.
Through a range of creative forms of expression (poetry, music, skits, etc.), this session will seek to demonstrate the power of cultural and creative forms of expression to surface and make visible the values of multiple and diverse perspectives in evaluation contexts.
|
|
The Agent
|
| Kate McKegg, Kinnect Group, kate@kinnect.co.nz
|
| Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
|
|
The first presentation, by Kate McKegg and Tessie Catsambas, will focus on the personalized nature of the roles of the evaluator. Using examples of music, art, poetry, and pictures, it will explore the notion of evaluators as culturally creative beings, grappling with the messiness of the lived realities of evaluands and juggling the multiple demands of diverse stakeholders, all the while intentionally seeking to play a part in change or transformation. It will illustrate the ways in which creativity can situate and position evaluation methodology as a valued practice and legitimate process for change in our communities and evaluands.
|
|
The Provocateur
|
| Kataraina Pipi, FEM Ltd, kpipi@xtra.co.nz
|
| Julian King, Kinnect Group, julian@kinnect.co.nz
|
|
The second presentation, by Kataraina Pipi and Julian King, will address the notion of the evaluator as an agent of change, precipitating learning, new insight, and new ways of knowing. It will explore how different creative forms of expression (i) make visible the often undercover values of everyday experience and life in our communities and (ii) give practical and lived expression to these values in evaluation contexts. It will demonstrate how creative forms of expression can reposition evaluation as a worthwhile and legitimate process for evaluands and communities, often previously hostile to evaluation.
|
|
The Activist
|
| Nan Wehipeihana, Kinnect Group, nan@kinnect.co.nz
|
|
The third presentation, by Nan Wehipeihana and Thomaz Chianca, will draw from experiences in communities where rubrics populated with the lived values of those communities have been used to make carefully deliberated and collaborative judgments about the 'goodness' and value of programs and services. It will illustrate how, by drawing on the many cultural, historical, and contemporary forms of expression in these communities, including painting, weaving, song, and dance, the evaluative meanings that are subsequently made can be seen as powerful expressions of shifts in the locus of power, from outside experts to within communities.
|
|
Session Title: Master Teacher Series: Practical Issues and Step-by-Step Guidelines for Using Propensity Scores
|
|
Demonstration Session 208 to be held in Pacific C on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Presenter(s): |
| M H Clark, University of Central Florida, mhclark@mail.ucf.edu
|
| Vajeera Dorabawila, New York State Office of Children and Family Services, vajeera.dorabawila@ocfs.state.ny.us
|
| Abstract:
Quasi-experiments are excellent alternatives to true experiments when random assignment is not feasible. Unfortunately, causal conclusions cannot easily be drawn from results that are potentially biased by selection. Some advances in statistics that attempt to reduce selection bias in quasi-experiments use propensity scores, the predicted probability that units will be in a particular treatment group. Because propensity score research is still relatively new, many applied social researchers are not familiar with the methods, applications, and conditions under which propensity scores should be used. Therefore, the proposed demonstration will present an introduction to computing and applying propensity scores using SPSS and Stata. The demonstration will include:
1. an introduction to propensity score adjustment methods;
2. how propensity scores can be used to make statistical adjustments using matching, stratifying, weighting, and covariate adjustment;
3. examples using stratification and matching (a minimal illustrative sketch follows below); and
4. a discussion of known limitations and problems when using propensity score adjustments.
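As a companion to item 3, the following is a minimal, hedged Python sketch of the same ideas the demonstration covers in SPSS and Stata: a propensity score is estimated as the predicted probability of treatment from a logistic regression on observed covariates, then used for stratification and simple nearest-neighbor matching. All data and variable names below are simulated and hypothetical; this is an illustration, not the presenters' implementation.

```python
# Minimal, hypothetical sketch: estimate propensity scores with logistic
# regression, then stratify on quintiles and do 1:1 nearest-neighbor matching
# (with replacement). Simulated data; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 3))                          # observed covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # selection depends on X

# 1. Propensity score: predicted probability of treatment given covariates.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2. Stratification: assign units to propensity score quintiles, within which
#    treated and control outcomes could then be compared.
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))

# 3. Matching: pair each treated unit with the control whose score is closest.
treated_idx = np.flatnonzero(treat == 1)
control_idx = np.flatnonzero(treat == 0)
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, nearest = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[nearest.ravel()]

print("stratum sizes:", np.bincount(strata))
print("example matched pairs:", list(zip(treated_idx[:5], matched_controls[:5])))
```

In practice, covariate balance would be checked within strata or matched samples before estimating treatment effects, which connects to the limitations discussed in item 4.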
|
|
Session Title: Practical Applications of the AEA Public Statement on Cultural Competence in Evaluation
|
|
Think Tank Session 209 to be held in Pacific D on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the AEA Conference Committee
|
| Chair(s): |
| Cindy A Crusto, Yale University, cindy.crusto@yale.edu
|
| Presenter(s):
|
| Denice Cassaro, Cornell University, dac11@cornell.edu
|
| Arthur Hernandez, Texas A&M University Corpus Christi, art.hernandez@tamucc.edu
|
| Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
|
| Veronica Thomas, Howard University, vthomas@howard.edu
|
| Discussant(s):
|
| Katrina Bledsoe, Education Development Center Inc, katrina.bledsoe@gmail.com
|
| Jenny Jones, Virginia Commonwealth University, jljones2@vcu.edu
|
| Karen E Kirkhart, Syracuse University, kirkhart@syr.edu
|
| Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
|
| Elizabeth Whitmore, Carleton University, ewhitmor@connect.carleton.ca
|
| Abstract:
In 2005, the AEA Board approved a proposal to craft a Public Statement on Cultural Competence in Evaluation, an outgrowth of the Building Diversity Initiative (BDI). The BDI was an effort of AEA and the W.K. Kellogg Foundation, begun in 1999, to address the needs and expectations concerning evaluators working across cultures and in diverse communities. A Task Force of AEA's Diversity Committee developed a statement that affirms the significance of cultural competence in evaluation and identifies essential practices for cultural competence. This statement was, by design, not a "how to" guide. The translation of principles into practice remains an important next step. This session discusses the practical application of concepts in the statement. The session first provides an overview of the statement, followed by panelists who offer practice examples illustrating aspects of the statement, and concludes with a group discussion in which attendees reflect on the statement, the presentations, and their own work.
|
| In a 90-minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Big Brother and Local Program Evaluation: Are the Values Aligned? |
|
Roundtable Presentation 210 to be held in Conference Room 1 on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Presenter(s):
|
| Craig LeCroy, LeCroy & Milligan Associates Inc, craig.lecroy@asu.edu
|
| Lena Malofeeva, LeCroy & Milligan Associates, lenam@lecroymilligan.com
|
| Darcy McNaughton, LeCroy & Milligan Associates, darcy@lecroymilligan.com
|
| Darlene Lopez, LeCroy & Milligan Associates, darlene@lecroymilligan.com
|
| Kerry Milligan, LeCroy & Milligan Associates, kerry@lecroymilligan.com
|
| DeeDee Avery, LeCroy & Milligan Associates, deed@lecroymilligan.com
|
| Abstract:
This roundtable will address issues concerning how the federal government and large contracted evaluation companies are impacting local program evaluations. Recent grant proposals from the Feds have required "rigorous studies," and often a large evaluation company (e.g., Mathematica) is hired to oversee all evaluation activities. The intent of these policy changes appears to be an effort to obtain more rigorous findings. How do such changes impact program evaluation at the local level? Are local evaluators losing decision-making authority, and at what cost? Are there conflicting values in how these entities perceive "evaluation" and how local program evaluations perceive their mission? In what ways can AEA support efforts at both the national level and the efforts of local program evaluators? This roundtable will address all these issues and more.
|
| Roundtable Rotation II:
Quality in Advocacy Evaluation: Values and Rigor |
|
Roundtable Presentation 210 to be held in Conference Room 1 on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Presenter(s):
|
| Rhonda Schlangen, Action Impact Evaluation, rschlangen@yahoo.com
|
| Lily Zandniapour, Innovation Network, lzandniapour@innonet.org
|
| Abstract:
This session is designed to facilitate a constructive discussion about rigor in social science evaluation, specifically for advocacy efforts. It will challenge the notion that for a social science evaluation to be rigorous it must use a specific design or methodology. The session will contribute to the broader evaluation field by suggesting realistic, more holistic definitions of rigor that focus on embedding quality research standards and practices in each step of the evaluation process. The roundtable will be framed by an examination of the context in which discussions in the field about rigor are taking place. Using real advocacy evaluation cases and focusing on the purpose of evaluation to frame the discussion, presenters will explore the importance of a broader view of rigor. With guiding questions posed by presenters, participants will contribute ideas and experiences about challenges and strategies to reinforce rigor when evaluating advocacy, policy, and systems change initiatives.
|
| In a 90-minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Monitoring and Evaluation Following a Harm Reduction Philosophy |
|
Roundtable Presentation 211 to be held in Conference Room 12 on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s):
|
| Adam Viera, Harm Reduction Coalition, viera@harmreduction.org
|
| Abstract:
Monitoring and evaluation are undoubtedly critical elements of successful public health programming. However, there is little understanding of how to develop a process and outcome monitoring and evaluation infrastructure that reflects harm reduction principles and interventions. There is a need for a monitoring and evaluation framework that reflects the harm reduction philosophy at all levels.
The facilitators will lead a discussion of the principles undergirding a harm reduction framework for monitoring and evaluation. This will transition into a discussion of best practices for harm reduction monitoring and evaluation, as well as next steps for continued dialogue around this important issue.
|
| Roundtable Rotation II:
Program Evaluation Activities in Tuberculosis Control Programs in the United States: The Value of Conducting a Large-scale Assessment |
|
Roundtable Presentation 211 to be held in Conference Room 12 on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s):
|
| Silvia Trigoso, Centers for Disease Control and Prevention, strigoso@cdc.gov
|
| Abstract:
The Centers for Disease Control and Prevention, Division of Tuberculosis Elimination, supports grantees with evaluation technical assistance (TA) and consultation to strengthen evaluation capacity among tuberculosis (TB) control and prevention programs. Assessment of current status, progress, and challenges in implementing program evaluation (PE) activities provides valuable information and guidance for grantees, stakeholders, and decision-makers. This presentation focuses on the process used to review evaluation plan development, proposed evaluation activities, and progress reports submitted by 68 grantees (50 states, 10 big cities, and 8 islands/territories) for the 2010-2014 funding cycle. The design, methods, and tools used to evaluate an array of evaluation plans from TB control programs of varying evaluation capacity and financial constraints will be described. Challenges and lessons learned, translatable to other disciplines and public health programs, will be shared. The results of this large-scale assessment provide an opportunity to shape decisions, enhance TA, and expand grantees' evaluation capacity.
|
|
Session Title: Evaluating the Impacts of International Democracy Projects and Programs
|
|
Multipaper Session 212 to be held in Conference Room 13 on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Krishna Kumar, United States Department of State, kumark@state.gov
|
| Abstract:
This panel will focus on the complex and controversial topic of evaluating international democracy promotion projects and programs. Such interventions are often, though not always, different from other social and economic interventions in terms of their nature, political settings, and impacts. Most of them focus on institution building rather than on service delivery. They affect, and can unsettle, existing power relations and are therefore often viewed with suspicion, if not hostility, by many governments. Above all, their effects are not always visible, at least in the short run. The panel will present three original papers on the subject from senior officials of the State Department, the National Democratic Institute, and the International Republican Institute.
|
|
Applicability of Experimental and Quasi-Experimental Designs to Democracy Promotion Projects and Programs
|
| Krishna Kumar, United States Department of State, kumark@state.gov
|
|
A recent study "Improving Democracy Assistance: Building Knowledge through Evaluation and Research" undertaken by National Research Council has made a case for using experimental and quasi-experimental designs in democracy evaluations. The paper questions this recommendation and identifies a set of conceptual, methodological and practical obstacles which make it extremely difficult to use these designs for most democracy interventions. It argues that measuring of democracy assistance by using these designs can be even counterproductive to the extent many effects of interventions cannot be quantified. Moreover, such designs may result in ignoring unintended or long-term effects of democracy interventions. The paper stresses the need for developing alternative approaches to impact evaluations which generate reliable and relevant information about the outcomes while promoting indigenous ownership of democracy interventions.
|
|
Capturing Indigenous Democratic Values Through Participatory Evaluation
|
| Linda Stern, National Democratic Institute, lstern@ndi.org
|
|
"Democracy" is not a static objective, but a dynamic socio-cultural and political process that reflects the values of the people practicing it. Evaluating democracy promotion requires a delicate balance between measuring the effectiveness of largely western democratic projects, and evaluating the creative agency of non-western citizens in adapting the project to their own needs, values and contexts. In striking this balance, the evaluator must understand the outcomes of a project, as well as their relationship to newly emerging political actors attempting to transform their own institutions and societies. Drawing from NDI's experience in implementing participatory baseline and midterm evaluations, this paper explores the relationship between democratic values, indigenous ownership and evaluation designs. While not eschewing experimental or quasi-experimental designs, the author advocates a rigorous collaborative approach for understanding the complexities of democratic change.
|
|
Assessing the Impact of Democracy and Governance Programs Using Qualitative Methods
|
| Jonathan Jones, International Republican Institute, jdjones@iri.org
|
|
Experimental-design impact evaluations typically use quantitative methods for research. Such research designs, however, cannot capture the nuance and detail necessary to understand why a program did or did not achieve impact. It is also difficult for such methods to capture how a program caused, or responded to, any number of unanticipated factors, including interaction and spillover effects, unintended consequences (positive and negative), and paradigm shifts such as a coup d'état, economic crisis, or natural disaster. Drawing on two years of qualitative research on IRI's party development programs in eight countries, this paper illustrates that qualitative methods are well suited to capture the real story of IRI's programs on the ground, especially with respect to lessons learned from past experiences that can be applied to future programs. A quasi-experimental design evaluation currently underway that is using qualitative methods, as well as an ongoing real-time qualitative assessment of an IRI program in a volatile environment (using Patton's Developmental Evaluation framework), will also be discussed to further illuminate the value of qualitative research for assessing the impact of democracy and governance programs.
|
|
Session Title: Bridging Research and Practice: Implementing With Quality Matters
|
|
Multipaper Session 213 to be held in Conference Room 14 on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Abraham Wandersman, University of South Carolina, wandersman@sc.edu
|
| Discussant(s): |
| Sandra Naoom, National Implementation Research Network, sandra.naoom@unc.edu
|
| Abstract:
The complex and dynamic nature of the implementation process continues to challenge practitioners and researchers. Narrowing the gap between implementation research and practice requires collaborative efforts among multiple systems of stakeholders. The Interactive Systems Framework for Dissemination and Implementation (ISF) provides insights into how implementation can be improved by offering a unique perspective centering on interactions between these systems to build the capacity needed to implement with quality. The papers comprising the proposed session discuss the role of quality implementation in the ISF and describe a tool (the Quality Implementation Tool; QIT) that is being used to facilitate collaboration between stakeholders. The session begins with a conceptual overview of the QIT, followed by case examples currently utilizing the tool to 1) develop a clinical outcomes tracking system in a university mental health facility and 2) support a recovery-oriented intervention working with substance-abusing pregnant women to deliver their babies substance-free.
|
|
An Overview of the Quality Implementation Tool: Concepts and Applications
|
| Victoria Chien, University of South Carolina, victoria.chien@gmail.com
|
| Jason Katz, University of South Carolina, jaskatz@gmail.com
|
| Annie Wright, University of South Carolina, patriciaannwright@yahoo.com
|
|
The Quality Implementation Tool (QIT) is a synthesis and translation of the research literature on the "how to" of implementation. It provides practical strategies to improve implementation; to help program designers, evaluators, researchers, and funders proactively plan for systematic, quality implementation; and to suggest future directions for research. This presentation will discuss a) the methods and results of a literature review of implementation frameworks, which identified practical strategies for quality implementation, and b) the methods and results of a synthesis and translation of the strategies identified through the implementation framework review. The synthesis and translation was the team-based mechanism through which the QIT was developed. The QIT comprises six components, including "develop an implementation team" and "foster supportive organizational/communitywide climate and conditions." We will close by discussing the versatility of the QIT and its application to implementation planning and to real-time monitoring and evaluation of implementation quality.
|
|
Using the Quality Implementation Tool for Consultation: Enhancing the Capacity of a University-based Mental Health Clinic
|
| Duncan C Meyers, University of South Carolina, meyersd@mailbox.sc.edu
|
|
The Quality Implementation Tool (QIT) is being used to facilitate a collaborative project which is developing an enhanced outcome tracking system at a university-based mental health facility. The tracking system measures outcomes related to client functioning and therapist performance in an effort to enhance the extent to which client therapeutic goals are met. Specifically, the QIT has been employed as a consultation tool to help plan, monitor, and evaluate the extent to which the outcome tracking system is implemented with quality. As a consultation tool, the QIT has grounded the development and implementation of the tracking system in implementation science theory and facilitated collaboration among stakeholders with diverse roles in the implementation process. Session attendees will be engaged in a discussion related to 1) the utility of the QIT for planning, implementing, and monitoring this innovation and 2) implications for use of this practical tool for consultation purposes.
|
|
Utilizing the Quality Implementation Tool to Provide Quality Innovation Supports with Community Substance Abuse Providers
|
| Jonathan Scaccia, University of South Carolina, jonathan.p.scaccia@gmail.com
|
| Andrea Lamont, University of South Carolina, alamont082@gmail.com
|
| Jennifer Castellow, University of South Carolina, castellj@email.sc.edu
|
|
Maternal Outreach Management Services (MOMS) is a comprehensive, community-informed program comprised of multiple interventions with the primary goal of promoting substance-free childbirth among substance-abusing pregnant women. In collaboration with administrators and clinicians at the Lexington/Richland Drug and Alcohol Abuse Council (LRADAC), the Quality Implementation Tool (QIT) was used to identify specific organizational variables which could be targeted for both general and innovation-specific capacity building to ensure quality implementation. Initially, stakeholders collectively completed the QIT to systematically plan for quality implementation, from which capacity-related needs were identified. Further, the QIT was used to identify foci for the support activities that were developed to ensure quality implementation (e.g., technical assistance, quality assurance). Later, the QIT was used to monitor and evaluate how these actions were executed and whether they effectively enhanced the innovation-specific capacity of both the individual clinicians and the organizational support staff in the implementation of the MOMS program.
|
|
Session Title: System Boundaries: Separators and Filters
|
|
Think Tank Session 215 to be held in Avila B on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Presenter(s):
|
| Michael Lieber, University of Illinois Chicago, mdlieber@gmail.com
|
| Discussant(s):
|
| Eve Pinsker, University of Illinois, Chicago, epinsker@uic.edu
|
| Geoffrey Downie, University of Illinois Chicago, gdownie@uic.edu
|
| Michael Lieber, University of Illinois Chicago, mdlieber@gmail.com
|
| Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
|
| Abstract:
If all a system boundary did was separate the system from its environment, there would be little point in dwelling on it. But boundaries do much more than that. In closed systems, boundaries are rigid; in open, dynamic systems, boundaries are more porous, acting as an entrance point for inputs to the system and an exit point for outputs from the system to its environment. Like white blood cells, boundaries filter environmental inputs, selecting which inputs get to other system components and which do not. As in biological systems, filtering can create a barrier in social systems, including evaluations. Managing the filtering process in evaluation is the focus of this Think Tank. We shall give brief presentations sketching the boundary concept and introduce an evaluation case in which boundary/filtering issues are challenging the evaluation. Then, participants will work in small groups, presenting their findings to the whole group for discussion.
|
| In a 90-minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Evaluation Course Materials and Assignments: Challenges in Distance Learning From a Values Perspective |
|
Roundtable Presentation 216 to be held in Balboa A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Presenter(s):
|
| Sharon Baggett, University of Indianapolis, baggetts@uindy.edu
|
| Abstract:
Teaching evaluation in the distance learning environment creates unique challenges. Students have fewer opportunities to engage in one-to-one or group discussions and activities where the philosophies and values affecting their approach to evaluation, and their attitudes toward its use, are debated. This challenge is fairly easily overcome through the design of group exercises in wikis or other technology platforms, through webinars, teleconferences, or other approaches. More challenging is finding methods for the inclusion of values and philosophy in teaching the steps and practice tools of evaluation. Assignments must be designed not only to encourage examination of values and philosophy at each stage of an evaluation, but also to support 21st-century skills and cover topics in depth. Roundtable participants will be encouraged to share teaching approaches that foster values-based learning in evaluation, whether in traditional classrooms, workshops, or distance learning environments, with a particular focus on adapting and designing distance learning solutions.
|
| Roundtable Rotation II:
Teaching Program Evaluation: Incorporating Values |
|
Roundtable Presentation 216 to be held in Balboa A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Presenter(s):
|
| Katy Nilsen, University of California, Santa Barbara, knilsen@education.ucsb.edu
|
| John Yun, University of California, Santa Barbara, jyun@education.ucsb.edu
|
| Abstract:
Issues relating to values and valuing are a recurring topic of conversation in program evaluation. When teaching a graduate-level course in program evaluation, then, it is essential to incorporate the discussion of values into the class. This includes the curriculum, class discussions, guest speakers, and student assignments. Focusing on a graduate-level program evaluation class at a school of education in Central California, this roundtable will examine how the faculty member and teaching assistant incorporated the discussion of values throughout the course. Roundtable participants will discuss best practices surrounding this issue and highlight challenges that must be overcome.
|
|
Session Title: Evaluative Thinking: Knowledge Flow, Performance Management: The Value of Evaluation
|
|
Multipaper Session 218 to be held in Capistrano A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| Joseph Bauer,
American Cancer Society, joseph.bauer@cancer.org
|
|
Knowledge Flow: Valuing This Element of the Evaluation Process
|
| Presenter(s):
|
| Karen Widmer, Claremont Graduate University, karen.widmer@cgu.edu
|
| Abstract:
To knowledgeably evaluate, the organization must evaluate its knowledge. Integrated knowledge management systems are considered crucial to sustainable evaluation practice that builds organizational learning capacity. Preskill and Boyle (2008) elaborate that a mature system, with the ability to 'create, capture, store, and disseminate evaluation-related data and documents…as well as processes, procedures, and lessons learned from evaluation efforts,' enables the organization to hone its evaluation efforts. This presentation will take an in-depth look at selected Knowledge Management System (KMS) research regarding the characteristics of knowledge that feed organizational learning. Specific knowledge components include Polanyi's (1966) early work regarding tacit and explicit knowledge; Nonaka's (1994) theory of knowledge creation; several models of productive KMS combinations at the individual, group, and organizational levels; and the often-intangible role of the functions and junctions of knowledge transfer, diffusion, and complex adaptivity as facilitators of organizational learning.
|
|
Increasing an Organisation's Value of Evaluation From Within: A Case Study From Australia's Oldest Charity
|
| Presenter(s):
|
| Andrew Anderson, The Benevolent Society, andrewa@bensoc.org.au
|
| Abstract:
Charities in Australia are increasingly investing in internal evaluation expertise to improve the quality of evaluations and their potential to improve practice. Four years ago The Benevolent Society (one of Australia's largest and oldest charities) embarked on a journey of evaluation capacity building. This has involved investing in internal evaluation expertise, undertaking internal evaluations, and building the capacity of employees to interpret and use evaluation findings.
This paper will include a brief description of The Benevolent Society's evaluation capacity building project and the results of research with staff about their experience of the project: what worked well, what could have been improved, and, most importantly, the impact of this work on the organisation, its staff, and the services it delivers. It will draw some conclusions about the value of internal evaluation capacity building projects for the quality and relevance of evaluations, as well as the key challenges for this work.
|
|
Evolution of Learning: How Evaluation Can Inform the Practice of Performance Management
|
| Presenter(s):
|
| Kelci Price, University of Colorado, Denver, kelci.price@ucdenver.edu
|
| Abstract:
In recent years the concept of performance management (PM) has become omnipresent in sectors as diverse as transportation, international aid, and education. Although PM and evaluation share the core aim of improving organizational learning and practice, PM systems have generally fallen far short of their promise. Evaluators can play a key role in the evolution of PM by helping to craft PM systems that integrate multiple data sources to provide meaningful information for decision-making and that promote and improve organizational learning. This presentation focuses on three major issues in PM that are of interest to evaluators: a) the exclusion of evaluation from PM systems in favor of simple program monitoring, b) the focus of PM on the creation of data rather than its use, and c) how evaluators can improve the production, dissemination, and utilization of evaluation information within a PM system.
|
|
Evaluative Thinking: What is It? Why Does it Matter? How Can We Measure It?
|
| Presenter(s):
|
| Thomas Archibald, Cornell University, tgs4@cornell.edu
|
| Jane Buckley, Cornell University, janecameronbuckley@gmail.com
|
| William Trochim, Cornell University, wmt1@cornell.edu
|
| Abstract:
Evaluation capacity building (ECB) focuses on facilitating the development of individual and organizational competencies and structures—such as evaluation knowledge and an organizational culture of evaluation—that promote sustained, high quality evaluation. Evaluative thinking is also mentioned in the ECB literature as an important attribute, yet such references are often fleeting. In this paper, we first present our rationale for highlighting evaluative thinking as an important component of ECB practice and as an object of inquiry within research on evaluation. Second, we draw on cognitive and education research to help develop and clarify the construct of 'evaluative thinking.' Finally, we explore some ways of operationalizing and measuring this construct, considering both qualitative and quantitative methods. Our exploratory work on measuring evaluative thinking is situated in a project designed to promote evaluative thinking and foster high quality internal evaluation among non-formal science, technology, engineering and math (STEM) educators.
|
|
Session Title: Disasters Come in All Shapes and Sizes: Approaches to Evaluation in Various Contexts
|
|
Multipaper Session 219 to be held in Capistrano B on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Disaster and Emergency Management Evaluation TIG
and the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Liesel Ritchie,
University of Colorado, Boulder, liesel.ritchie@colorado.edu
|
|
Assessing the Needs of Youth in the Aftermath of the Gulf Oil Spill
|
| Presenter(s):
|
| Brandi Gilbert, University of Colorado, Boulder, brandi.gilbert@colorado.edu
|
| Liesel Ritchie, University of Colorado, Boulder, liesel.ritchie@colorado.edu
|
| Abstract:
This paper presents the findings of a needs assessment of disaster recovery issues faced by youth ages 12-17 in coastal Alabama following the Gulf oil spill. Undertaken in collaboration with local community- and faith-based organizations, the assessment focuses on youth whose parents are tied to the commercial fishing or seafood industries. This is a critical topic, as prior research on technological disasters shows that recovery processes are associated with long-term social, economic, and ecological impacts. Needs assessment activities were designed to gather data that will advance understanding of shifts in family dynamics, social ties, and recreational and educational activities in the aftermath of the spill. Participants in the assessment include area youth, educators, providers of youth programming, public officials, and mental healthcare providers. The information gathered will be provided to local, state, and regional policy-makers, service providers, and other groups to support future program development for youth.
|
|
What Can One Case Tell About the Value of Federal Disaster Recovery Policy for Elementary and Secondary Education?
|
| Presenter(s):
|
| Cindy Roberts-Gray, Third Coast R&D Inc, croberts@thirdcoastresearch.com
|
| Magdalena Rood, Third Coast R&D Inc, mrood@thirdcoastresearch.com
|
| Shelia Cassidy, Wexford Institute Inc, scassidy@wexford.org
|
| Marcia Proctor, Galveston Independent School District, marcia_proctor@gisd.org
|
| Ryoko Yamaguchi, University of North Carolina, Greensboro, ryamaguc@serve.org
|
| Diana Bowman, University of North Carolina, Greensboro, dbowman@serve.org
|
| Abstract:
The McKinney-Vento Act's Education for Homeless Children and Youth (EHCY) program is federal policy for assisting Local Education Agencies (LEAs) in the United States to meet the immediate and longer-term educational needs of students in the aftermath of disaster. Literature reviews indicate, however, that little is known about the specific services individual children receive through everyday EHCY programs or temporary disaster assistance initiatives, or about how those services are related to academic outcomes. In this paper we review the case of one LEA recovering after Hurricane Ike, examining the local evaluation as a starting point for an evidence base that gives voice to the school administrators, teachers, students, and families who are the intended beneficiaries of a National Disaster Recovery Framework whose development and implementation is urged by the National Commission on Children and Disasters.
|
|
Network Analysis-based Methods for Assessing Coordination in Exercises
|
| Presenter(s):
|
| Yee San Su, CNA Education, suy@cna.org
|
| Abstract:
Previous failures in large-scale disaster response (e.g., Hurricane Katrina) can often be traced to failures in coordination. As evidenced in after-action reports, however, assessments of coordination performance are still largely anecdotal. Network analysis was seen as a possible means to develop quantitative metrics for coordination assessment, and two techniques are proposed here. First, Borgatti's technique for quantifying network fragmentation was selected to measure the extent to which coordinating entities play a role in establishing efficient communications. Second, Girvan and Newman's technique for identifying community sub-groups was used to locate potential breakdowns in information transfer. Both techniques were successfully implemented in a case study analysis of the Top Officials 4 exercise. They appear promising for providing additional insight into coordination performance, identifying exercise artificialities, and enabling meta-analysis of coordination performance (e.g., over time, across regions, for different event scales).
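As a rough illustration of the two techniques named in this abstract, the sketch below uses the open-source networkx library. The example communication network, the node names, and the idea of comparing fragmentation with and without a coordinating node are illustrative assumptions, not data or code from the study.

```python
# Minimal sketch (assumptions noted above) of Borgatti-style fragmentation and
# Girvan-Newman community detection on a hypothetical exercise communication network.
import networkx as nx
from networkx.algorithms.community import girvan_newman

def fragmentation(G):
    """1 minus the proportion of node pairs that can reach one another:
    0 = everyone connected, 1 = completely fragmented."""
    n = G.number_of_nodes()
    reachable_pairs = sum(len(c) * (len(c) - 1) for c in nx.connected_components(G))
    return 1.0 - reachable_pairs / (n * (n - 1))

# Hypothetical communication ties observed during an exercise.
G = nx.Graph()
G.add_edges_from([
    ("EOC", "Fire"), ("EOC", "Police"), ("EOC", "Hospital"),
    ("Fire", "Police"), ("Hospital", "PublicHealth"),
    ("PublicHealth", "EOC"), ("Utility", "Police"),
])

# 1) Role of a coordinating entity: how much does fragmentation rise without it?
baseline = fragmentation(G)
without_eoc = fragmentation(G.subgraph([n for n in G if n != "EOC"]))
print(f"Fragmentation: {baseline:.2f} baseline vs {without_eoc:.2f} without EOC")

# 2) Girvan-Newman sub-groups: boundaries between groups flag where information
#    transfer may break down.
first_partition = next(girvan_newman(G))
print("Detected sub-groups:", [sorted(c) for c in first_partition])
```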
|
|
Ushahidi Haiti Project Evaluation: Evaluating the Use of Crowdsourced Information From Crisis Affected People for Emergency Response
|
| Presenter(s):
|
| Nathan Morrow, Tulane University, nmorrow@tulane.edu
|
| Nancy Mock, Tulane University, mock@tulane.edu
|
| Abstract:
Crisis mapping is a new technique that provides crowdsourced information dynamically through a map and graphic aggregator during and after crisis events. It combines advances in mobile computing, social media, and internet-based data aggregation, visualization, and mapping. The Ushahidi Haiti Project was a volunteer effort that endeavored to bring together information about the needs of earthquake-affected people from new media sources such as Facebook, blogs, and Twitter. Affected people were also encouraged to send text messages describing their needs to a local phone number in Haiti. These messages were then classified and dynamically mapped. The evaluation's central focus on use of the maps and reports for emergency response showed mixed results. Questions of efficiency also showed room for improvement in many processes. Nonetheless, this innovative approach to crisis information was relevant to the emergency response community and will no doubt be a feature of future emergency response efforts.
|
|
Impact Assessment of the Food Crisis: An Experience in Ethiopia
|
| Presenter(s):
|
| Silva Sedrakian, Oxfam America, ssedrakian@oxfamamerica.org
|
| Abstract:
After a two-year response to the food crisis in Ethiopia, Oxfam America conducted an impact assessment with the assistance of Tufts University, which had developed the assessment tools.
The goal of the project was to promote impact assessment in Ethiopia by institutionalizing impact assessment capacities within Oxfam America. The basic concept behind this initiative is that aid agencies would carry out impact assessments of their own projects, with training and technical support provided by Tufts University.
This paper aims to describe the process and methodology used in this project, as well as to share the experience of Oxfam America and its partners who participated in this initiative.
|
| | | | |
|
Session Title: Community Involvement in Evaluation
|
|
Multipaper Session 220 to be held in Carmel on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
|
| Chair(s): |
| Jennifer Ely,
University of Pittsburgh, jae39@pitt.edu
|
|
Florida Medicaid Enrollees' Perceptions of Care with Mental Health Services
|
| Presenter(s):
|
| Roger Boothroyd, University of South Florida, boothroy@fmhi.usf.edu
|
| Mary Armstrong, University of South Florida, miarmstr@usf.edu
|
| Abstract:
Consumer perceptions of care are important indicators for evaluating health care quality. They are also significantly associated with treatment adherence (Bogart et al., 2003) and improvements in clinical status (Kane et al., 1997). The Florida Agency for Health Care Administration began administering the Experience of Care and Health Outcomes Survey (ECHO) to a stratified statewide sample of Florida Medicaid mental health services users in 2009. The 51-item ECHO survey was designed to obtain users' perceptions on nine dimensions of mental health care delivery. To permit comparisons across managed care organizations (MCOs), the sample was stratified by the health care plan in which recipients were enrolled. Responses were obtained from 792 adults (41%) and 763 caregivers of children (51%). This presentation will summarize findings on consumers' experiences from this mailing of the ECHO, with an emphasis on their implications for evaluation and the provision of Medicaid mental health services in Florida.
|
|
An Evaluation of an Innovative, Collaborative Approach to Interfacing Research Systems with the Mental Health Community
|
| Presenter(s):
|
| Deborah Piez, University of Maryland, Baltimore, dpiez@psych.umaryland.edu
|
| Daniel Nieberding, University of Maryland, Baltimore, dnieberd@psych.umaryland.edu
|
| Sandra Sundeen, University of Maryland, Baltimore, sjsundeen@comcast.net
|
| Abstract:
The mission of the Practice Research Network is to build an infrastructure linking investigators at The University of Maryland, Baltimore, Department of Psychiatry with the public mental health system through an innovative approach to nurturing the development of activities that reflect the value of a collaborative and participatory approach to research. There will be an in-depth discussion evaluating the process of partnership development with local mental health authorities (Core Service Agencies), clinics, providers, advocacy organizations, and consumers throughout Maryland. A description of the initial and evolving structure of the network will underscore the importance of involving multiple stakeholders to increase access to studies. Data will be provided to illustrate the number of study referrals generated by the Network. There will be a discussion regarding the modifications that are needed to this approach based on feedback and evaluation received from clients over the past two years, since the network's inception.
|
|
Evaluating the Perceptions of Primary Care Providers that Influence Patient Decision-Making
|
| Presenter(s):
|
| Snigdha Mukherjee, Louisiana Public Health Institute, smukherjee@lphi.org
|
| Greer Sullivan, University of Arkansas, sullivangreer@uams.edu
|
| Patrick Corrigan, Illinois Institute of Technology, corrigan@iit.edu
|
| Zachary Feldman, University of Arkansas for Medical Sciences,
|
| Abstract:
Mental health stigma is known to be widespread and to have devastating effects on the lives of those with mental disorders. Yet it is not clear why, or how, health disparities related to mental disorders actually occur in routine primary care. A 2x5 vignette survey design was used to investigate the role of stigma related to mental illness in the decision-making process of primary care providers (PCPs). The diagnosis of schizophrenia was varied in the vignettes to assess its effect on the perceptions and decision-making of PCPs. The sample population was Family Medicine and Internal Medicine PCPs and third-year residents who treated adult patients (N=134). The results indicate that while PCPs stereotype and hold prejudicial attitudes toward persons with mental illness, these attitudes do not affect appropriate treatment decisions. Such stigma may nonetheless affect patients by delaying treatment seeking or even prompting drop-out from treatment.
|
|
Ecological Evaluation Model for Behavioral Health Interventions
|
| Presenter(s):
|
| Ayana Perkins, Georgia State University, aperkins@ikataninc.com
|
| James Emshoff, Georgia State University, jemshoff@gsu.edu
|
| Jennifer Zorland, Georgia State University, jzorland@gmail.com
|
| Abstract:
The Social Action Research Lab coordinated a citywide needs assessment in Atlanta, Georgia in 2009. Based on findings from that pathological gambling needs assessment, an evaluation model for behavioral health interventions is proposed. Focus group data were collected from 129 community residents in the Atlanta metropolitan area. An intervention evaluation model emerged from the data analyses that would support the evaluation of pathological gambling and other behavioral health interventions for diverse communities. This paper will describe the utility of the model for practitioners specializing in behavioral health.
|
| | | |
|
Session Title: Teaching Tools: Evaluation Pedagogy and Novel Instructional Techniques
|
|
Multipaper Session 221 to be held in Coronado on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Chair(s): |
| Randall Davies,
Brigham Young University, randy.davies@byu.edu
|
|
Testing an Unusual Model of Professional Development: The Self-Directed Study Group
|
| Presenter(s):
|
| Nancy Carrillo, Apex Education, nancy@apexeducation.org
|
| Abstract:
A local AEA affiliate sponsored a professional development study group in which participants learned about organizational network theory and analysis. Two coordinators, neither an expert in the field, created a simple syllabus comprising four pedagogical elements: individual learning (reading, practicing network analysis with freeware), monthly group discussions, online dialogue and sharing of resources, and a jointly created final presentation to AEA affiliate members. A non-participating evaluator worked with the coordinators to examine the success of the professional development in terms of participants' perceived linkages between the course and real-life practice, satisfaction with both the content and process of the experience, and the acquisition of knowledge and skills. Evaluation methods included a survey following the course, monitoring of attendance and participation each month, and a reiterative reflective exercise in which participants examined their own learning over the course of eight months. Findings suggest do's and don'ts for similar professional development exercises.
|
|
Valuing Evaluation Through New Media Literacy: Using AEA365 Blog to Prepare Next Generation Evaluators
|
| Presenter(s):
|
| Sheila Robinson Kohn, University of Rochester, sbkohn@rochester.rr.com
|
| Kankana Mukhopadhyay, University of Rochester, kankana.m@gmail.com
|
| Abstract:
This paper illustrates and critically analyzes the potential of using a blog, AEA365 Tip-a-day by and for Evaluators, as a teaching and learning tool for educating evaluators in a university certificate program designed to prepare the next generation of evaluators. The blog is used to encourage students to engage in unique ways of interacting, learning, and thinking about evaluation theory and practice. Grounded in empirical evidence obtained from a diverse group of doctoral students over the past three semesters, our paper systematically documents the role of new media literacy, specifically AEA365, in the pedagogical practice of teaching evaluation. In addition, we offer a perspective on how the blog bridges the theory-practice gap and, as such, has enriched our own understanding as evaluators cum instructors of how best to support emerging evaluators in embracing values and valuing in evaluation practice.
|
|
Evaluation Faculty and Doctoral Students' Perspectives on Using a Portfolio as a Comprehensive Exam Option
|
| Presenter(s):
|
| Jennifer Morrow, University of Tennessee, jamorrow@utk.edu
|
| Gary Skolits, University of Tennessee, gskolits@utk.edu
|
| Jason Black, University of Tennessee, jblack21@utk.edu
|
| Susanne Kaesbauer, University of Tennessee, Knoxville, skaesbau@utk.edu
|
| Thelma Woodard, University of Tennessee, Knoxville, twoodar2@utk.edu
|
| Abstract:
In this paper presentation we will discuss using a portfolio as an option for the comprehensive exam in an Evaluation, Statistics, and Measurement (ESM) graduate program. Two years ago while revising our graduate program curriculum we decided to offer a portfolio as one of the three comprehensive exam options (the other two being the traditional four questions or a major area paper) for our doctoral students. As faculty we felt that a portfolio would best represent the students' body of work during their graduate career. We will discuss our reasons for creating the portfolio option, the detailed guidelines we created for students completing their portfolios, and students' perspectives on this comprehensive exam option. Lastly, we will discuss how other faculty can implement this option in their graduate program and how best to evaluate students' completed portfolios.
|
|
Service-Learning Methods in Community-Based Participatory Evaluation: Implications of Service-Learning on Workforce Diversity, Student Capacity Building, and Community Support
|
| Presenter(s):
|
| Tabia Akintobi, Morehouse School of Medicine, takintobi@msm.edu
|
| Donoria Evans, Morehouse School of Medicine, devans@msm.edu
|
| Nastassia Laster, Morehouse School of Medicine, nlaster@msm.edu
|
| Ijeoma Azonobi, Centers for Disease Control and Prevention, iazonobi@msm.edu
|
| Marcus Dumas, Georgia Perimeter College, marcus.dumas@gpc.edu
|
| Debran Jacobs, Morehouse School of Medicine, djacobs@msm.edu
|
| William Moore, ICF Macro, wmoore@icfi.com
|
| Lailaa Ragins, Morehouse School of Medicine, lragins@msm.edu
|
| Abstract:
Broadening the expertise and diversity of the evaluation workforce is a priority for the public health graduate education program at Morehouse School of Medicine. Unique aspects of the program evaluation course include service learning community-based participatory evaluation projects focusing on student capacity/skill building and community-focused participatory principles and practice, evaluation tailoring for special populations and underserved communities, and application of evaluation principles to broaden culturally responsive and appropriate approaches in practice and literature. Associated course participation outcomes include evaluation skill application in course projects and practicum experiences, peer-to-peer technical assistance, facilitation of evaluation in extramural projects, and participation in evaluation organization sponsored trainings/workshops. Students contributed to additional organizational evaluations for a United States Army base Child and Youth Services Division, Morehouse College's Public Health Sciences Institute and 2010 Project Imhotep Internship. This paper discusses program evaluation course methodology, student engagement, evaluation capacity process and outcomes, and implications for workforce development.
|
| | | |
|
Session Title: Communication Strategies and Findings From Tobacco Control Evaluations
|
|
Multipaper Session 222 to be held in El Capitan A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Michael Harnar,
Claremont Graduate University, michaelharnar@gmail.com
|
|
Social Media, Smoking Cessation and Young Adults: A Developmental Evaluation Approach
|
| Presenter(s):
|
| Bruce Baskerville, University of Waterloo, nbbaskerville@uwaterloo.ca
|
| Cameron Norman, University of Toronto, cameron.norman@utoronto.ca
|
| Roy Cameron, University of Waterloo, cameron@uwaterloo.ca
|
| Steve Brown, University of Waterloo, sbrown@uwaterloo.ca
|
| Abstract:
Social Media (SM) may extend the reach and impact of an evidence-based Quitline for smoking cessation among young adults. This paper presents the results of a participatory, developmental evaluation (DE) of an innovative SM approach to support Quitline utilization by young adults. In Canada, young adults have the highest smoking rate of all age groups and they represent the largest single population group using social network media. Given the scope for innovation and reach, there is a need for research on effective SM strategies to reach and assist young adult smokers in successfully quitting. Evaluators partnered with the Canadian Cancer Society to co-create a SM strategy and determine the impact on young adults, service providers and stakeholders. The study employed a mixed-methods approach to collecting data; using administrative data, Internet use data, focus group data, and survey data, with data triangulation. Preliminary evaluation findings will be presented for this dynamic and complex intervention.
|
|
Theories About Theories: How Phenomenological Analysis Contributes to Evaluation of Public Health Initiatives Through Communications Research
|
| Presenter(s):
|
| James Heichelbech, HealthCare Research Inc, jheichelbech@healthcareresearch.com
|
| Abstract:
Communications research is important for public health initiatives, as we need to understand the perspectives of those we serve. Phenomenological analysis (PA) is a method for identifying the essential structure of human experience with respect to a particular point of interest, such as second hand smoke or obesity, through empirical investigation of: 1) what people recognize as real; 2) what matters to them (value); and 3) how those contribute to choices. This allows us to make sense of the ways in which others make sense of the world around them. Whether we hope to deliver services in ways that 'fit' our audience or need to change their perspectives to facilitate changes in behavior, PA helps us determine whether our objectives are aligned with our challenges. In this way, PA contributes not only to the success of public health initiatives, but also to their evaluation.
|
|
Evaluating a Moving Target: Tobacco Evaluation and Management System (TEAMS) Develops as the North Carolina Teen Tobacco Initiative Evolves
|
| Presenter(s):
|
| Leah Ranney, University of North Carolina, leah_ranney@unc.edu
|
| Brandi Behlke, Health and Wellness Trust Fund, brandi.behlke@healthwellnc.com
|
| Candice Justice, Health and Wellness Trust Fund, candice.justice@healthwellnc.com
|
| Jason Derrick, University of North Carolina, jason_derrick@unc.edu
|
| Kearston Ingraham, University of North Carolina, kearston_ingraham@med.unc.edu
|
| Adam O Goldstein, University of North Carolina, aog@med.unc.edu
|
| Laura K Jones, University of North Carolina, lkj@med.unc.edu
|
| Thomas C Brown, Health and Wellness Trust Fund, tom.brown@healthwellnc.com
|
| Nidu Menon, Health and Wellness Trust Fund, nidu.menon@healthwellnc.co
|
| Sterling Fulton-Smith, Health and Wellness Trust Fund, sterling.fulton-smith@healthwellnc.com
|
| Andre Stanley, Health and Wellness Trust Fund, andre.stanley@healthwellnc.com
|
| Abstract:
TEAMS, the Tobacco Evaluation and Management System, is used to evaluate the Teen Tobacco Use Prevention and Cessation Initiative (Teen Initiative), a key component of a statewide, youth-focused tobacco prevention effort supported since 2003 by the North Carolina Health and Wellness Trust Fund. Grantees across the state are involved in activities designed to prevent tobacco initiation, promote tobacco use cessation, eliminate exposure to secondhand smoke, and reduce health disparities in tobacco use among priority populations. TEAMS is a customized, flexible, web-based tracking system that collects data related to performance outcomes in accordance with the Teen Initiative grantees' Annual Action Plans and comprehensive logic models. Data are aggregated and presented through a number of pre-defined performance indicators for the Teen Initiative. Evaluators and program managers can use TEAMS to monitor programmatic changes, evaluate program processes, collect outcome data, and make necessary program improvements rapidly using technology.
|
|
Using Intermediate Health Outcomes to Demonstrate Programmatic Success in Tobacco Use Prevention and Control
|
| Presenter(s):
|
| Nikki Lawhorn, Louisiana Public Health Institute, nlawhorn@lphi.org
|
| Jenna Klink, Louisiana Public Health Institute, jklink@lphi.org
|
| Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
|
| Abstract:
The Louisiana Campaign for Tobacco-Free Living (TFL) is a statewide tobacco use prevention and control program. Goals of the TFL campaign include elimination of exposure to second-hand smoke, prevention of initiation of tobacco use among youth, and promotion of tobacco use cessation. A significant decrease in adult tobacco use prevalence is a long-term programmatic goal, and between 2003 and 2009 there was a significant decrease in Louisiana adult smoking prevalence; year-to-year reductions, however, were not statistically significant. Tracking short- and intermediate-term indicators, such as changes in social norms and cigarette consumption, has been important for sustaining funding and demonstrating programmatic success.
|
| | | |
|
Session Title: Evaluating Complex Programs
|
|
Think Tank Session 223 to be held in El Capitan B on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Presenter(s):
|
| Michael Bamberger, Independent consultant, jmichaelbamberger@gmail.com
|
| Discussant(s):
|
| Jim Rugh, Independent Consultant, jimrugh@mindspring.com
|
| Megan Steinke, Save the Children, msteinke@savechildren.org
|
| Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
|
| Abstract:
Development agencies are moving away from support for individual projects toward complex programs that involve either broad, multi-component sector support or national-level interventions. These often involve several donor agencies, cover policy reform and cross-sector or cross-country programs, and frequently provide general budget support to central government agencies that decide how the funds will be used. In all of these cases it is not possible to apply conventional project impact evaluation designs (such as pre-test/post-test comparison group designs), and the donor community is searching for new evaluation methodologies appropriate for these complex programs. Following a brief review of new evaluation approaches being applied at this level, participants will share their experiences with promising approaches and challenges in evaluating their own complex programs. The think tank will build on new approaches discussed in a number of international conferences and workshops that the facilitators have organized over the past two years.
|
| Roundtable:
Evaluation of Youth Serving Programs: An Ongoing Conversation in Support of Field Growth |
|
Roundtable Presentation 224 to be held in Exec. Board Room on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Kim Sabo Flores, Thrive Foundation For Youth, kim@thrivefoundation.org
|
| David White, Oregon State University, david.white@oregonstate.edu
|
| Abstract:
Over the past decade, a number of AEA members have been meeting informally at the annual conference to discuss specific issues related to evaluating programs and services for young people. Historically, participants have come to these discussion groups from a variety of TIGs, including Advocacy and Policy Change; Collaborative, Participatory, and Empowerment Evaluation; Extension Education; Pre-K to 12 Education; and Social Work. No matter our affiliation, we are all passionate about positive youth development and strategies that put youth voice front and center in youth participatory evaluation processes, products, effectiveness, and practical and transformative impacts. As our small group has grown to more than two dozen members, we find the need to carve out a more formal thread within AEA to share our methods, approaches, and lessons learned. We invite all evaluators interested in the field of positive youth development to discuss the possibility of creating a new TIG.
|
|
Session Title: Government TIG Business Meeting and Keynote Address: Extending Your Evaluation's Short Half-life so it is Longer Than a Day
|
|
Business Meeting with Panel Session 226 to be held in Huntington B on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Government Evaluation TIG
|
| TIG Leader(s):
|
|
Stanley Capela, HeartShare Human Services, stan.capela@heartshare.org
|
|
James Newman, Idaho State University, newmjame@isu.edu
|
|
Samuel Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
|
|
Caroline DeWitt, Human Resources and Skills Development Canada, caroline.dewitt@hrsdc-rhdcc.gc.ca
|
| Chair(s): |
| Stanley Capela, HeartShare Human Services, stan.capela@heartshare.org
|
| Discussant(s):
|
| Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
|
| Caroline DeWitt, Government of Canada, steelpan@istar.ca
|
| Connie Kubo Della-Piana, National Science Foundation, cdellapi@nsf.gov
|
| Linda Thurston, National Science Foundation, lthursto@nsf.gov
|
| Abstract:
Government often fails to make good use of program evaluation in assessing the quality of the services it offers and in identifying best practices that would help government and contracting agencies improve the quality of their services. This session will offer both American and Canadian perspectives.
|
|
Extending Your Evaluation's Short Half-life so it is Longer Than a Day: A Personal Perspective
|
| David Bernstein, Westat, davidbernstein@westat.com
|
|
Like still photographs, except for the truly historical or personally special ones, evaluations often capture moments in time. One day or one administration later, the evaluation is all but forgotten. Like the still photograph, the evaluation may become a distant memory for program staff, funders, and stakeholders unless it truly captures the "spirit" and "likeness" of the program. In this keynote address, David Bernstein, the self-titled "Chair Emeritus" and founding chair of the Government Evaluation TIG, will reflect on methods to make evaluations more useful and longer-lasting for research sponsors and stakeholders. David is a Senior Study Director with Westat, an employee-owned research company in Rockville, Maryland. During his 28-year career in program evaluation, David has planned and conducted research, evaluation, and performance measurement for the Montgomery County (Maryland) Public Schools, Montgomery County Government, the American Red Cross, and the U.S. Department of Health and Human Services.
|
|
|
Session Title: Evaluations During Challenging Economic Times: Strategies for Non-Profits and Foundations
|
|
Think Tank Session 227 to be held in Huntington C on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Presenter(s):
|
| Courtney Malloy, Vital Research LLC, courtney@vitalresearch.com
|
| Pat Yee, Vital Research LLC, patyee@vitalresearch.com
|
| Discussant(s):
|
| Courtney Malloy, Vital Research LLC, courtney@vitalresearch.com
|
| Pat Yee, Vital Research LLC, patyee@vitalresearch.com
|
| Harold Urman, Vital Research LLC, hurman@vitalresearch.com
|
| Abstract:
This session will examine how non-profit organizations and foundations can continue to support evaluation activities during challenging economic times. Participants will generate strategies that organizations can use to control costs while still collecting and analyzing high quality evaluation data. Participants will choose to participate in two of three breakout groups for 20 minutes each throughout the 90-minute session. Breakout groups will discuss the following topics:
1. Choosing what, when and how to evaluate (e.g., which programs and/or evaluation questions, timing, funding options, staffing, etc.);
2. Designing evaluations (e.g., instrumentation, sampling, data sources, use of findings, etc.); and
3. Leveraging technology: Making the right investment choices.
Summary reports from each topic will be provided by breakout leaders followed by comments and questions from participants. Results from the think tank will be documented and made available to AEA as well as posted on an evaluation resources web site hosted by the facilitators.
|
|
Session Title: Collaborative Evaluations: Successes, Challenges, and Lessons Learned
|
|
Multipaper Session 228 to be held in La Jolla on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Liliana Rodriguez-Campos,
University of South Florida, liliana@usf.edu
|
|
Engaging the Community in Education and Evaluation: Using Collaborative Evaluation to Facilitate Community Member Focus Groups
|
| Presenter(s):
|
| Aarti Bellara, University of South Florida, abellara@usf.edu
|
| Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
|
| Michael Berson, University of South Florida, berson@usf.edu
|
| Abstract:
In an effort to assess whether the views of the community were aligned to the goals of a large county-wide school program on civic engagement, community member focus groups were conducted. The various views within the community and the role the community plays in school initiatives are important when implementing school-based programs. The chances of programs being sustainable and successful are often increased when the ideals of the community and the program are aligned. To successfully garner focus groups and meet this evaluation goal, a collaborative evaluation approach was utilized. This paper addresses how the focus groups informed the evaluation, the lessons learned on organizing and conducting focus groups, and the perceived impact school programs have on the surrounding communities.
|
|
Avoiding the Dusty Shelf: Promoting Use through Stakeholder Collaboration and Multi-Step Dissemination Strategies
|
| Presenter(s):
|
| Katie A Gregory, Michigan State University, katieanngregory@gmail.com
|
| Adrienne E Adams, Michigan State University, adamsadr@gmail.com
|
| Deborah McPeek, Turning Point Inc, dmcpeek@turningpointmacomb.org
|
| Nkiru A Nnawulezi, Michigan State University, nkirunnawulezi@gmail.com
|
| Chris M Sullivan, Michigan State University, sulliv22@msu.edu
|
| Deb Bybee, Michigan State University, deborah.bybee@gmail.com
|
| Echo A Rivera, Michigan State University, echorivera@gmail.com
|
| Katherine A Cloutier, Michigan State University, kcloutier28@gmail.com
|
| Abstract:
Many evaluators have demonstrated the utility of a collaborative process when working with stakeholders to develop evaluations that best fit the organization's needs and improve current practices. This paper describes a case study of collaboration between a university-based research team and a non-profit organization interested in examining the extent to which the agency's practices reflected their empowerment philosophy. Adhering to the basic tenets of collaborative evaluations, we conducted a study with a domestic violence shelter that focused on defining and measuring empowering practice. Using a utilization-evaluation approach, we sought to answer the following questions: 'are we doing what we think we're doing and is it making a difference?' Agency stakeholders and evaluators worked together to articulate the program theory and study design, and to interpret and disseminate findings to agency staff. This presentation will describe the collaborative process and methods used to plan for and promote the use of the evaluation findings.
|
|
Is the Cost of That Program Worth It? Teachers' Perceived Worth of a New Teaching Program in an Effective Charter School Through the Use of Collaborative Evaluation
|
| Presenter(s):
|
| Claudia Guerere, University of South Florida, cguerere@mail.usf.edu
|
| Janet Mahowski, University of South Florida, mahowskiJ@pcsb.org
|
| Paige James, University of South Florida, paigejames80@yahoo.com
|
| Abstract:
Implementing any new program incurs costs, and deciding to spend money on a new teaching program could mean less money for students. This study used collaborative evaluation to examine teachers' perceived effectiveness of a reading program in a Florida school. Though the program was already available to teachers through the district, this was the first time it was implemented in-house. Emphasis was placed on determining whether the teaching strategies presented were new to the participants, thereby establishing the program's worth. A survey was developed and administered to assess how useful the teachers found the training session. Survey results led to two focus groups that gathered specific details about the program's perceived effectiveness, teachers' predictions of future use of the strategies in their classrooms, and barriers to implementation. Results demonstrated that teachers were already familiar with many of the strategies and that, if the program were retained, a more advanced class would be beneficial.
|
|
Working Collaboratively With Multiple Partners to Determine Whether Interventions can Lead to Reduction in Health Disparities
|
| Presenter(s):
|
| Liz Maker, Alameda County Public Health Department, liz.maker@acgov.org
|
| Mia Luluquisen, Alameda County Public Health Department, mia.luluquisen@acgov.org
|
| Abstract:
Evaluators at the ACPHD have developed a participatory evaluation model for working with community partners to determine whether their interventions are reducing health disparities. We developed this model through our eight-year involvement with an initiative to reduce violence and improve community well-being, through addressing social determinants of health in two Oakland neighborhoods. Our evaluation role began when we helped program planners determine intermediate outcomes (such as improved relationships among neighbors or community involvement) along the pathway towards reduction in health disparities. To complement secondary data sources, we designed qualitative and quantitative data collection tools to assess neighborhood conditions and priorities for change at three time periods. A key role has been working closely with diverse stakeholders, including neighborhood residents, program staff and government officials, to track how the interventions are changing individuals and neighborhoods, and how these changes can lead to an eventual reduction in health inequities.
|
|
Valuing Collaboration from the Bottom-Up: A Formative Evaluation of Science Inquiry in Middle School Classrooms
|
| Presenter(s):
|
| Merlande Petit-Bois, University of South Florida, mpetitbo@usf.edu
|
| Teresa Chavez, University of South Florida, chavez@usf.edu
|
| Robert Dedrick, University of South Florida, dedrick@usf.edu
|
| Robert Potter, University of South Florida, potter@cas.usf.edu
|
| Abstract:
The purpose of the Leadership in Middle School Science (LIMSS) project is to increase inquiry and leadership in middle school science. This paper focuses on the collaborative nature of a formative evaluation of the LIMSS project at all levels, from the participants to the leadership team. Taking an eclectic approach (Fitzpatrick, Sanders & Worthen, 2004), we evaluated the extent to which the program helped science teachers become more effective in their pedagogical knowledge and as leaders. Data were collected from surveys, focus groups, and Blackboard discussion questions. Multiple perspectives from the teachers, teacher leaders, principal investigators, and evaluators were used to gain an understanding of the program's effectiveness and identify areas for improvement. Using feedback from all levels regarding the program's needs has been valuable in improving the LIMSS project.
Fitzpatrick J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston: Pearson Education, Inc.
|
| | | | |
|
Session Title: Enhancing Evaluation Practice: Studies Examining the Value of Different Evaluation Practices
|
|
Multipaper Session 229 to be held in Laguna A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Research on Evaluation TIG
|
| Chair(s): |
| Tarek Azzam,
Claremont Graduate University, tarek.azzam@cgu.edu
|
|
A Call To Action: Evaluating Evaluators' Recommendations
|
| Presenter(s):
|
| Jennifer Iriti, University of Pittsburgh, jeniriti@yahoo.com
|
| Kari Nelsestuen, Education Northwest, kari.nelsestuen@educationnorthwest.org
|
| Abstract:
While there are many components to any evaluation, perhaps none is more visible to clients than the practice of making recommendations. Yet, as a field, we have virtually no empirical understanding of how recommendations are used and what impact they have on programs and outcomes. This means, as a field, we have nothing to demonstrate the merit or worth of our most visible practice. In a profession whose very purpose is to determine whether programs have been well implemented and have had positive outcomes, we must ensure that our own practices are supported by evidence of proper implementation and positive outcomes. In this paper session, the authors make the case for increased empirical study of recommendations by first summarizing the extant literature and identifying what is and is not known, then demonstrating how these gaps threaten the integrity of our profession. Finally, the authors propose a research agenda to strategically advance the field's practice of making recommendations.
|
|
Factors Impeding and Enhancing Exemplary Evaluation Practice
|
| Presenter(s):
|
| Nick Smith, Syracuse University, nlsmith@syr.edu
|
| Abstract:
Researchers have conducted few systematic investigations of the nature of exemplary evaluation practice. Awards for outstanding practice typically reflect post hoc recognition of a study's quality evidenced in its design or impact. These awards provide little insight into how to consistently conduct exemplary practice. This paper examines a dozen selected evaluation cases to identify those factors that may either impede or enhance exemplary practice. A variety of factors are considered including those related to evaluation purpose, study design, theoretical and methodological approach, resource management, client relationships, contextual and cultural influences, the nature of the evaluand, and sector of work. Of particular interest are considerations of whether there are necessary and sufficient situational conditions required for exemplary practice and the role of evaluator expert judgment in managing the ongoing implementation of evaluation activities. Through better understanding such factors, evaluators can improve practice, upgrade evaluator training, and strengthen the theory of practice.
|
|
Bias in the Success Case Method: A Monte-Carlo Simulation Approach
|
| Presenter(s):
|
| Julio Cesar Hernandez-Correa, Western Michigan University, julio.c.hernandez-correa@wmich.edu
|
| Abstract:
The Success Case Method (SCM) is an evaluation method intended to collect the minimum amount of information needed, at the minimum level of intrusion, time, and cost possible. One of the main objectives of the SCM is to provide estimates of return on investment that can help an institution assess the cost-effectiveness of an implementation. However, Brinkerhoff (2002) acknowledged that the SCM produces information that is biased with respect to the central value. This paper uses Monte Carlo simulations to assess several issues related to bias in the SCM. The non-random selection of successful and unsuccessful individuals in the interview step, as well as outliers, chance, and incorrect counterfactual design, may all be sources of bias in the SCM. The paper also proposes intermediate steps to correct these biases in the SCM's results and to help provide accurate estimates of return on investment.
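As a concrete, hedged illustration of the selection-bias issue described above, the sketch below runs a small Monte Carlo experiment in Python. The outcome distribution, sample sizes, and the rule of averaging only the top few "success cases" are assumptions for illustration, not the paper's actual simulation design.

```python
# Minimal Monte Carlo sketch: how summarizing only the most successful cases
# (a simplified stand-in for SCM selection) biases the estimated return upward.
import numpy as np

rng = np.random.default_rng(42)

def success_case_estimate(outcomes, n_success=5):
    """Mean return among the top n_success cases only (simplified selection rule)."""
    return np.sort(outcomes)[-n_success:].mean()

n_reps, n_participants, true_mean = 10_000, 200, 1_000.0
biases = np.empty(n_reps)

for r in range(n_reps):
    # Right-skewed returns: a few large successes, many modest ones (mean = true_mean).
    outcomes = rng.lognormal(mean=np.log(true_mean) - 0.5, sigma=1.0, size=n_participants)
    biases[r] = success_case_estimate(outcomes) - outcomes.mean()

print(f"Average overstatement of the mean return: {biases.mean():,.0f}")
print(f"Replications overstating the average: {(biases > 0).mean():.1%}")
```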
|
|
How Program Evaluation Standards Are Put Into Professional Practice: Development of an Action Theory for Evaluation Policy and Research on Evaluation
|
| Presenter(s):
|
| Jan Hense, University of Munich, jan.hense@psy.lmu.de
|
| Abstract:
The Program Evaluation Standards, issued by the Joint Committee on Standards for Educational Evaluation, aim to enhance the quality of the professional practice of evaluation. However, three decades and two major revisions after publication of their first edition, little is known beyond anecdotal evidence about the actual use and impact of the standards on the profession. The standards themselves do not explicitly articulate the mechanisms which are expected to make them instrumental in improving the practice of evaluation. Based on theoretical considerations and an analysis of the standards' underlying assumptions, a conceptual framework is proposed which outlines such mechanisms on individual and evaluation policy levels. This action theory aims to guide evaluation policy in further promoting the standards' application and utility. At the same time it can be used as a research framework for analyzing the standards' actual impact. An exploratory study is presented as an example for such research.
|
| | | |
|
Session Title: Measuring New Technologies and the Environment: Lessons From the Field
|
|
Multipaper Session 230 to be held in Laguna B on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Maureen Rubin,
University of Texas, San Antonio, maureen.rubin@utsa.edu
|
|
Impact Evaluation of Irrigation Projects
|
| Presenter(s):
|
| Murad Mukhtarov, World Bank Project Implementation Unit, murad_mukhtarov@yahoo.com
|
| Abstract:
The present investigation, while based on a standard evaluation approach, goes beyond the usual indicators, using modern economic, financial, and statistical categories as structural components of the model.
The paper describes the approach to impact evaluation in the case of three World Bank projects in the irrigation sector of Azerbaijan covering rehabilitation of infrastructure and establishment of Water Users Associations (WUAs). The approach is time-invariant and can be applied both within the project cycle and over the long term after project completion.
The initial impact of the projects was measured through baseline surveys, with follow-up impact studies at the projects' completion. Surveys were based on a quasi-experimental, longitudinal design. SPSS and FARMOD software were used for the economic, financial, and statistical analyses, and the Citizen Report Cards approach was used for social impact assessment.
|
|
Understanding the Social Impacts of Non-Governmental Organization Water Projects: Lessons From Western Kenya
|
| Presenter(s):
|
| Valerie Were, University of Minnesota, were0005@umn.edu
|
| Karlyn Eckman, University of Minnesota, eckma001@umn.edu
|
| Abstract:
Non-governmental organizations (NGOs) are key change agents working with lakeshore communities and implementing projects around East Africa's Lake Victoria Basin (LVB). These NGOs must navigate complex laws that govern water access and use as they interact with government entities and local populations. Legal complexity exists because customary rules and norms as well as recently adopted statutory and codified laws govern water use. We seek to understand the nexus of NGO water projects, water law, and local participation in the Kenyan portion of the LVB. Ultimately, we will document engagement strategies that are most effective at fostering local participation in water conservation and management. Results will help NGOs, and officials responsible for managing water under statutory law, understand how customary rules influence local participation and rights of access. Overlooking this nexus can lead to more conflict over access to water and to failed projects.
|
|
Competition Calling, No Need to Shout: Diffusion and Use of Mobile Phones in Developing Countries
|
| Presenter(s):
|
| Ann Flanagan, World Bank Group, aflanagan@worldbank.org
|
| Keta Ruiz, World Bank Group, kruiz@worldbank.org
|
| Stephan Wegner, World Bank Group, swegner@worldbank.org
|
| Abstract:
Penetration of and access to mobile phone technology have grown rapidly in developing countries, driven by private sector participation and enabled by reforms geared toward increasing competition. World Bank interventions in the Information and Communication Technologies (ICT) sector have focused on regulatory and sector reform; IFC, the private sector arm of the World Bank Group, has focused on mobilizing and leveraging private sector investments. The Independent Evaluation Group (IEG) has evaluated World Bank Group activities in the telecommunications sector. Drawing on that evaluation, we construct two unique competition variables: (i) World Bank Group involvement in a country's telecommunications sector and (ii) achievement ratings associated with World Bank Group projects. This paper estimates the effects of World Bank Group interventions on the speed of mobile diffusion in developing countries. We find that competition, including World Bank Group interventions aimed at increasing competition, has significantly affected mobile diffusion in the developing world.
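The sketch below illustrates, in broad strokes, the kind of estimation such an analysis involves, using simulated data and the statsmodels library. The variable names, the data-generating process, and the specification (pooled OLS with year fixed effects and country-clustered errors) are illustrative assumptions, not the authors' model or data.

```python
# Hypothetical sketch: relating competition and World Bank Group involvement
# to growth in mobile penetration across countries and years.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for c in range(40):                              # 40 simulated countries
    wbg = rng.integers(0, 2)                     # WBG involvement in the sector (0/1)
    for y in range(2000, 2010):
        competition = rng.uniform(0, 1)          # e.g., degree of market openness
        growth = 2.0 + 3.0 * competition + 1.5 * wbg + rng.normal(0, 1)
        rows.append({"country": c, "year": y, "wbg": wbg,
                     "competition": competition, "penetration_growth": growth})
df = pd.DataFrame(rows)

# Pooled OLS with year fixed effects and country-clustered standard errors.
model = smf.ols("penetration_growth ~ competition + wbg + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(result.params[["competition", "wbg"]])
```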
|
|
Tanzania Energy Sector Impact Evaluation: Findings from the Zanzibar Baseline Study
|
| Presenter(s):
|
| Denzel Hankinson, DH Infrastructure, denzel@dhinfrastructure.com
|
| Lauren Pierce, DH Infrastructure, lpierce@dhinfrastructure.com
|
| Duncan Chaplin, Mathematica Policy Research, dchaplin@mathematica-mpr.com
|
| Arif Mamun, Mathematica Policy Research, amamun@mathematica-mpr.com
|
| Minki Chatterji, Abt Associates Inc, m.chatterji@verizon.net
|
| Shawn Powers, Princeton University, spowers@princeton.edu
|
| Elana Safran, Harvard University, elana_safran@hks11.harvard.edu
|
| Abstract:
The Millennium Challenge Corporation is funding an electricity project in Tanzania that includes construction of a cable connecting the electricity grid on the mainland of Tanzania to Zanzibar. This report describes findings from a baseline study regarding the potential impacts of the new cable. Our results suggest that in recent years the quality and reliability of electricity in Zanzibar has deteriorated. In addition, Zanzibar has experienced two major blackouts, the most recent of which lasted from December 2009 to March of 2010. That blackout appears to have had large negative impacts on the hotel industry in Zanzibar suggesting that the new cable could have important economic benefits for the island. The cable is scheduled to be built in 2012. We will conduct a follow-up study at that time to assess the degree of improvement in electricity services and associated changes in the hotel industry.
|
| | | |
| In a 90-minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Understanding Program Fidelity and Challenges With Evaluating Multi-site, Multi-level Education Programs |
|
Roundtable Presentation 231 to be held in Lido A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Presenter(s):
|
| Sophia Mansori, Education Development Center Inc, smansori@edc.org
|
| Alyssa Na'im, Education Development Center Inc, anaim@edc.org
|
| Abstract:
Presenters will share methods used in the evaluation of the Adobe Youth Voices program, a global youth media initiative that empowers youth in underserved communities to explore and comment on their world. Traditional notions of program fidelity are not a central focus of this program, as activities occur in a variety of settings, with diverse populations, and with an array of training and program support. The evaluation has remained flexible to program adaptations and growth, but faces specific challenges with varied component and delivery models while the program stands to expand considerably. Participants of the roundtable will be invited to share their experiences by responding to questions such as: How do we respond to varied models within a program? To what extent should the evaluations of such programs address issues of program fidelity? What considerations should be made for program replication and scale-up?
|
| Roundtable Rotation II:
Evaluation and Strategy (Chicken, Egg or Something Different?): Using an Evaluation Framework to Refine a Set of Integrated Initiatives |
|
Roundtable Presentation 231 to be held in Lido A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Presenter(s):
|
| Jara Dean-Coffey, jdcPartnerships, jara@jdcpartnerships.com
|
| Amy Reisch, First 5 Marin Children and Families Commission, amy@first5marin.org
|
| Aideen Gaidmore, First 5 Marin Children and Families Commission, aideen@mc3.org
|
| Jill Casey, jdcPartnerships, jill@jdcpartnerships.com
|
| Abstract:
Strategy and evaluation are linked. When that link is acknowledged, and when the two intentionally inform each other, the designer or implementer of an effort is more likely to be successful. This holds particularly true in the delicate, complicated, and politicized world of advocacy and public policy. As evaluators, we are more frequently functioning as partners with our clients as they tackle more complicated and entrenched social problems. The art and discipline of using multiple frames to refine thinking, clarify intention, and make areas of measurement explicit are important skills for the effective evaluator. Using First 5 Marin as a model, the presenters will discuss the challenges and benefits of multiple frames, the implications for evaluation design and implementation, and how this work affects their roles. Participants will spend time sharing the tools that advance their practice and exploring their own experiences and biases.
|
|
Session Title: Women Veterans Healthcare Program Evaluation: Women Veteran Health Strategic Healthcare Group, VA Central Office
|
|
Panel Session 232 to be held in Lido C on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Deeanna Burleson, Booz Allen Hamilton, deeburleson@gmail.com
|
| Abstract:
WVHSHG contracted with a consultant company to develop a program evaluation approach that could be used at Veteran Affairs (VA) Medical Centers and outpatient clinics nationally. The consultant team, WVHSHG, and subject matter experts from the VA, Department of Defense (DoD) and Health and Human Services (HHS) developed an assessment methodology to include a capability-based protocol assessment tool.
The assessment tool was created by identifying four critical components of a successful Women's Health Program: (1) Program; (2) Health Care Services; (3) Outreach, Communication and Collaboration; and (4) Patient Centered Care (PCC) / Patient Aligned Care Team (PACT). Once the core components were identified, each was associated with key capabilities. For each capability, assessment criteria were developed based on internal and external standards and regulations. Critical success factors, required for developed programs, were identified for each criterion. Each capability was scored on a four-point scale (highly developed to needs development).
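The sketch below is a hypothetical illustration of how a capability-based tool of this kind might roll criterion-level findings up to a four-point capability rating; the data structure and scoring thresholds are assumptions, not the actual WVHSHG instrument.

```python
# Hypothetical structure: criteria roll up to a capability, capabilities belong
# to one of the four components; thresholds below are assumed for illustration.
from dataclasses import dataclass, field

LEVELS = ["Needs development", "Being developed", "Developed", "Highly developed"]

@dataclass
class Capability:
    name: str
    component: str                                       # e.g., "Health Care Services"
    criteria_met: list = field(default_factory=list)     # one boolean per criterion

    def rating(self) -> str:
        """Map the share of criteria met to the four-point scale."""
        share = sum(self.criteria_met) / len(self.criteria_met)
        if share >= 0.9:
            return LEVELS[3]
        if share >= 0.7:
            return LEVELS[2]
        if share >= 0.4:
            return LEVELS[1]
        return LEVELS[0]

cap = Capability("Gender-specific primary care", "Health Care Services",
                 criteria_met=[True, True, False, True])
print(cap.name, "->", cap.rating())   # 75% of criteria met -> "Developed"
```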
|
|
Women Veterans Healthcare Program Evaluation: Introduction and Background
|
| Stacy Garrett-Ray, Women Veterans Health Strategic Health Group, stacy.garrett-ray@va.gov
|
|
Stacy Garrett-Ray, MD, MPH, MBA, Deputy Director, Comprehensive Women's Health, Women Veterans Health Strategic Health Care Group, VHA, is the primary contact for the program evaluation efforts. She will provide the background of women Veterans' health care across the nation leading up to the program evaluation efforts, including a description of the issues and demographic changes that created the need for a national program evaluation of the care being provided to women Veterans. She will also identify the main objectives for the findings of the site visit data.
|
|
|
Women Veterans Healthcare Program Evaluation: Site Assessment Methodology and Approach to the Development of a Protocol Assessment Tool
|
| Deeanna Burleson, Booz Allen Hamilton, deeburleson@gmail.com
|
|
Deeanna Burleson, RN, BSN, MSN, is the project manager for the development and implementation of the site assessment methodology. The program evaluation methodology was developed by a group of subject matter experts from the VA, DoD, and HHS, together with consultants. The protocol assessment tool was created by identifying four critical components of a successful Women's Health Program: (1) Program; (2) Health Care Services; (3) Outreach, Communication and Collaboration; and (4) Patient Centered Care (PCC) / Patient Aligned Care Team (PACT). Once the core components were identified, each was associated with key capabilities. For each capability, assessment criteria were developed based on internal and external standards and regulations. Critical success factors, required for developed programs, were identified for each criterion. Each capability was scored on a four-point scale: highly developed, developed, being developed, and needs development. The assessment process includes pre-surveys, observation, interviews, document reviews, and case discussions.
| |
|
Women Veterans Healthcare Program Evaluation: Site Assessment Data Management
|
| Leilani Francisco, Booz Allen Hamilton, leilani.francisco@bah.com
|
|
Leilani Francisco, PhD, MA, is responsible for providing database support and for directing the random selection of sites, the creation of a database and scoring tool, and the migration of the database and tools to a web-based interface. She will describe the statistical methods used to select the sites in order to provide a representative sample that includes all types of VA Medical Centers. Sites were blindly selected on the basis of size, geographical location, and facility complexity. Database construction and data management will also be described.
| |
|
Session Title: Approaches to Biomedical Research and Development Portfolio Analysis: Examples From the National Institutes of Health
|
|
Panel Session 233 to be held in Malibu on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| James Corrigan, National Institutes of Health, corrigan@mail.nih.gov
|
| Abstract:
Analyses of complex portfolios of funded biomedical research are essential to program planning, progress monitoring, evaluation, and impact assessment. Such analyses can be based on existing data, newly developed data, or some combination of the two. In the absence of a single comprehensive method or data source, analyses commonly combine multiple methods and data sources to supply converging evidence on the nature and impact of funding agency R&D portfolios. Examples of portfolio analysis methods addressing the following questions will be presented:
- What role has our sponsored research played in recognized basic and clinical scientific advances?
- What major scientific advances might have been lost if our portfolio were reduced by various amounts?
- What new technologies (e.g., drugs, biomarkers) are linked to our funded portfolio?
- How might we expand our portfolio analyses with new data to more completely represent the universe of funded research in our subject area(s)?
|
|
Identifying the Role of Funded Research in Pivotal Cancer Research Advances
|
| Brian Zuckerman, Science and Technology Policy Institute, bzuckerm@ida.org
|
| James Corrigan, National Institutes of Health, corrigan@mail.nih.gov
|
| Seth Jonas, Science and Technology Policy Institute, sjonas@ida.org
|
| Lawrence Solomon, National Institutes of Health, solomonl@mail.nih.gov
|
|
Approaches to assessing longer-term outcomes of cancer research can include identifying the relative roles of various funders and funding mechanisms in those advances considered by experts to be pivotal. The meaning of "pivotal" in biomedical research includes both scientific importance, as measured by discoveries that form the basis for new lines of research, and clinical relevance. Two analyses focused on cancer research were conducted. In the first, the six special articles in the "Clinical Cancer Advances" series of the Journal of Clinical Oncology ("JCO", issues published in 2005-2010) were analyzed to determine sources of support for the "Major advances" (advances with potential to lead to decreases in mortality) and "Notable research" accomplished in the previous year. In the second, a 2006 Nature special article identifying 24 milestones in cancer research, intended to "highlight the most influential discoveries in the field of cancer over the past century," was analyzed.
|
|
|
Estimating the Impact of Hypothetical Portfolio Reductions on Production of Major Discoveries Funded by the National Institute of Allergy and Infectious Diseases
|
| Kevin Wright, National Institutes of Health, wrightk@mail.nih.gov
|
| Brandie Taylor, National Institutes of Health, taylorbr@mail.nih.gov
|
| Jamie Mihoko Doyle, Science and Technology Policy Institute, jdoyle@ida.org
|
| Brian Zuckerman, Science and Technology Policy Institute, bzuckerm@ida.org
|
|
There is a reasonable probability that Federal science funding in the United States is entering a period of budgetary stringency. While declining real science funding is certain to decrease the rate of scientific output, one question is whether there will be an impact on highly important and transformative scientific discoveries. One mechanism for estimating the potential impact of reduced real funding is scenario analysis that engages the counterfactual of what might have happened had successful grant applications not been funded, and identifies the subsequent discoveries that might not have come to fruition had fewer awards been available. To conduct this counterfactual analysis, the National Institute of Allergy and Infectious Diseases (NIAID), one of the largest Institutes at the NIH, sponsored an assessment of the relationship between the review scores of funded awards and major discoveries. Results from this analysis are presented along with accompanying assumptions and limitations.
| |
|
Using Multiple Methods and Data Sources to Analyze Complex Cancer Research Portfolios
|
| Joshua Schnell, Thomson Reuters, joshua.schnell@thomsonreuters.com
|
| Elizabeth Hsu, National Institutes of Health, hsuel@mail.nih.gov
|
| James Corrigan, National Institutes of Health, corrigan@mail.nih.gov
|
| Sandeep Patel, Thomson Reuters, sandeep.patel@thomsonreuters.com
|
| Lauren Taffe, Thomson Reuters, lauren.taffe@thomsonreuters.com
|
|
The National Cancer Institute's (NCI) Office of Science Planning and Assessment (OSPA) is a central resource for evaluation and assessment support to NCI programs. OSPA has explored multiple approaches to assessing NCI-sponsored projects and related outcomes. Methodology and findings will be reported from multiple projects assessing different research portfolios and subsequent outputs and impacts on the research enterprise and the development of health interventions. Studies presented will include: (1) an analysis of NCI's support of drug development using data from the US Food and Drug Administration (FDA) Orange Book, US Patent and Trademark Office patent data, and NIH project data; (2) an evaluation of NCI's support for the development of breast cancer-related biomarkers; and (3) an analysis that combines data from the International Cancer Research Portfolio (ICRP) and the Web of Science, providing insight into the challenges of evaluating a complex portfolio sponsored by different funders.
| |
|
Session Title: New Direction of Research and Development Performance Evaluation System in Korea
|
|
Multipaper Session 234 to be held in Manhattan on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Donghoon Oh, Korea Institute of Science & Technology Evaluation and Planning, smile@kistep.re.kr
|
| Abstract:
Over the last couple of decades, the performance of Korean Research & Development (R&D) has increased dramatically in terms of quantitative efficiency, such as the number of publications and patents. During that period, quantitative efficiency was the main perspective in the performance evaluation of national R&D in Korea, which helped improve national S&T competitiveness even though qualitative excellence has stalled. Therefore, for at least the next five years, the performance evaluation system of national R&D in Korea will shift from past quantitative values toward qualitative excellence, to be enforced under the '2nd R&D Performance Evaluation Basic Plan.' This session consists of four presentations covering the major issues of the R&D performance evaluation system in Korea, the evaluation system of Government-funded Research Institutes, and two evaluation practices.
|
|
New Direction of National Research and Development Evaluation System in Korea: From Quantity/Efficiency into Quality/Effectiveness
|
| Seung Jun Yoo, Korea Institute of Science & Technology Evaluation and Planning, biojun@kistep.re.kr
|
| ChangWhan Ma, National Science and Technology Commission, changma@nstc.go.kr
|
|
National R&D competitiveness can be achieved through advancing technology, built on optimized investment and excellent performance. Performance evaluation can act as either a promoter of or an inhibitor to better performance, and quantity and quality are usually difficult to reconcile, desirable as their compatibility would be. Having achieved quantitative efficiency in national R&D performance, which has contributed to science and technology competitiveness, Korea is now at a transition point: the aim is to move from quantitative efficiency to qualitative excellence and effectiveness, leading to more substantive competitiveness in science and technology. This presentation will introduce the action plan for achieving this main goal, the shift from quantity/efficiency to quality/effectiveness.
|
|
Evaluation System of Government-funded Research Institutes (GRIs) in Korea
|
| Woo Chul Chai, Korea Institute of Science & Technology Evaluation and Planning, wcchai@kistep.re.kr
|
|
With the increase in national investment in R&D, demand for accountability and effectiveness in national R&D spending has grown. This is especially true for government-funded research institutes (GRIs), which account for about 25% of national R&D funds, where the demand for excellent R&D performance through effective allocation of their R&D investment is increasing.
The purpose of this study is to review the history and current evaluation systems of GRIs in S&T areas, which evaluate both research results and management performance. It also shows how qualitative evaluation methods are applied to promote GRIs' R&D performance and the fulfillment of roles appropriate to the national S&T development strategy.
|
|
Conjoint Analysis for Contract Strategy for the Culture Technology Enhancement Program
|
| Yun Jong Kim, Korea Institute of Science & Technology Evaluation and Planning, yjkim@kistep.re.kr
|
| Uk Jung, Dongguk University, ukjung@dongguk.edu
|
|
This paper aims to calculate the relative importance of each attribute of an R&D program organization intended to promote Culture Technology enhancement. Conjoint analysis can be used for choice simulation and can determine the best design. A series of interviews was conducted to identify and derive critical attributes for R&D programs related to Culture Technology. A conjoint instrument was constructed using the identified attributes and administered to actual researchers. Research period, research expenses, type of industry-academia research collaboration, and type of multidisciplinary collaborative research were found to be the most highly valued attributes of Culture Technology R&D programs. We examine the preferences of various groups of researchers, including industry researchers, academic researchers, and researchers at research institutes, using conjoint analysis. The results are expected to provide valuable information for evaluators and planners of Culture Technology R&D programs.
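As a rough illustration of how relative attribute importance is often derived in conjoint analysis, the minimal sketch below computes each attribute's importance as the range of its part-worth utilities divided by the total range across attributes. The attribute names and utility values are hypothetical, and this standard textbook calculation is not necessarily the procedure the authors used.

```python
# Hypothetical part-worth utilities for a Culture Technology R&D program
# (illustrative values only; not study results).
part_worths = {
    "research period": {"1 year": -0.40, "2 years": 0.05, "3 years": 0.35},
    "research expenses": {"low": -0.55, "medium": 0.10, "high": 0.45},
    "industry-academia collaboration": {"none": -0.20, "joint lab": 0.20},
    "multidisciplinary collaboration": {"single field": -0.15, "multi-field": 0.15},
}

# Standard conjoint importance: an attribute's part-worth range relative to
# the sum of ranges across all attributes.
ranges = {attr: max(levels.values()) - min(levels.values())
          for attr, levels in part_worths.items()}
total = sum(ranges.values())
importance = {attr: r / total for attr, r in ranges.items()}

for attr, share in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{attr}: {share:.1%}")
```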
|
|
Performance Factors in Industry-University Collaboration
|
| Youngsoo Ryu, Korea Institute of Science & Technology Evaluation and Planning, soory@kistep.re.kr
|
| Hongbum Kim, Korea Institute of Science & Technology Evaluation and Planning, hbkim@kistep.re.kr
|
|
Industry-university collaboration in government-sponsored programs includes various activities between the two groups. This study empirically explores the factors that influence industry-university collaboration in government-sponsored programs. Multiple regression analysis and t-tests are conducted based on a survey. Some activities (resource sharing, communication, etc.) are expected to emerge as major factors. In addition, the findings are expected to reveal perceptual differences between the two groups regarding these factors.
|
|
Session Title: Using Evaluation Perspectives to Develop Organizational Capacity to Conduct Useful Student Learning Outcomes Assessment in Higher Education
|
|
Multipaper Session 235 to be held in Monterey on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Hallie Preskill, FSG Social Impact Consultants, hallie.preskill@fsg.org
|
| Discussant(s): |
| William Rickards, Alverno College, william.rickards@alverno.edu
|
| Abstract:
In the evaluation literature, an emphasis on utilization of evaluation has led to a great deal of theoretical and practical development, including the importation of a "learning organization" perspective. Higher education accreditation bodies call for universities to develop student learning outcomes assessment systems that are self-reflective and improvement-oriented, nurturing true learning organizations. However, many educators still experience the outcomes assessment process as a management-owned mandate, and genuine use of assessment findings is relatively rare. This multi-paper session examines how organizational capacity to engage in useful and meaningful student learning outcomes assessment can be tracked and developed, impacting assessment design, implementation, and utilization. Three presentations showcase empirical studies and capacity-building efforts across various program contexts: humanities departments, teacher education programs, and undergraduate general education. The session will conclude with suggestions for future research as well as recommendations for ways to facilitate assessment capacity-building in university settings.
|
|
Fostering Evaluative Thinking, Values, and Identity Through Outcomes Assessment in College Humanities Programs
|
| Yukiko Watanabe, University of Hawaii, Manoa, yukikow@hawaii.edu
|
|
This paper presents findings from a longitudinal multiple case study that explores human, contextual, and assessment factors that enhance or hinder (a) planning and implementing utilization-focused outcomes assessment, and (b) organizational assessment capacity development. Eight college humanities programs across two universities participated in the study. Faculty surveys, chair interviews, and meeting observations showed variability in leadership style, power and decision-making structures, perceived value of assessment, and assessment and evaluation expertise. Update reports and meeting recordings provided insights into value conflicts and assessment issues programs faced. Noteworthy factors influencing assessment capacity development were collaborative culture, disciplinary values and norms, the roles of assessment facilitators, near-peer role models, and program-external demands and opportunities. Readiness for outcomes assessment and capacity development capability had a reciprocal relationship with the way outcomes assessment was pursued and implemented. The presenter will also highlight emerging issues and questions for future research on outcomes assessment in higher education.
|
|
The Role of Evaluation Use in Defining Organizational Readiness for the Purposes of Improvement and Accountability in Teacher Education Accreditation
|
| Georgetta Myhlhousen-Leak, University of Iowa, gleakiowa@msn.com
|
|
This study reports types and factors of use that impact teacher education programs' readiness for and use of accreditation. An examination of the purposes of teacher education program review (i.e., state accreditation) as accountability and improvement sets the stage for recognizing "what" and "how" use occurs. Respondents from four programs completed structured interviews and six response scales detailing how use occurred in their program. Descriptions were coded and interpreted based on the known types and factors of use in the evaluation utilization literature. Findings of types of use revealed that the distinction of process and findings use provided a more comprehensive picture of readiness and use. Descriptions and scale results based on Alkin's (1985) factors of use provided evidence that each factor emerges with a unique impact. This study answers the call by Banta (2002) for research to identify "characteristics…that inhibit or facilitate use of assessment information" (p. 65).
|
|
Nurturing Readiness for a "Culture of Learning" for General Education
|
| John Stevenson, University of Rhode Island, jsteve@uri.edu
|
|
Here's a despairing question from a university administrator: "Other than pouring money we don't have into assessment, how can we create a culture that values the process?" This paper addresses that plea, applying empirical and theoretical ideas drawn from previous evaluation work on learning organizations. The specific focus is on general education, an orphan program with no necessary faculty community. Two case studies of public universities provide data on academic culture, stages in readiness for genuine assessment, and actions that foster utilization. These can be applied to analysis of practical progress and recommendations for enhancing the movement from an initial group of committed faculty who devise outcome objectives for general education to a learning organization that will be able to benefit from assessment findings on a sustainable basis. How can an evaluation perspective aid this process? What values and value conflicts must be understood to nurture such a learning community?
|
|
Session Title: Arts, Culture and Audiences: Prevalent Challenges and Evaluative Solutions
|
|
Multipaper Session 237 to be held in Palisades on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Evaluating the Arts and Culture TIG
|
| Chair(s): |
| Don Glass,
Boston College, donglass@gmail.com
|
|
Evaluating Partnership and Organisational Resilience
|
| Presenter(s):
|
| Annabel Jackson, Annabel Jackson Associates Ltd, ajataja@aol.com
|
| Abstract:
The Arts Council's visual arts strategy in the UK identified problems of fragmentation and a need to strengthen collaboration and resource sharing across the sector. A national network of regional visual arts partnerships was set up with evaluation at its core. The evaluation is of interest because it relates to organizational arts activity, rather than projects or educational activities, which are more commonly evaluated in the arts, and because the benefits to the 200 arts organizations are central. Evaluation was used to identify and resolve potential ambiguities or disagreements in the groups, to share learning and make links between groups. The evaluator measured intangibles such as partnership, synergy and resilience using stakeholder analysis, but also quantitative techniques such as Social Network Analysis. The overall approach strongly respected the values of the arts, e.g. the findings from Social Network Analysis were presented as images showing the development of linkages over time.
|
|
Measuring Excellence, Innovation, and Livability Resulting from the National Endowment for the Arts' Grant Awards
|
| Presenter(s):
|
| Patricia Moore Shaffer, National Endowment for the Arts, shafferp@arts.gov
|
| Abstract:
In its FY 2012-2016 Strategic Plan, the National Endowment for the Arts (NEA) articulated an approach to measure the outcomes of its grant-making activities through performance measurement and program evaluation. NEA strategic performance measures seek to collect national data on the desired outcomes of NEA grants, including the creation of art that meets the highest standards of excellence and the engagement of the American public with diverse and excellent art. New data-collection strategies will include post-award reviews for excellence and innovation. The NEA also will conduct national surveys of grantee audiences as a way to determine how and to what degree those audiences have been affected by their arts experiences. Metrics development and program evaluation, especially in the area of livability, informed development of the NEA's performance measurement plan. This full-paper session will present an overview of the NEA's strategic framework and performance measurement plan.
|
|
Evaluating Social and Emotional Development Through ENACT Theater Workshops: Context, Conditions, Process and Outcomes
|
| Presenter(s):
|
| Robert Horowitz, Columbia University, artsresearch@aol.com
|
| Abstract:
During the 2009-2010 school year, we evaluated ENACT, a New York City theater-based program for adolescent and pre-adolescent students. We chose a mixed-method approach, using extensive qualitative methods and statistical approaches based upon the qualitative data. Through an extensive initial qualitative study, involving coding of interview transcripts and observational description, we identified 22 program implementation and student behavior variables. The analysis was used to develop an observational assessment strategy that would be unobtrusive, yet provide detailed quantitative and qualitative data. We found that we were better able to communicate our methods, analysis and findings to the different stakeholders by categorizing variables and focus within four areas: context, conditions, process and outcomes. The presentation will explore evaluation challenges and solutions, and include the overall findings demonstrating the relationship between ENACT programming and social and emotional learning.
|
| | |
|
Session Title: Using Social Network Analysis to Understand and Enhance Collaboration in Community Coalitions and Inter-Agency Initiatives
|
|
Multipaper Session 238 to be held in Palos Verdes A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Social Network Analysis TIG
|
| Chair(s): |
| Marah Moore, i2i Institute, marah@i2i-institute.com
|
| Discussant(s): |
| Beth Steenwyk, Michigan's Integrated Improvement Initiatives, beth.a.steenwyk@mac.com
|
| Debra Heath, Albuquerque Public Schools, heath_d@aps.edu
|
| Abstract:
Social network analysis is becoming an increasingly popular tool for describing and developing dynamic systems such as coalitions and inter-agency initiatives. Social network analysis examines how informal relationships and interactions between people and organizations influence key organizational behaviors, such as adoption of innovation, change management, information and resource sharing, effective collaboration, and shared purposes and goals. Panel members will present and discuss two examples of collaborative evaluations that used social network analysis in educational contexts, a School District and a State level Special Education Initiative, and the development of an online network analysis data collection tool. The focus of the panel discussion will be on the collaborative processes used in the examples presented from both evaluator and program staff perspectives, and advantages and challenges associated with this evaluation method.
|
|
Using Network Analysis to Understand and Improve Collaboration Among Michigan's Integrated Improvement Initiatives and Center for Educational Networking
|
| Jan Vanslyke, JVS Evaluation Inc, jan@jvseval.com
|
| David Merves, Evergreen Evaluation and Consulting, david@evergreenevaluation.net
|
|
As part of a collaborative mixed-method evaluation of Michigan's Integrated Improvement Initiatives and Center for Educational Networking (MI3-CEN), network analysis was used to characterize current patterns of collaboration in shared function areas among key network partners, and to identify opportunities for strategic development. Evaluation goals and methods were collaboratively identified by MI3-CEN network staff and evaluators. Panel participants will discuss this collaborative process and results of the network analysis from both an evaluator and user perspective.
|
|
Internet-Facilitated Seamless Data Transfer from Respondent Input-to-UCINET Social Networking Analysis Software
|
| James Frasier, University of Wisconsin-Madison, jfrasier@education.wisc.edu
|
|
Session attendees will learn how Wisconsin is evaluating the intensity of collaboration among geographically dispersed professionals responsible for implementing personnel development activities. Attendees will receive a formal paper describing how the evaluator designed, developed, and implemented internet-facilitated software to seamlessly collect, compile, and transfer survey respondent data to the UCINET social network analysis tool. Dr. Frasier is a Senior Researcher at the Center on Education and Work at the University of Wisconsin-Madison and is the Lead Evaluator for Wisconsin's annual $1.4 million Office of Special Education Programs State Personnel Development Grant (2002-present).
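To give a flavor of the kind of data handoff described above, here is a minimal sketch that writes survey-reported collaboration ties as a UCINET DL full-matrix file. The respondent names, tie values, and file name are hypothetical, and this is not the software described in the paper; it only assumes UCINET's standard DL import format.

```python
# Hypothetical directed ties: respondent -> {colleague: collaboration intensity 0-3}
ties = {
    "Allen": {"Baker": 3, "Cruz": 1},
    "Baker": {"Allen": 2},
    "Cruz": {"Allen": 1, "Baker": 0},
}

def write_dl_fullmatrix(ties, path):
    """Write a square adjacency matrix in UCINET's DL full-matrix format."""
    actors = sorted(set(ties) | {a for row in ties.values() for a in row})
    with open(path, "w") as f:
        f.write(f"DL N={len(actors)}\n")
        f.write("FORMAT = FULLMATRIX\n")
        f.write("LABELS:\n")
        f.write(",".join(actors) + "\n")
        f.write("DATA:\n")
        for src in actors:
            row = [str(ties.get(src, {}).get(dst, 0)) for dst in actors]
            f.write(" ".join(row) + "\n")

write_dl_fullmatrix(ties, "collaboration.dl")  # file can then be imported into UCINET
```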
|
|
Considerations in the use of Network Analysis as a Tool for Developing and Examining Changes in Community Coalitions and Other Dynamic Systems
|
| Jeni Cross, Colorado State University, jeni.cross@colostate.edu
|
|
Network analysis is an exciting tool community groups, coalition members, and evaluators can use to understand and improve inter-agency collaboration. Tools such as network analysis can provide a means for measuring dynamic systems and aid in the adoption of innovation, development of community coalitions, and the examination of changes in grant-funded inter-agency networks. An example of network analysis with a Safe Schools/Healthy Students Initiative to examine changes in interagency collaboration over time will be presented to illustrate advantages and challenges associated with this evaluation method.
|
|
Session Title: Strategic Learning: Who is Qualified to do it Effectively?
|
|
Think Tank Session 239 to be held in Palos Verdes B on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Presenter(s):
|
| Julia Coffman, Center for Evaluation Innovation, jcoffman@evaluationexchange.org
|
| Discussant(s):
|
| Tanya Beer, Center for Evaluation Innovation, tbeer@evaluationinnovation.org
|
| Ehren Reed, Innovation Network, ereed@innonet.org
|
| Abstract:
Increasingly, evaluators are being asked to adopt strategic learning approaches to evaluation that help nonprofits and foundations learn in real-time and adapt their strategies to the changing circumstances around them. Strategic learning is different from more traditional approaches to evaluation in some important ways. For example, rather than remain deliberately separate from a program or strategy, evaluators become embedded partners that participate in strategy discussions and help organizations make data-based assessments about their future direction. This trend has led to some important questions about the kinds of preparation and experiences evaluators need in order to use a strategic learning approach effectively. This think tank session will discuss the kinds of knowledge and skills evaluators need to facilitate strategic learning, and how to increase the supply of evaluators who are trained and prepared to work in this new way. An introductory presentation based on recent research will kick off the discussion.
|
|
Session Title: The Value of Evaluation Input for Strategic Planning in Extension
|
|
Multipaper Session 240 to be held in Redondo on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Chair(s): |
| Teresa McCoy,
University of Maryland Extension, tmccoy1@umd.edu
|
|
Gathering Stakeholder Input to Redefine the County Extension Delivery System
|
| Presenter(s):
|
| Nick Fuhrman, University of Georgia, fuhrman@uga.edu
|
| Abstract:
As state Cooperative Extension units tighten their financial belts to compensate for reductions in funding, determining the value of programs and justifying the use of resources through evaluation becomes an even more relevant practice. Some Extension organizations have begun involving their own employees and other key stakeholders in determining the most viable and realistic options for redefining the county Extension delivery system. The purpose of this presentation is to share the process used by Georgia Cooperative Extension to gather employee and stakeholder input. Extension administrators first identified a team of key informants to determine the most appropriate data collection methods and questions. An online questionnaire was developed and sent to every Extension employee in Georgia and a 90% response rate was achieved. Responses were then presented to employees during district-level listening sessions. Finally, snowball sampling was used to identify and interview legislators and commodity group officials for additional input.
|
|
Innovation and Diversity in Extension Long-Range Planning: A Case Study
|
| Presenter(s):
|
| Diane Craig, University of Florida, ddcraig@ufl.edu
|
| Abstract:
Universities, like business and industry, are affected by an increasingly complex and global world. To stay relevant it is necessary for leaders to utilize open innovation strategies that lead to successful community engagement. This presentation outlines a strategic planning process that incorporates diversity and technology to engage citizens and faculty.
|
|
Extension's Evolving Alignment of Programs Serving Youth and Families: Organizational Change and its Implications for Evaluators
|
| Presenter(s):
|
| Marc Braverman, Oregon State University, marc.braverman@oregonstate.edu
|
| Nancy Franz, Iowa State University, nfranz@iastate.edu
|
| Roger Rennekamp, Oregon State University, roger.rennekamp@oregonstate.edu
|
| Abstract:
How can evaluation contribute to successful organizational change? A growing trend within Extension at the state and federal levels is a closer integration between its programs serving youth (primarily 4-H Youth Development) and families (primarily Family and Consumer Sciences). This presentation will examine the scope and reasons for the trend, and focus on the implications for Extension evaluation. Evaluation can facilitate closer interdependence between formerly separate programs in several significant ways. First, through the creation of more inclusive program logic models, it can promote broader conceptualization of program operation and outcomes, and reduce program redundancy. Second, it can engage a wider group of program stakeholders and help them view the evolving programs in less traditional ways. Third, evaluators can guide the organization's assessment of the success of the changes. Specific recommendations will address how evaluators can help Extension adjust purposefully to its new organizational configurations while maintaining high program effectiveness.
|
| | |
|
Session Title: Faculty Development in Assessment in Higher Education
|
|
Multipaper Session 241 to be held in Salinas on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Juny Montoya,
University of Los Andes, jmontoya@uniandes.edu.co
|
| Audrey Rorrer,
University of North Carolina, Charlotte, arorrer@uncc.edu
|
|
Assessing Assessment at the University of Los Andes
|
| Presenter(s):
|
| Juny Montoya, Universidad de Los Andes, jmontoya@uniandes.edu.co
|
| Abstract:
This paper presents a first attempt to systematize the experience of designing and implementing a strategy called "assessing assessment." The strategy is oriented toward fostering reflection among faculty about their teaching practices and gathering information useful for the evaluation system of educational program effectiveness at Los Andes University. This paper summarizes the pilot studies developed during the last three semesters at Los Andes and the analysis of the preliminary results. Partial results show that this strategy is useful for promoting reflection among faculty about their teaching practices and the design of their courses. Some questions remain regarding the usefulness of the instrument for summative evaluation purposes; the degree to which learning objectives are being achieved in the courses analyzed also remains uncertain.
|
|
Evaluating the Teaching and Use of Information Literacy Skills by Educators
|
| Presenter(s):
|
| Jill Hendrickson Lohmeier, University of Massachusetts, Lowell, jill_lohmeier@uml.edu
|
| Patricia Fontaine, University of Massachusetts, Lowell, patricia_fontaine@uml.edu
|
| Abstract:
This session will discuss the evaluation methods used and the findings from an evaluation of the teaching of information literacy skills for undergraduate education minors, initial teacher certification students and education doctoral students. We conducted faculty focus groups regarding faculty perceptions of their students' information literacy needs and skills. In addition, student focus groups and other data were collected in order to "track" what information literacy skills education students learn in their education classes and either teach with, or teach to their students when they do their student teaching. Although there is general agreement on the importance of information literacy skills, understanding exactly what they are and how to teach them is not always apparent to educators; thus evaluating the effectiveness of teaching these skills is not always straightforward.
|
|
Aligning Curricular Planning, Teaching, and Program Evaluation to Facilitate Educational Program Improvement
|
| Presenter(s):
|
| Kathleen Bolland, University of Alabama, kbolland@sw.ua.edu
|
| Javonda Williams, University of Alabama, jwilliams11@sw.ua.edu
|
| Abstract:
When regional and disciplinary accrediting bodies focus on student learning outcomes or competencies and include mandates to base program improvement on evidence, higher education faculty may need to examine their evaluation/assessment plans as well as their programs. To facilitate program improvement based on evidence, it is sensible to align curricular planning, teaching, and evaluation/assessment. Unfortunately, many faculty members have little background in evaluation/assessment (Kuh & Ikenberry, 2009). Further, although literature abounds on curriculum development, instruction, and program evaluation, little advice has been published on aligning the three. We will describe how our faculty accomplished this alignment. We will discuss and provide examples of several tools that can help provide direction and consistency to faculty about to embark on the journey of aligning improvement efforts to evaluation/assessment and evaluation/assessment to student learning outcomes or competencies. Although we focus on social work, the process can be applied in any educational program.
|
| | |
|
Session Title: Becoming Culturally Responsive Evaluators: Graduate Education Diversity Internship (GEDI) Intern Reflections on the Challenges of Valuing Culture in Evaluation Practice
|
|
Multipaper Session 242 to be held in San Clemente on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Chair(s): |
| Michelle Jay, University of South Carolina, mjay@sc.edu
|
| Discussant(s): |
| Rita O'Sullivan, University of North Carolina, Chapel Hill, rjosull@mindspring.com
|
| Abstract:
In this multi-paper session, members of the 7th cohort of the AEA Graduate Education Diversity Internship (GEDI) Program reflect on the challenges and successes they experienced in incorporating tenets of culturally responsive evaluation (CRE) into their evaluation internship projects. As emerging evaluators committed to CRE, which "recognizes that demographic, sociopolitical, and contextual dimensions, locations, perspectives, and characteristics of culture matter fundamentally in evaluation," (Hopson, 2009, p.431) the interns believe that valuing culture, context, and responsiveness is critical to the evaluative endeavor. The presenters will thus discuss their personal attempts to "value" culture, context and responsiveness in their respective internship projects. In so doing, they offer insights regarding the multiple contextual factors within their internship site, and within the programs being evaluated, that influenced their ability to translate culturally responsive evaluation theory into practice.
|
|
Sexual Education Needs Assessment for College Women of Color
|
| Tamara Williams Van Horn, University of Colorado Boulder, tamara.williams@colorado.edu
|
|
Community Health at the University of Colorado-Boulder is committed to changing its sexual education content to provide a multi-stage interactive program that would continue throughout a student's career at CU-Boulder. The staff is aware that underrepresented populations may be better served by culturally relevant information, and that current staff and structures may not provide such information to its diverse constituents. At the same time, there has been little research on the sexual education needs of female students of color at a predominantly white university. In this paper, I will reflect on the role of culturally responsive evaluation as it pertains to my efforts to conduct a needs assessment for Community Health staff in order to provide them with information that will inform future changes in their sexual education programming. The needs assessment specifically addressed whether Community Health's sexual education content is relevant to women of color.
|
|
Starting Right: Adaptation of a Teen Pregnancy/Sexually Transmitted Infection (STI) Prevention Program Using Culturally Responsive Evaluation Approaches
|
| Ebun Odeneye, University of Texas, Houston, ebun.o.odeneye@uth.tmc.edu
|
| Ross Shegog, University of Texas, Houston, ross.shegog@uth.tmc.edu
|
| Stephanie Craig-Rushing, Northwest Portland Area Indian Health Board, scraig@npaihb.org
|
| Cornelia Jessen, Alaska Native Tribal Health Consortium, cmjessen@anthc.org
|
| Christine Markham, University of Texas, Houston, christine.markham@uth.tmc.edu
|
| Gwenda Gorman, Inter Tribal Council of Arizona Inc, gwenda.gorman@itcaonline.com
|
|
Culturally responsive evaluation (CRE) involves acknowledging the cultural context in which a program is planned, implemented, and evaluated in seeking truth regarding the program's effectiveness. The University of Texas Prevention Research Center is collaborating with several tribal organizations to adapt and evaluate an evidence-based teen pregnancy/STI prevention program for middle-school students ("It's Your Game, Keep it Real", IYG) for American Indian/Alaska Native (AI/AN) youth in the Alaska, Pacific Northwest, and Arizona regions. This multi-site endeavor is grounded in CRE's culture, context, and responsiveness framework in that its first phase focuses on capturing and understanding the cultural context and identifying how to respond to it appropriately in program design and implementation. This formative evaluation will involve focus group discussions, key informant interviews, and usability testing of the program components with youth and adults in the community. The resulting data will aid in tailoring IYG to meet the needs of AI/AN youth.
|
|
Process Evaluation of Three Neighborhood Action Teams
|
| Alison Mendoza, University of North Carolina, Chapel Hill, a.mendoza.215@gmail.com
|
|
The Y-USA Educational Achievement Initiative is an effort for branch YMCAs to lead the formation of Neighborhood Action Teams (NATs). These NATs are charged with identifying barriers to educational achievement and designing and implementing a neighborhood action plan to address those barriers. Members of the NATs include representatives from area non-profits, local businesses, school administrators, teachers, parents, and students. The focus of the evaluation of this initiative is to document the coalition-building process. The initiative is currently being piloted in three sites: Minneapolis, MN; Springfield, MA; and Pittsburgh, PA. The National YMCA is gathering best practices from these pilot sites in order to expand the initiative in the future. In this paper, I will reflect on the role of culturally responsive and collaborative evaluation as it pertained to my responsibilities to build relationships with stakeholders, collect and compile meeting documentation, and communicate with site leaders.
|
|
A Culturally Responsive Evaluation of the HNF (High Need Family) Program: Is it in There?
|
| Christopher St Vil, Howard University, stvil2002@yahoo.com
|
|
WFF's High Need Family (HNF) program is a permanent supportive housing program for homeless families who are experiencing multiple barriers, including mental illness, chemical dependency, domestic violence, trauma, dislocation, HIV/AIDS or other chronic illness, child protective service involvement, and/or criminal history. Begun in 2007, the model provides families with permanent supportive housing, on-site services, intensive strengths-based case management, weekly service contact, cross-provider coordination, referrals to services, and flexible funds to meet family needs for as long as needed. Westat was contracted to develop an effective screening and assessment process and to conduct an evaluation that will help assure that program decisions are made in a manner that allows for a more definitive understanding of the effectiveness of supportive housing for high-need families. In this paper, I will reflect on the role of culturally responsive evaluation as it has pertained to my involvement in the evaluation of this initiative.
|
Evaluation With Focus Groups: Working With What You Have
| Ciara Zachary, Johns Hopkins University, ciara.zachary@gmail.com
|
|
This paper will examine the use of focus groups as an evaluation tool to examine intermediate outcomes for Elev8 Baltimore. The program's evaluation goals include informing and improving Elev8's efforts at national sites in addition to informing local strategies to help increase the likelihood of meeting and surpassing Elev8 Baltimore's aims. The evaluation plan calls for several data collection components, including gathering quantitative school and program data as well as qualitative focus groups and interviews. Focus groups were conducted with parents, caregivers, and students to gain insight into family and student beliefs, attitudes, perceptions, and opinions concerning Elev8. While focus groups can be advantageous in that group dynamics can yield information that makes Elev8 staff and researchers aware of ways to improve the effectiveness of the project, I will address the challenges of conducting focus groups within this particular community with regard to communication, resources, and time constraints.
|
|
|
Session Title: Empowerment Evaluations: Insights, Reflections, and Implications
|
|
Multipaper Session 243 to be held in San Simeon A on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Corina Owens,
Battelle, cmowens@me.com
|
|
Human Immunodeficiency Virus (HIV)/AIDS Non-Governmental Organisations' Involvement in Evaluation and Effectiveness of Interventions in Africa: Does a Link Exist?
|
| Presenter(s):
|
| Maurice T Agonnoude, University of Montreal, amaurte@yahoo.fr
|
| Francois Champagne, University of Montreal, francois.champagne@umontreal.ca
|
| Nicole Leduc, University of Montreal, nicole.leduc@umontreal.ca
|
| Abstract:
The involvement of NGOs and civil society in the fight against the HIV/AIDS pandemic has increased in Africa in recent years. Even though their role is well appreciated, their actions are often perceived as inefficient because of a lack of monitoring and evaluation capacity. Moreover, involvement in monitoring and evaluation can support continuous improvement of their activities. This paper seeks to understand that link. Objective: analyze the influence of local NGOs' involvement in evaluation on the effectiveness of their interventions. Method: comparative synthesis research of the structural modeling type covering roughly one hundred NGOs, with two means of data collection: a questionnaire for NGO executives (one per NGO) and a client survey of an accidental sample of 75 to 100 clients per NGO, yielding a client-satisfaction evaluation of NGO services. Outcomes: an estimate of the share of continuous improvement in activities linked to involvement in evaluation versus other factors (funders' expectations, program resources, and NGO location).
|
|
Is Empowerment Evaluation Empowering? Strategies to Promote and Measure Empowerment Within Adult and Youth Programs
|
| Presenter(s):
|
| Krista Collins, Claremont Graduate University, krista.collins@cgu.edu
|
| Abstract:
Critics of empowerment evaluation have consistently argued that to separate the fields of participatory and empowerment evaluation, there is a need to empirically validate the relationship between evaluation activities and empowerment outcomes (Patton, 1997). While evaluators have identified the current practices designed to promote empowerment (Miller & Campbell, 2006), these processes represent only a portion of what psychological researchers have deemed necessary to facilitate empowerment. Additionally, there is a need to validate the types of outcomes expected of empowerment evaluation designs, and identify the different outcomes that exist for adult and youth participants. The purpose of this presentation is to (1) examine the processes and outcomes unique to participatory and empowerment approaches based on evaluation literature, (2) explore the contributions that psychological research on empowerment theory can provide to evaluation practice, and (3) discuss the implications of these findings for evaluators.
|
|
Use of the Empowerment Evaluation Framework as a Strategy for Promoting Community-Based Participatory Action Research in Statewide Violence Prevention Research Effort
|
| Presenter(s):
|
| Lea Hegge, University of Kentucky, lea.hegge@uky.edu
|
| Patricia Cook-Craig, University of Kentucky, patty.cook@uky.edu
|
| Abstract:
One key challenge facing researchers and community practitioners in violence prevention work is how to collaborate in research projects that produce evidence-based strategies. One approach that aims to address difficulties experienced by both fields when trying to combine expertise and design rigorous research projects is empowerment evaluation (EE). This presentation will demonstrate one state's use of EE principles to build a community-based participatory research (CBPR) project in which 13 rape crisis centers and university partners worked together to design and fund a 26 school randomized control trial study of a violence prevention strategy. Qualitative and quantitative process evaluation data related to project design and implementation will be presented. Successes and barriers will be reviewed. Engagement in CBPR can be enhanced by the use of EE principles when community partners shift their orientation to research, university partners shift their process to a community-driven needs perspective, and both groups share expertise.
|
| | |
|
Session Title: Advanced Analytic Techniques in Educational Evaluation
|
|
Multipaper Session 244 to be held in San Simeon B on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Deborah Carran,
Johns Hopkins University, dtcarran@jhu.edu
|
| Discussant(s): |
| Xiaoxia Newton,
University of California, Berkeley, xnewton@berkeley.edu
|
|
Best Practices in the Analysis of Longitudinal Survey Data for K-12 Evaluations
|
| Presenter(s):
|
| Megan Townsend, North Carolina State University, mbcarpen@ncsu.edu
|
| Melinda Mollette, North Carolina State University, melinda_mollette@ncsu.edu
|
| Dina Walker-DeVose, North Carolina State University, dcwalker@ncsu.edu
|
| Jeni Corn, North Carolina State University, jeni_corn@ncsu.edu
|
| Abstract:
The need to track changes in teacher and student variables means that sharing information about reliable quantitative measures and methodologies that may be used in a variety of contexts is increasingly important. A common challenge faced by evaluators is the inability to locate reliable surveys to measure outcomes of interest. This session presents three surveys used in the evaluation of teachers and students participating in IMPACT, a media and technology program in North Carolina. Each of these empirically-validated, selected-response surveys measures changes in technology skills. Presenters will also focus on some of the more significant methodological challenges experienced during the IMPACT evaluations, including issues related to conducting evaluations across multiple school levels and administering online surveys repeatedly over four years. The discussion will highlight strategies used to overcome these challenges.
|
|
Reviewing Systematic Reviews: Meta-Analysis of What Works Clearinghouse Computer-Assisted Interventions
|
| Presenter(s):
|
| Andrei Streke, Mathematica Policy Research, astreke@mathematica-mpr.com
|
| Tsze Chan, American Institutes for Research, tchan@air.org
|
| Abstract:
The What Works Clearinghouse (WWC) offers reviews of evidence on broad topics in education, identifies interventions shown by rigorous research to be effective, and develops targeted reviews of interventions. This paper systematically reviews research on the achievement outcomes of computer-assisted interventions that have met WWC evidence standards (with or without reservations). Computer-assisted learning programs have become increasingly popular as an alternative to traditional teacher-student interaction for improving student performance on various topics. The paper systematically reviews computer-assisted programs featured in reading topic areas. This work updates previous work by the authors, includes new and updated WWC intervention reports released since September 2010, and investigates which program and student characteristics are associated with the most positive outcomes.
|
|
Spatial Methods are Key to Understanding Educational Phenomena
|
| Presenter(s):
|
| Kristina Mycek, University at Albany, km1042@albany.edu
|
| Susan Rogers, State University of New York, Sullivan, susan.roger.edu@gmail.com
|
| Abstract:
As evaluators are asked to assess an ever-widening variety of programs, it becomes increasingly important to accurately determine the root causes of an outcome. To accomplish this, spatial methods are being employed by a growing number of educational evaluators. This study explores the importance of spatial methods in education using an international dataset (PISA).
|
|
Formative Uses of Value-Added Approach for Identifying Best Instructional Practices and Modifying Implementation of Professional Development
|
| Presenter(s):
|
| Chi-Keung Chan, Minneapolis Public Schools, alex.chan@mpls.k12.mn.us
|
| Paul Hegre, Minneapolis Public Schools, paul.hegre@mpls.k12.mn.us
|
| David Heistad, Minneapolis Public Schools, david.heistad@mpls.k12.mn.us
|
| Abstract:
This study moves beyond the restrictive summative use of the value-added approach and illustrates its formative uses for identifying best instructional practices and modifying professional development (PD) implementation. Using two years of Teacher Advancement Program (TAP) data collected in a Midwestern urban school district, the authors first linked teachers' value-added results to their degree-of-implementation observation scores. The authors then adopted a quadrant visualization approach to classify teachers into four categories: (1) high implementation, high value-added (HIHV); (2) high implementation, low value-added (HILV); (3) low implementation, high value-added (LIHV); and (4) low implementation, low value-added (LILV). Teachers in the HIHV category are exemplars of best instructional practices, while teachers in the LILV category need more intensive PD support. In-depth examination of teachers in the HILV and LIHV categories adds understanding of the consistencies between instructional practices and student learning, contributing valuable knowledge for modifying the PD.
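The quadrant classification described in this abstract can be illustrated with a small sketch. This is a hypothetical example rather than the authors' implementation: the teacher identifiers, scores, and the use of median cut points are assumptions made purely for illustration.

```python
from statistics import median

# Hypothetical records: (teacher id, implementation observation score, value-added estimate)
teachers = [
    ("T01", 4.2, 0.35),
    ("T02", 3.1, -0.10),
    ("T03", 4.5, -0.22),
    ("T04", 2.8, 0.41),
]

# Assumed cut points: the median of each measure splits "high" from "low".
impl_cut = median(score for _, score, _ in teachers)
va_cut = median(va for _, _, va in teachers)

def quadrant(impl, va):
    """Return one of the four implementation / value-added categories."""
    high_impl = impl >= impl_cut
    high_va = va >= va_cut
    if high_impl and high_va:
        return "HIHV"   # exemplars of best instructional practice
    if high_impl:
        return "HILV"
    if high_va:
        return "LIHV"
    return "LILV"       # candidates for more intensive PD support

for tid, impl, va in teachers:
    print(tid, quadrant(impl, va))
```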
|
| | | |
| In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Evaluation as a GPS Navigating the Land of Confusion: Multiple Interventions, Grants, and Evaluators at One School District |
|
Roundtable Presentation 245 to be held in Santa Barbara on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Evaluation Use TIG
|
| Presenter(s):
|
| Rachael Lawrence, University of Massachusetts, Amherst, rachaellawrence@ymail.com
|
| Sharon Rallis, University of Massachusetts, Amherst, sharonrallis@earthlink.net
|
| Abstract:
Urban school districts often use grant-writing to support their programs, and heavy reliance on grants encourages these districts to have multiple, simultaneous interventions, with overlapping territories. This multi-grant system causes program drift and almost guarantees that multiple evaluators will be working—surveying and interviewing—among the same participants. We are evaluating a long term, large national grant with an urban school district; one of the intended program outcomes is 'alignment' of the various grants and interventions within the district. While evaluating, we are witness to the confusion that arises when multiple interventions and evaluators are mixed. We ask: How effective is the project in promoting program and systems alignment? How can multiple evaluators work in the same setting without confusing the efforts and still respect participants' time? Additionally, how can evaluation be used as a navigation tool to guide the multiple stakeholders through the confusion of interventions, grantors, and evaluators?
|
| Roundtable Rotation II:
Can We Talk?: Strategies for Using Evaluation as a Vehicle for Greater P-20 Partnership Cooperation and Collaboration |
|
Roundtable Presentation 245 to be held in Santa Barbara on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Evaluation Use TIG
|
| Presenter(s):
|
| Dewayne Morgan, University System of Maryland, dmorgan@usmd.edu
|
| Felicia Martin, Prince George's County Public School, felicia.martin@pgcps.org
|
| Erin Knepler, University System of Maryland, eknepler@usmd.edu
|
| Abstract:
This roundtable presentation will engage participants in a discussion about the opportunities associated with using evaluation as a mode for improving partnership collaboration. Presenters include the evaluator for the project as well as school and higher education managers. Presenters will use their diverse set of experiences and qualifications to offer examples for making evaluation examinations and findings relevant to broader education policy and practice, while attending to the expectations of their constituencies.
|
|
Session Title: Getting Down to Cases: Evaluation Results and Decisions in Situ
|
|
Panel Session 246 to be held in Santa Monica on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Lee Sechrest, University of Arizona, sechrest@u.arizona.edu
|
| Abstract:
Results of evaluations are almost always generic: a program "works" (or doesn't), an intervention "is effective" (or isn't). Decisions, on the other hand, are specific: they are made by this practitioner or that one, his administrator or another one; they are made with respect to this particular school or classroom, this particular client, and so on. The process of getting from generic results to particular decisions is not necessarily straightforward. This panel will consider the basis in generic "evidence" for decisions about particular cases, including the specific hypotheses said to have been tested, the specific research questions addressed, the effect sizes claimed, and the implied confidence about the evidence. Frequentist and Bayesian approaches will be examined for differences they may reveal about individual decisions, and the bases for modifying decisions about individual cases, including risks, will be examined.
|
|
Are Evaluation Results to be Taken Seriously?
|
| Sally Olderbak, University of Arizona, sallyo@u.arizona.edu
|
| Lee Sechrest, University of Arizona, sechrest@u.arizona.edu
|
|
When a program is said to be "effective," that usually means that in at least a fairly large group of persons at least somewhat homogeneous in at least some ways, those persons exposed to the program were in some ways at least somewhat better off than those persons not exposed to the program.
Such assurance of effectiveness may not be sufficient to persuade local administrators to adopt the program in their particular locale, to persuade an individual practitioner to apply the intervention to his or her clients, or to persuade an individual client to submit to the intervention. Program interventions may seem alien, but they may also seem irrelevant or inappropriate, or the results may seem ephemeral. It may also be that the intervention itself seems unfeasible in the particular instance. These reasons for exceptions in the use of otherwise promising interventions are illustrated with specific examples.
|
|
|
The Shortcomings of P Values
|
| J Michael Menke, Critical Path Institute, menke@u.arizona.edu
|
|
Usual approaches to assessing effects of programs are based on "frequentist" thinking and methods and are exemplified by null hypothesis statistical testing. Even if a program is found to have significant positive effects, those effects may not apply generally in the population involved; in fact, the effects may not apply to more than a small proportion of the sample and, presumably, the population. It is often unclear, usually because the question is not even raised, whether results mean that a few people were helped a lot, many helped moderately, or most helped a little. Thus, program evaluation results may not provide a basis for any clear decision about whether, let alone how, the results should be used. And, unless the evaluation provides estimates of effects for segments of the sample, no basis will exist for decisions better tuned to local circumstances. Improvements in ways of reporting results are possible and recommended.
| |
|
Can We Cover the Shortcomings of P With a Bayes Leaf?
|
| Kirsten Yaffe, University of Arizona, yaffe@u.arizona.edu
|
|
Bayesian approaches to program evaluation are often recommended as a way of improving our inferences and, presumably, our generalizations. Bayesian methods and analyses are aimed at capitalizing on prior knowledge (and expectations) about phenomena in order to improve estimates of the confidence that can be placed in hypotheses after the data are in. Presumably, if those estimates are better, then confidence in the generalizability of findings should be improved. That is not necessarily the case, however. Bayesian analyses may provide a better estimate of confidence in a hypothesis, but that hypothesis may still seem irrelevant or inappropriate in situ. In fact, vagueness about just which hypothesis confidence may be placed in, and what further conclusions that confidence may justify, seems to characterize a good many Bayesian explanations of empirical findings. But as with frequentist approaches, improvements in ways of thinking about Bayesian presentations of results are possible and can be recommended.
| |
|
Decision Making in the Face of Uncertainty
|
| Lee Sechrest, University of Arizona, sechrest@u.arizona.edu
|
|
The controversy over "clinical vs. actuarial" decision making has a long history, and, although it has not been completely resolved, the preponderance of the evidence seems to favor the actuarial argument. Still, there must be some cases and some circumstances in which practitioners (and their clients) may reasonably "go against the evidence." In fact, discussions of evaluation and other research results very frequently include, or conclude with, assurances that practitioners and clients must consider the evidence carefully and make their own decisions. Still, unless there is uncertainty, there is no need for decision making. Careful reading of the literature, along with equally careful consideration of "the evidence," suggests that practitioners and clients may indeed make decisions that are against, or beyond, the evidence, but they need to have very good reasons for doing so. Harder thinking about the choices and their bases might help.
| |
|
Session Title: Three's Company: Results and Lessons Learned Through a Collaboration Among Funder, Grantee, and Evaluator to Establish Targets and Measure Child Progress and Parent Engagement
|
|
Multipaper Session 247 to be held in Sunset on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Human Services Evaluation TIG
and the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Emily Moiduddin, Mathematica Policy Research, emoiduddin@mathematica-mpr.com
|
| Discussant(s): |
| Emily Moiduddin, Mathematica Policy Research, emoiduddin@mathematica-mpr.com
|
| Abstract:
Through a collaborative process, a funder, a grantee and a team of evaluators developed a set of targets for child progress in multiple domains of school readiness and parent engagement. These performance targets are part of an accountability framework between the funder (First 5 Los Angeles-F5LA) and grantee (Los Angeles Universal Preschool-LAUP). LAUP maintains a network of over 300 preschools serving more than 10,000 4-year-olds throughout Los Angeles County. In the years before targets were set, F5LA commissioned Mathematica Policy Research (Mathematica) to conduct the Universal Preschool Child Outcomes Study (UPCOS). The session papers will describe how data from UPCOS were used to identify metrics and establish targets, how outcomes were measured, and how LAUP performed in the first year. Papers also will describe how the results are being used for program improvement and provide reflections on the collaborative dynamic. Throughout the presentation lessons learned will be highlighted.
|
|
Elements of the Collaborative Dynamic
|
| Sharon Murphy, First 5 LA, smurphy@first5la.org
|
|
This paper will provide an overview of the collaborative process among the partners and identify those elements that proved essential for a successful collaboration. The grantee, funder, and evaluation team were tasked with establishing a set of targets for child progress and parent engagement. Given that mandate, the team worked together over a period of several months and used a data-driven decision-making process that took advantage of previous years' data. The thoughtful, efficient design, as well as the quality and characteristics of the partners, contributed to the success of the collaborative effort.
|
|
Using Evaluation to Inform the Process of Setting and Meeting Shared Goals
|
| Yange Xue, Mathematica Policy Research, yxue@mathematica-mpr.com
|
| Emily Moiduddin, Mathematica Policy Research, emoiduddin@mathematica-mpr.com
|
| Sally Atkins-Burnett, Mathematica Policy Research, satkins-burnett@mathematica-mpr.com
|
| Elisha Smith, Mathematica Policy Research, esmith@mathematica-mpr.com
|
| Cay Bradley, Mathematica Policy Research, cbradley@mathematica-mpr.com
|
| Ama Atiedu, Los Angeles Universal Preschool, aatiedu@laup.net
|
|
As part of the Universal Preschool Child Outcomes Study, Mathematica has been working with First 5 LA and Los Angeles Universal Preschool to conduct outcomes evaluations in the areas of child progress (since 2007) and family engagement (since 2009). Data from these studies are being used to set targets in the context of the performance-based contract between the two organizations. In this paper we describe the data that were collected (direct child assessments, self-administered questionnaires for parents and providers), report how findings from those studies were used to develop targets for child progress and engagement, and discuss how data are being used to determine whether the targets have been met (including key findings). This paper illustrates how data from descriptive evaluations can both inform program improvement efforts and enable a funder and a program to work together to determine whether shared goals are met.
|
|
Using Evaluation Findings and Lessons Learned for Program Improvement
|
| Kimberly Hall, Los Angeles Universal Preschool, kimberly.m.hall@gmail.com
|
| Schellee Rocher, Los Angeles Universal Preschool, srocher@laup.net
|
|
This presentation will describe how a collaborative process involving a funder, a grantee and a research entity led to policy and program changes in a network of over 350 pre-kindergarten programs in Los Angeles County. With representation from the three entities, the collaborative worked together to address study design issues, set performance targets, interpret results and identify programmatic implications. The purpose of this presentation is to highlight key programmatic changes that were made once targets were set and those that were initiated later based on the study findings. The presenter will also share lessons learned by all three entities throughout the collaborative process.
|
|
Session Title: Preparing for Funding Threats in Tough Economic Times: Early Childhood Initiative Perspectives From North Carolina and California
|
|
Multipaper Session 248 to be held in Ventura on Thursday, Nov 3, 8:00 AM to 9:30 AM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Michael Bates, Mosaic Network, mbates@mosaic-network.com
|
| Abstract:
In the world of multisite, non-profit initiatives, much attention has been devoted to strategic and evaluation planning to ensure that funders and other stakeholders understand the theoretical links between funding decisions and resulting initiative outcomes. In addition, advances in technology have greatly enhanced direct service providers' ability to collect, manage, store, and share data about program performance and outcomes. But has this abundance of data translated into quality information that helps inform decision making at the initiative level and align funding priorities? Especially amid a pervasive budget crisis and political scrutiny, it becomes even more critical to examine the degree to which the data we collect inform the tough fiscal choices funders face. In this presentation, we will explore these issues using two community early childhood initiatives as case studies, and discuss tools and strategies for engaging stakeholders in meaningful conversations about evaluation information.
|
|
Preparing for Funding Threats in Tough Economic Times: A Funders' Perspective From a Smart Start Initiative in North Carolina
|
| Linda Blanton, Partnership for Children of Cumberland County, lblanton@ccpfc.org
|
|
More than ever, organizations today are looking for practical ways to measure their impact. How can you use performance data to inform strategy? Communicate results to key stakeholders? Allocate resources more efficiently? Ensure staff accountability? Benchmark and continuously improve? Raise funds and justify your budget? In this case study of a local Partnership for Children in North Carolina, we will explore these questions, and discuss the issues, promises, and challenges associated with moving from traditional service silos to an integrated, unified collaborative system. Specifically, we will discuss how to direct and manage for impact, how to refine and focus the data capture process, and how to produce accurate and timely intelligence. We will also discuss strategies and challenges with utilizing this intelligence wisely to ensure we stay true to our mission when faced with drastic funding cuts.
|
|
Preparing for Funding Threats in Tough Economic Times: A Funders' Perspective From a First Five Initiative in California
|
| Pedro Paz, First 5 Santa Barbara, ppaz@co.santa-barbara.ca.us
|
|
The lingering budget crisis in the state of California has further increased the likelihood and severity of funding reductions across various state initiatives. This includes increased pressure to reduce spending on the state's early childhood initiatives, including First 5 of California. In this presentation we will offer a case study of how we are using our ten-year investment in evaluation and data in our funded initiatives to make decisions in the face of further cuts to already diminished funding. Specifically, we will present an overview of our evaluation and data collection efforts over the last 10 years. Then, we will show how we, as decision makers at the organizational funding level, are using these results to realign our funding priorities in anticipation of further cuts to our revenues.
|
|
Preparing for Funding Threats in Tough Economic Times: Examining the "Data" in Data-based Decision Making
|
| Prashant Rajvaidya, Mosaic Network, prash@mosaic-network.com
|
|
As implementers of large-scale data systems across a number of multisite initiatives, we have successfully met the challenge of collecting complex data across a variety of diverse programs and putting it into the hands of decision makers in real time. Increasingly, however, we face the problem of having too much data and needing to distill it into useful information, as illustrated in these two case studies. This presentation will focus on how we make both the data and the presentation of data more relevant to the kinds of decisions stakeholders need to make. Specifically, how do we design data collection components, integrate third-party data, and incorporate quantitative and qualitative information to maximize data utilization? We will address design principles and practices that engage stakeholders in conversations about evaluation data, and discuss our experiences with a variety of useful presentation tools: dashboards, balanced scorecards, reports, and presentations.
|