|
Session Title: Truth, Beauty, and Justice: A Conversation With Plenary Speakers
|
|
Panel Session 322 to be held in Lone Star A on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Presidential Strand
|
| Chair(s): |
| Valerie Caracelli, United States Government Accountability Office, caracelliv@gao.gov
|
| Abstract:
The morning plenary speakers addressed issues related to the conference themes of Truth, Beauty, and Justice. This session offers an opportunity to discuss the implications of their ideas for quality in evaluation theory and practice.
|
|
Truth, Beauty, and Justice: A Discussion of Plenary Themes
|
| Valerie Caracelli, United States Government Accountability Office, caracelliv@gao.gov
|
| Rodney Hopson, Duquesne University, hopson@duq.edu
|
|
This presentation will provide a brief reflection on the themes raised in the plenary presentation. The presenters will then facilitate and participate in a discussion between the audience and the morning's plenary speakers.
|
|
|
Session Title: Developmental Evaluation for Complex Systems: Quality as Speed and Adaptability
|
|
Expert Lecture Session 323 to be held in Lone Star B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Chair(s): |
| Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
|
| Presenter(s): |
| Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
|
| Abstract:
Traditional methodological criteria of quality such as validity, reliability, generalizability, and attribution specificity are often fixed and static. Speed, agility, responsiveness and adaptability can be perceived as threats to rigor when program evaluation is conducted within closed system assumptions, including being able to standardize interventions, predetermine outcome measures, and control uncertainty. Developmental evaluation, in contrast, supports rapid feedback, emergent designs, and adaptability in open and complex adaptive systems characterized by high degrees of uncertainty, nonlinearity, and turbulence. Speed, agility, and adaptability – often considered threats to rigor under traditional evaluation designs – become criteria of quality under conditions of complexity. This session will examine speed and adaptability as alternative criteria of quality. The session is based on the presenter’s new book entitled “Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use.”
|
|
Session Title: Exploring the Multiple Roles of Collaborative, Participatory, and Empowerment Evaluators
|
|
Think Tank Session 324 to be held in Lone Star C on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s):
|
| David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
|
| Discussant(s):
|
| Abraham Wandersman, University of South Carolina, wandersman@sc.edu
|
| Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
|
| Lyn Shulha, Queen's University at Kingston, lyn.shulha@queensu.ca
|
| Rita O'Sullivan, University of North Carolina at Chapel-Hill, ritao@email.unc.edu
|
| Abstract:
This will be a participant-focused, interactive session. Members of the audience will be asked to work together and list the multiple roles of collaborative, participatory, and empowerment evaluators on poster paper. They will also note strengths and weaknesses associated with these roles. Leaders in each group will be asked to report back a summary of their insights and reflections. A panel of experts in the field will respond to the lists and add their own reflections. Panel experts will include Collaborative Evaluators Rita O'Sullivan and Liliana Rodriguez-Campos; Participatory Evaluator Lyn Shulha; and Empowerment Evaluators David Fetterman and Abraham Wandersman.
|
|
Session Title: Evaluation and Nonprofits: Learning From Experience
|
|
Multipaper Session 325 to be held in Lone Star D on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Charles Gasper, Missouri Foundation for Health, cgasper@mffh.org
|
|
Outcome Measurement and Nonprofit Relational Work
|
| Presenter(s):
|
| Lehn Benjamin, George Mason University, lbenjami@gmu.edu
|
| Abstract:
Outcome measurement has been one of the most prevalent responses to accountability concerns in the nonprofit sector. Outcome measurement draws systematic attention to whether program participants are better off as a result of nonprofit work. The hope is that this attention to measurable outcomes will increase organizational responsiveness to those served, lead to greater transparency, and improve effectiveness in addressing social problems. Yet despite the stated goal of outcome measurement, scholarly discussions and empirical data paint a mixed picture of outcome measurement’s potential impact on the nonprofit-beneficiary relationship. To better understand how and in what ways outcome measurement may strengthen and/or weaken this relationship, this paper presents findings from a content analysis of the ten most commonly referenced outcome measurement training materials targeted to nonprofits, and then compares these findings to what we know about the work nonprofit staff do with those they serve.
|
|
Enriching the Quality of Consultation and Research With Small Community-based Nonprofit Organizations: Ten Insights and Understandings From the Work of a University-based Consulting Center
|
| Presenter(s):
|
| Leah Neubauer, DePaul University, lneubaue@depaul.edu
|
| Douglas Cellar, DePaul University, dcellar@depaul.edu
|
| Gary Harper, DePaul University,
|
| Abstract:
The Center for Community and Organization Development (CCOD) is a University-based Center that provides not-for-profit service organizations with an array of consultation and research services. The presenter will discuss ten insights and understandings from working with a diverse range of CBOs over the last decade: 1) Finding our Niche was Key; 2) The Effects of Passion are Paradoxical; 3) Preparation Builds the Foundation for Entering into a Partnership; 4) Nurturing a Partnership is a Balancing Act; 5) Funding Sources Influence the Consulting Relationship; 6) Boards of Directors are Omnipresent; 7) Volunteers are Sovereign; 8) Project Scope is Easy to Underestimate and it Expands; 9) A Comprehensive Framework Links Consulting, Advocacy, and Research; and 10) Reflection and Planning Streamline Future Work. Theory-driven, research-grounded, community-based, and multidisciplinary approaches to ensuring quality in consultation and research will be addressed. This presentation highlights the co-authors' recently published chapter in Consulting and Evaluation (Viola & McMahon 2009).
|
| |
|
Session Title: Bayesian Mixture Modeling Versus Traditional Meta-analysis: Examining the Treatment Advantage Research Using Three Meta-analytic Approaches.
|
|
Multipaper Session 326 to be held in Lone Star E on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Karen Larwin, University of Akron, Wayne, drklarwin@yahoo.com
|
|
Bayesian Mixture Modeling: Also Known as Bayesian Meta-analysis
|
| Presenter(s):
|
| James Michael Menke, University of Arizona, menke@email.arizona.edu
|
| Abstract:
Comparative effectiveness research (CER) addresses the decision of whether a treatment or intervention should be used by comparing its effectiveness and cost to some standard. CER is an initiative brought forth by the Obama administration as a way of better informing health choices and improving health care system efficiencies by increasing transparency. CER methods have been used for years in Australia and the UK. Their arrival in the US is timely with respect to the current convergence of health care, health insurance, and economic crises. Bayesian Mixture Modeling (BMM) offers a way to conduct CER. BMM is a method of data synthesis that allows studies from different specialties and programs to be compared as if they were all arms of a single study, with certain convenient advantages. There are four general steps in CER under Bayesian and decision analysis that will be presented and discussed: 1) convert effect sizes to probabilities, 2) simulate or model a direct comparison between at least two treatments to estimate relative effect sizes, 3) estimate the stability of findings via sensitivity analyses, and 4) interpret findings.
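As a rough illustration of the four steps described above, the following sketch (in Python, with invented effect sizes and standard errors; it is not the presenter's BMM implementation) converts standardized mean differences to probabilities of benefit, simulates a direct comparison between two treatments, and probes sensitivity by inflating the assumed uncertainty.

```python
# Illustrative sketch only: hypothetical effect sizes and standard errors.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Step 1: convert standardized mean differences (d) to probabilities of benefit
# using the common-language effect size, P(benefit) = Phi(d / sqrt(2)).
studies = {"treatment_A": (0.45, 0.10), "treatment_B": (0.30, 0.12)}  # (d, SE)
p_benefit = {k: norm.cdf(d / np.sqrt(2)) for k, (d, se) in studies.items()}

# Step 2: simulate a direct comparison as if both arms came from one study,
# propagating each synthesized effect's uncertainty.
draws = {k: rng.normal(d, se, size=50_000) for k, (d, se) in studies.items()}
p_A_better = np.mean(draws["treatment_A"] > draws["treatment_B"])

# Step 3: crude sensitivity analysis: inflate the standard errors and re-check.
draws_wide = {k: rng.normal(d, 1.5 * se, size=50_000) for k, (d, se) in studies.items()}
p_A_better_wide = np.mean(draws_wide["treatment_A"] > draws_wide["treatment_B"])

# Step 4: interpret the findings.
print(p_benefit, round(p_A_better, 3), round(p_A_better_wide, 3))
```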
|
|
|
Session Title: Evaluating a Multi-site Twenty First Century Learning Program: Strategies and Results
|
|
Panel Session 327 to be held in Lone Star F on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Mary Nistler, Learning Point Associates, mary.nistler@learningpt.org
|
| Abstract:
This two-part panel presentation summarizes two independent but related evaluation activities intended to bring coherence to a multi-site school improvement initiative implemented in 18 private schools in Hawaii, and lay the groundwork for future evaluation activities. The initiative, Schools of the Future, promotes the incorporation of 21st century learning skills and provides a high level of support to participating schools through online Communities of Practice (COPs). Two early activities are described in this panel. The first is framing the initiative and its diverse projects so that projects can be better understood and evaluated. The second activity addresses our development and use of a coding system and relational database to assess content and level of participation of the online COPs. This session will describe specific analytic strategies, principles guiding decisions, and their benefits and limitations.
|
|
Order From Chaos: Building a Framework for the Evaluation of a Multi-site School Reform Initiative
|
| Mary Nistler, Learning Point Associates, mary.nistler@learningpt.org
|
| Manolya Tanyu, Learning Point Associates, manolya.tanyu@learningpt.org
|
|
This presentation describes a step-by-step approach to give coherence to an initiative comprised of several diverse and loosely related projects. Schools of the Future is an initiative to promote 21st Century skills in Hawai'i private schools. The goals of the evaluation in the first year were to clarify the similarities and differences among the 18 project sites, and to understand the ways that project design and context influence project success. We specified a conceptual framework into which we categorized each of the project sites. Through a review of written plans, we created an initial set of project typologies and categorized the projects according to dimensions of level of intended change, implementation strategy, and intended use of technology. We will describe the initial problem we faced, data sources, analysis, reporting frameworks, and the benefits and limitations of the effort.
|
|
|
Strategies for Evaluating Online Communities of Practice
|
| Jonathan Margolin, Learning Point Associates, jonathan.margolin@learningpt.org
|
|
An evaluation of the use of a social network site is one component of a multi-site evaluation of the Schools of the Future initiative, which aims to promote 21st Century skills. The initiative uses the social networking site Ning to support online Communities of Practice (COPs). The evaluation provided quarterly feedback on the extent to which Ning enhances communication and collaboration across schools and supports implementation of the projects. The evaluation provided coherent feedback summarizing discussions among the 300+ participants over eight months. By creating a coding system and a relational database to catalog posts and threaded discussions, we were able to describe how the content and volume of Ning participation varied over time and across three different COP discussion groups. The presentation will provide practical tips for addressing the challenges of evaluating participation in, and the effectiveness of, online communities of practice hosted on social network sites.
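As one way to picture the kind of coding system and relational database described above, here is a minimal, hypothetical sketch; the table names, columns, and codes are illustrative, not the evaluators' actual schema.

```python
# Hypothetical relational catalog for coded discussion posts (illustrative only).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE posts (
    post_id   INTEGER PRIMARY KEY,
    cop_group TEXT,          -- which Community of Practice discussion group
    author    TEXT,
    posted_on DATE,
    thread_id INTEGER,
    body      TEXT
);
CREATE TABLE post_codes (    -- one row per content code applied to a post
    post_id INTEGER REFERENCES posts(post_id),
    code    TEXT             -- e.g. 'resource sharing', 'implementation question'
);
""")

-- the Python below queries volume of participation by group and month,
-- the kind of trend an evaluator might report quarterly.
""" if False else None

rows = con.execute("""
    SELECT cop_group, strftime('%Y-%m', posted_on) AS month, COUNT(*) AS n_posts
    FROM posts GROUP BY cop_group, month ORDER BY month
""").fetchall()
```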
| |
| Roundtable:
What Role Should the Evaluator Take When a Project Is Off-Track? |
|
Roundtable Presentation 328 to be held in MISSION A on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Evaluation Use TIG
|
| Presenter(s):
|
| Robert J Ruhf, Western Michigan University, robert.ruhf@wmich.edu
|
| Mary Anne Sydlik, Western Michigan University, mary.sydlik@wmich.edu
|
| Abstract:
Science and Mathematics Program Improvement (SAMPI), an evaluation center within the Mallinson Institute for Science Education at Western Michigan University, has seen its share of programs that have veered from their original stated goals. Examples that will be discussed by presenters include: project staff cannot agree on a direction and become contentious with each other; project staff begin with grand and noble intentions but become increasingly disorganized due to poor planning or inexperience. Additional examples that will be discussed by presenters include: project staff do not like what was written in an evaluation report and attempt to control what evaluators write in future evaluation reports; project staff misinterpret and misuse small parts of an evaluation report to make the project look more successful than it really was. Finally, presenters will engage round table participants in discussions about ways that they and SAMPI have addressed these and similar issues.
|
|
Session Title: Quality Evaluation Includes Everyone! Using Universal Design to Make Your Evaluation More Accessible
|
|
Skill-Building Workshop 329 to be held in MISSION B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Special Needs Populations TIG
and the Graduate Student and New Evaluator TIG
|
| Presenter(s):
|
| Jennifer Sulewski, University of Massachusetts, Boston, jennifer.sulewski@umb.edu
|
| Richard Petty, Independent Living Research Utilization, richard.petty@bcm.edu
|
| June Gothberg, Western Michigan University, june.gothberg@wmich.edu
|
| Abstract:
Universal design refers to designing products or programs so that they are accessible to everyone. Universal design principles can be applied to evaluation to ensure that all relevant populations are included at every stage of the work, from project design to sharing of findings. Session attendees will learn how to apply the concepts of universal design to evaluation, as well as practical techniques for making evaluations inclusive. Specific topics covered will include stakeholder input and involvement, recruitment of difficult-to-reach populations, consent processes, data collection, and dissemination of findings. Participants are invited to bring materials from an evaluation project (such as an evaluation plan, a data collection instrument, or a report); the session will include a hands-on portion working on how to incorporate universal design to improve these materials. Materials and resources for further information will also be provided.
|
|
Session Title: Marketing the Online You: Developing and Maintaining a Professional Online Presence
|
|
Demonstration Session 330 to be held in BOWIE A on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Independent Consulting TIG
|
| Presenter(s): |
| Shelly Mahon, University of Wisconsin, Madison, mdmahon@wisc.edu
|
| Abstract:
The purpose of this demonstration is to illustrate the basic steps in creating an online presence that extends your professional reach to a much broader audience. Developing your online identity can help you articulate and leverage your value across a variety of different media. This can enhance your recognition as an expert in the field, building credibility that advances your career. Following a brief discussion of what an online presence is and why it is important, the presenter will demonstrate the process of creating an online presence from beginning to end. Constructing websites, building online narratives, and integrating and interlinking platforms like Twitter, Facebook, and Google Reader are part of this process. Maintaining online etiquette, identifying new audiences, and determining how often you post are also important. Finally, you will have opportunities to personalize this process as you consider how to craft the online YOU that benefits your personal and professional goals.
|
|
Session Title: Overcoming Specific Challenges to Service Delivery Within Hispanic-American Communities
|
|
Multipaper Session 331 to be held in BOWIE B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Chair(s): |
| Leona Johnson, Hampton University, leona.johnson@hamptonu.edu
|
|
Different Strokes for Different Folks: Designing a Culturally Grounded Implementation Evaluation of Grantees Serving a Diverse Minority
|
| Presenter(s):
|
| Luis R Torres, University of Houston, lrtorres@uh.edu
|
| Karen Gardiner, The Lewin Group, karen.gardiner@lewin.com
|
| Luis H Zayas, Washington University in St Louis, lzayas@wustl.edu
|
| Allison Hyra, The Lewin Group, allison.hyra@lewin.com
|
| Cara Kundrat, The Lewin Group, cara.kundrat@lewin.com
|
| Whitney Engstrom, The Lewin Group, whitney.engstrom@lewin.com
|
| Abstract:
As the largest minority and fastest-growing group in the U.S., Hispanics are an important group for service delivery. Their great diversity is an important consideration when designing plans to evaluate those services in a culturally grounded manner. The federally funded Hispanic Healthy Marriage Initiative Grantee Implementation Evaluation seeks to identify the unique cultural, linguistic, demographic, and other factors that need to be considered in designing and successfully delivering family strengthening services to Hispanic families. This paper outlines the process undertaken by a diverse group of evaluators to develop a culturally grounded evaluation aimed at documenting various programmatic approaches to improve Hispanic family well-being through healthy marriage and relationship education programs. Program evaluations that are responsive to contextual and cultural specificity are still fairly novel and emergent (Chouinard & Cousins, 2009). Our paper makes an important contribution to the emerging field of culturally grounded evaluation.
|
|
Issues in Evaluating Service Delivery to Migrant Farm Workers With Disabilities
|
| Presenter(s):
|
| Karen Cinnamond, Human Development Institute, kecinn2@email.uky.edu
|
| Joanne Farley, Human Development Institute, joanne.farley@uky.edu
|
| Abstract:
As AEA and its members increasingly recognize, “quality” in evaluation practice requires, among other things, that practice be culturally competent. While the organization is still working toward a cultural competence statement, evaluators continue to grapple with issues of cultural competence as they evaluate programs involving cultural diversity. This paper focuses on what it takes for evaluators to be culturally competent in recognizing the need for, and developing, an in-depth understanding of how Hispanic migrant workers and their families interpret what it means to have disabilities and how disabilities figure in their self-, familial, and social understandings of identity. It will also discuss how needs related to addressing disabilities are identified within the landscape of this group’s life experiences as they attempt to maintain a stable and acceptable quality of life. The focal point for discussing these issues is the evaluation of a program serving primarily Hispanic migrant farmworkers with disabilities. The services provided by this program are intended to help migrant workers with disabilities find stable employment that allows for economic self-sufficiency. While the evaluators embarking on this evaluation were aware that cultural diversity would require new understandings and evaluation methods tailored to the cultural context of the evaluation, they were unprepared for just how dramatic some of the adjustments they needed to make would be. The lessons they have learned in negotiating the communicative and cultural meanings and experiences of this population have been profound and have significance beyond the evaluation of service delivery to Hispanic migrant farmworkers. This paper will provide an in-depth discussion of both the issues and the lessons related to cultural competency that emerged during the evaluation of this program.
|
| |
|
Session Title: Advocacy Capacity Nuance: Helpful, or Too Much? Considering the Cases of Assessing Coalitions Versus Networks and Grassroots Mobilization Versus Community Organizing
|
|
Panel Session 332 to be held in BOWIE C on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Astrid Hendricks, California Endowment, ahendricks@calendow.org
|
| Discussant(s):
|
| Astrid Hendricks, California Endowment, ahendricks@calendow.org
|
| Abstract:
As advocacy evaluation has advanced over the last few years, important nuances have evolved relevant to evaluating different types of advocacy organizations and the work they do. At what point do the nuances contribute, and at what point do they lose their value to either the evaluator or the practitioner? This session will explore new work in assessing advocacy capacity, using a comparative lens to unpack and discuss the relevance of nuance associated with two important advocacy capacity elements: coalitions vs. networks and advocacy vs. community organizing. Jared Raynor of the TCC Group and Sue Hoechstetter of Alliance for Justice will discuss relevant new resources available to the field on these topics, including evaluating coalitions and frameworks for evaluating community organizing. Participants will be invited to comment and ask questions related to the presentations and their own experience dealing with these nuances.
|
|
You Say Network, I Say Coalition: What is the Difference and Why Should Evaluators Care?
|
| Jared Raynor, TCC Group, jraynor@tccgrp.com
|
|
Jared Raynor of TCC Group has played a leading role in articulating and evaluating advocacy capacity. Building on previous work, he has recently focused on coalitions and variant forms of cooperative arrangements. This presentation responds to evaluators' need for approaches to assessing the capacity of networks and coalitions that do advocacy work, presenting observations, findings, and examples of how networks and coalitions overlap and differ in their assessment needs. His analysis is grounded in both academic review and practical experience, and a recent publication captures critical aspects of evaluating coalitions and coalition work. The presentation will also explore the practicalities of nuancing and segmenting collaborative forms and engage the difficult question of when the practical value of doing so dissolves.
|
|
|
You Say Power, I Say Influence: How Helpful Is It to Link or Separate Community Organizing and Advocacy Evaluation Approaches?
|
| Susan Hoechstetter, Alliance for Justice, sue@afj.org
|
|
Sue Hoechstetter, an early leader in developing approaches and tools for advocacy evaluation and capacity assessment, as well as for community organizing evaluation and capacity assessment, continues to explore these areas. In addition to developing and providing ongoing updates to the web site Resources for Evaluating Community Organizing, Sue has recently developed a working draft of a community organizing capacity assessment tool. Her work has uncovered varying perceptions of the overlaps and differences between advocacy and community organizing work, and the impact of those perceptions. She will present a comparison and analysis of advocacy and community organizing capacity assessment, including her new community organizing capacity assessment framework, and encourage participants to question and comment on all of the material.
| |
| Roundtable:
Enhancing Evaluation Quality in a Developing Context: A South African Case Study |
|
Roundtable Presentation 333 to be held in GOLIAD on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Social Work TIG
|
| Presenter(s):
|
| Willem Roestenburg, University of Johannesburg, wimr@uj.ac.za
|
| Emmerentie Oliphant, Afri.Yze Consult (Pty) Ltd, et_ol@hotmail.com
|
| Abstract:
Evaluation practice is in its infancy in South African social work service delivery. Evaluation requirements in research briefs from this context remain weak in detail and lack clear specification. The authors conducted a research project intended as the first step in the evaluation of a governmental program for youth in conflict with the law. The brief was to assess changes in youth crime profiles over two time periods and the extent to which a range of diversion programs were effective in changing youth perceptions about crime and behavioral intentions toward a crime-free life. The project presented a number of contextual, financial, and logistical challenges to evaluation quality. This paper demonstrates how the authors overcame these challenges through rigorous conceptualization, formulation, and structural and methodological strategies, conducting the project in such a way that higher levels of beauty, truth, and justice were achieved than originally required.
|
| Roundtable:
Being on Target: Evaluation Techniques for Disaster Relief Agencies |
|
Roundtable Presentation 334 to be held in SAN JACINTO on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Disaster and Emergency Management Evaluation TIG
|
| Presenter(s):
|
| LaJuana Hector, Texas A&M University, hector@tarleton.edu
|
| Abstract:
In the aftermath of the Hurricane Katrina recovery efforts, non-governmental organizations, for-profit businesses, and churches throughout the northern Texas region provided financial assistance, food, furniture, clothing, and intangible services to disaster victims. Not all agencies, however, were able to demonstrate their outcome success rates. Drawing on empirical research, an eclectic evaluation model is presented as a tool for disaster relief agencies to use throughout a program's life cycle.
|
|
Session Title: Using Business Frameworks to Evaluate Program Impact on Student Learning
|
|
Panel Session 335 to be held in TRAVIS A on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Norena Norton Badway, San Francisco State University, nbadway@sfsu.edu
|
| Abstract:
Because federal and state grants increasingly require evidence that student learning outcomes have improved, evaluators are sometimes challenged to match data collection and analysis methods with grantor requirements. In the case of National Science Foundation Advanced Technological Education grants, student populations are too small and targeted to allow for randomized trials. In such cases, evaluators rely on ex post facto research, and often need to create baseline data after the intervention has been implemented. In this study, researchers used Kirkpatrick’s model of measuring faculty satisfaction with professional development, as well as faculty learning, changes in instructional behavior, and changes in students’ learning. This session will present the attributes as well as the constraints of applying the Kirkpatrick model to evaluating the value of ATE-funded professional development.
|
|
Using Business Frameworks to Evaluate Program Impact on Student Learning
|
| Norena Norton Badway, San Francisco State University, nbadway@sfsu.edu
|
|
Because federal and state grants increasingly require evidence that student learning outcomes have improved, evaluators are sometimes challenged to match data collection and analysis methods with grantor requirements. In the case of National Science Foundation Advanced Technological Education grants, student populations are too small and targeted to allow for randomized trials. In such cases, evaluators rely on ex post facto research, and often need to create baseline data after the intervention has been implemented. In this study, researchers used Kirkpatrick’s model of measuring faculty satisfaction with professional development, as well as faculty learning, changes in instructional behavior, and changes in students’ learning. This session will present the attributes as well as the constraints of applying the Kirkpatrick model to evaluating the value of ATE-funded professional development.
|
|
|
Challenges in Data Collection Using Kirkpatrick's Model
|
| Rachel Rich, University of the Pacific, rachel.l.rich@gmail.com
|
|
Rachel Rich is a doctoral candidate at the University of the Pacific, specializing in measuring the impact of distance education as a form of professional development for faculty. She has worked with Norena Norton Badway, Ph.D., in designing and conducting this analysis using a business/corporate model for measuring the success of training. Dr. Badway holds NSF grants to research the ATE program and conducts small and large ATE evaluations.
| |
|
Session Title: From Game Shows, to Universities, to Cooperative Extension: The Benefits, Challenges, and Logistics of Using an Audience Response System ("Clickers") as an Evaluation Tool in Extension Programs
|
|
Demonstration Session 336 to be held in TRAVIS B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Integrating Technology Into Evaluation TIG
|
| Presenter(s): |
| Michelle Currie, Cornell University, mc825@cornell.edu
|
| Abstract:
This presentation explores the challenges, successes, and general considerations of using an Audience Response System (ARS), or “clickers,” as an evaluation tool, as highlighted by three Cornell University Cooperative Extension - NYC (CUCE-NYC) programs. The impact the “clickers” had on planning, implementation, data management, reporting, and use of results will be discussed through the experiences of three programs: the College Achievement through Urban Science Exploration Project (CAUSE), a year-long program for low-income, minority teens; the Lessons In a Box nutrition and health curriculum, as used in one-shot sessions at back-to-work vendors; and the Urban Horticulture and Ecology Training Program (UHETP), a professional development workshop series with indoor and outdoor field experiences. Logistical items, including the various receivers, response devices, software, and reporting options, will also be covered.
|
|
Session Title: Canadian Evaluation Society's Professional Designation Program - How Our Members Are Applying to be Designated as a Credentialed Evaluator
|
|
Demonstration Session 337 to be held in TRAVIS C on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Presenter(s): |
| Keiko Kuji-Shikatani, Ontario Ministry of Education, keiko.kuji-shikatani@ontario.ca
|
| Christine Frank, Christine Frank & Associates, christine.frank@sympatico.ca
|
| Martha McGuire, Cathexis Consulting, martha@cathexisconsulting.ca
|
| Discussant(s): |
| Jean A King, University of Minnesota, kingx004@umn.edu
|
| Abstract:
The Canadian Evaluation Society launched the Credentialed Evaluator (CE) professional designation program in May 2010 at the CES Annual Conference in Victoria. The Professional Designation Program (PDP) was developed to define, recognize, and promote the practice of ethical, high quality, and competent evaluation in Canada. The designation means that the holder has provided evidence of the education and experience required to be a competent evaluator. The designation is a service provided by CES to its members, who may elect to become credentialed on a voluntary basis. It recognizes those with the education and experience to provide evaluation services and, through its maintenance and renewal requirements, promotes continuous learning within our evaluation community. In the demonstration, you will visit the web-based application site for the CES Credentialed Evaluator (CE) designation, which is also used by the Credentialing Board to review submitted applications and by CEs to record their designation maintenance requirements.
|
|
Session Title: Environmental Scans Using State Legislative Databases for Health Policy Research
|
|
Demonstration Session 338 to be held in TRAVIS D on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s): |
| Sasigant O'Neil, Mathematica Policy Research, so'neil@mathematica-mpr.com
|
| Mindy Lipson, Mathematica Policy Research, mlipson@mathematica-mpr.com
|
| Abstract:
Results from an environmental scan can be used for multiple purposes in the field of health policy, including policy development, improvement, and assessment. State legislative websites include databases that provide a rich source of publicly available information for an environmental scan. These databases contain enacted and unsuccessful bills that can be used to track and compare policies across states. Each state’s database has unique features, so a systematic searching process is needed to ensure successful searches. Through our work to identify public health laws and regulations, we have become familiar with the types of information these databases contain and have developed best practices for searching them. Participants will leave the session with knowledge of state legislative database structures and the information contained in them, strategies for inputting key words for the searches, a summary sheet of each state legislative database’s unique features, and tools for cataloging policies to facilitate analysis.
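A simple, hypothetical record structure of the kind that could support such cataloging is sketched below; the field names and the sample entry are invented for illustration and are not the presenters' actual tool.

```python
# Hypothetical catalog entry for bills found in state legislative database searches.
from dataclasses import dataclass, field

@dataclass
class PolicyRecord:
    state: str
    bill_number: str
    year: int
    status: str                              # e.g. "enacted", "failed", "pending"
    topic: str                               # e.g. "tobacco control"
    search_terms: list = field(default_factory=list)
    summary: str = ""

# An illustrative, made-up entry; a flat catalog like this can be filtered by
# state, status, or topic to compare policies across states and over time.
catalog = [
    PolicyRecord("TX", "HB 123", 2009, "enacted", "tobacco control",
                 ["smoke-free", "workplace"], "Statewide smoke-free workplace bill."),
]
```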
|
|
Session Title: Including Sexual Orientation Information in an Evaluation: When, Why, and How?
|
|
Multipaper Session 339 to be held in INDEPENDENCE on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Lesbian, Gay, Bisexual, Transgender Issues TIG
|
| Chair(s): |
| Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
|
|
Gay and Lesbian Visitors in Museum Evaluation
|
| Presenter(s):
|
| Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
|
| Abstract:
Gays and lesbians visit museums somewhat more frequently than heterosexuals, yet are less likely to be members or donors. These differences are observed even when demographic characteristics other than sexual orientation are similar. In a recent evaluation of an exhibit, a museum agreed to the collection of information about whether the visitor identified as gay or lesbian, and whether he or she was in a relationship with a partner. This paper describes what was learned from adding this additional demographic component.
|
|
|
Session Title: Beyond Accountability in Educational Learning and Capacity Building
|
|
Multipaper Session 341 to be held in PRESIDIO B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| Imelda Castañeda-Emenaker, University of Cincinnati, castania@ucmail.uc.edu
|
| Discussant(s): |
| Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
|
|
The Challenge of Secondary School Reform: When Evaluation Improves Practice
|
| Presenter(s):
|
| Cathleen Armstead, University of Miami, carmstead@miami.edu
|
| Ann G Bessell, University of Miami, agbessell@miami.edu
|
| Sabrina Sembiante, University of Miami, s.sembiante@umiami.edu
|
| Abstract:
Evaluation of secondary school reform is critical within the context of strict accountability for schools and dwindling resources. Evaluation often becomes part of the process of improving schools, and evaluators face the challenges of building capacity and changing minds with school personnel who have competing agendas and multiple demands. This session discusses how evaluation can change minds utilizing Howard Gardner’s model of the seven levers: reason, research, resonance, re-description, resources and rewards, real world events, and resistance. Our experience with providing a logic model of school reform, followed by a presentation of school data that resonated with school personnel, was central to the evaluation's success. We learned to address resources and rewards, and real world events, to enhance the utilization of our findings. This session, based on evaluation data coupled with interviews of school personnel, describes multiple pathways by which an evaluation can change minds and improve practice.
|
|
Evaluation for Learning and Accountability?
|
| Presenter(s):
|
| Katherine Ryan, University of Illinois at Urbana-Champaign, k-ryan6@illinois.edu
|
| Tysza Gandha, University of Illinois at Urbana-Champaign, tgandha2@illinois.edu
|
| Abstract:
Historically, evaluation for learning (formative) and evaluation for accountability (summative) were conceptualized as functional alternatives or a dichotomy (e.g., Scriven, 1991). In contemporary times of externally driven accountability, evaluators who are committed to supporting organizational learning and evaluation capacity building are increasingly being called to "engage in dialogue with those representing demands for accountability" (Dahler-Larsen, 2009). Building on previous work on internal and external evaluation (Dahler-Larsen, 2009; Nevo, 2006), we propose an evaluation approach (in the education domain) that addresses both audit and learning purposes to present one such ‘dialogue.’ We discuss the implications of engaging evaluation for learning and evaluation for accountability for evaluation quality, and critically consider whether this is a potentially productive response to accountability pressures, so that evaluation can continue to serve multiple purposes in a democratic society.
|
| |
|
Session Title: Science, Technology, Engineering, and Mathematics (STEM) Recruitment Programs: Evaluation of Effectiveness
|
|
Multipaper Session 342 to be held in PRESIDIO C on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the College Access Programs TIG
|
| Chair(s): |
| Kurt Burkum, ACT, kurt.burkum@act.org
|
|
Measurable Indicators of Effectiveness for National Science Foundation (NSF) Advanced Technological Education (ATE) Centers/ Projects
|
| Presenter(s):
|
| Kelly Robertson, Western Michigan University, kelly.robertson@wmich.edu
|
| Stephen Magura, Western Michigan University, stephen.magura@wmich.edu
|
| Abstract:
The presentation will describe the common metrics and methodologies developed by The Evaluation Center to measure the effectiveness of the U.S. National Science Foundation’s (NSF) Advanced Technological Education (ATE) center and project activities. The central goals of NSF ATE are to produce more science and engineering technicians to meet workforce demands and to improve the technical skills and general educational preparation of these technicians and their educators. Prior to this study, there were no generally accepted means of measuring the effectiveness of ATE activities; thus, the results of this study should allow NSF to better understand variations in the success of its ATE grantees and to apply an objective effectiveness measurement strategy to ATE and similar programs in the future. Further, this study may serve as a model for other organizations that manage diverse portfolios of projects, yet have no way of comparing results across activities.
|
|
Beyond the Program: Evaluating the Institutional Impact of Initiatives to Increase the Participation of Underrepresented Minority College Students in Biomedical and Behavioral Science
|
| Presenter(s):
|
| Jack Mills, Independent Consultant, jackmillsphd@aol.com
|
| Maria Elena Zavala, California State University, Northridge, mariaelena.zavala@csun.edu
|
| Abstract:
This paper presents an evaluation of efforts at a major urban university, California State University, Northridge (CSUN), to prepare an ethnically diverse student body for advanced degrees and careers in science. CSUN has long received National Institutes of Health funding to provide mentoring, research experience, academic support, and professional development to cohorts of underrepresented minority students majoring in the biomedical and behavioral sciences. These programs are increasingly being called on to demonstrate their impact on students across the institution. The evaluation used multiple methods drawing on Bandura’s theory of self-efficacy to examine a broad sample of science students’ academic and career plans. The theory posits that academic and career intentions result from the interaction of behavioral, internal personal, and environmental factors. Thus, the evaluation examined students’ hands-on research experience, support such as financial assistance and social support, and their relationship to scientific self-efficacy and expected career outcomes.
|
| |
| Roundtable:
Methodological Issues in Evaluating Potentially Transformative Research |
|
Roundtable Presentation 343 to be held in BONHAM A on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Presenter(s):
|
| Mary Beth Hughes, Science and Technology Policy Institute, mhughes@ida.org
|
| Bhavya Lal, Science and Technology Policy Institute, blal@ida.org
|
| Asha Balakrishnan, Science and Technology Policy Institute, abalakri@ida.org
|
| Abstract:
In recent years, there has been an explosion of programs aimed at supporting potentially transformative research (also referred to as innovative, pioneering, frontier, or high-risk, high-reward research) in the Federal research and development (R&D) portfolio. Evaluators have struggled to develop a methodology to appropriately assess the impact of these programs given their non-traditional nature. In this roundtable, we present several ongoing and recently completed studies, both formative and summative, that utilize different approaches to evaluating programs that aim to support potentially transformative research. While the evaluations themselves utilized a multi-method approach, each highlights a single method, and this roundtable will be used to discuss both the strengths and limitations of those approaches. The programs examined herein are the NIH Director’s Pioneer Award, the NIH’s New Innovator Award, and the NSF’s Emerging Frontiers in Research and Innovation.
|
|
Session Title: Evaluation: Understanding the Context and Managing Tensions
|
|
Multipaper Session 344 to be held in BONHAM B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| David Campbell, University of California, Davis, dave.c.campbell@ucdavis.edu
|
|
In Praise of Going Local: The Promise of Workarounds as Evaluative Indicators
|
| Presenter(s):
|
| David Campbell, University of California, Davis, dave.c.campbell@ucdavis.edu
|
| Abstract:
“Going local” is typically a term of derision in evaluation circles, but privileging the local perspective may actually improve evaluative practice. I describe an evaluative approach that takes the perspective of local grantees as its starting point, rather than the perspective of funders interested in whether their resources are being used wisely. The approach uses workarounds—occasions where grantees deliberately evade funder compliance demands that are deemed contradictory to the goals sought—as key evaluative indicators. Current accountability regimes—and the modes of evaluation that support these regimes—drive conversation about workarounds underground. The prevailing attitude of locals is that to reveal creative workarounds is to invite unwanted scrutiny or reprisal from grant giving agencies who interpret funding requirements literally. A more enlightened approach would treat workarounds as indicators of system flaws, using them to provoke negotiations that improve both the substance of policy and the processes of grant making.
|
|
Strengths and Limitations of Nonprofit Evaluations in Times of Organizational Change: The Case of a Volunteer Service and a Disability Advocacy Organization Evaluating Their Programs Concurrently With the Revision of Their Strategic Plan
|
| Presenter(s):
|
| Michele Tarsilla, Western Michigan University, michele.tarsilla@wmich.edu
|
| Abstract:
Nonprofits often conduct evaluations of their programs in response to specific funders’ requests rather than based on concrete strategic needs. As a result, nonprofits often lose the independence of their evaluation function and become less accountable to the populations they are expected to serve. As imperfect as it is, such a scenario is not immutable. External evaluators may help build evaluation capacity and promote the diffusion of an evaluative culture within nonprofits. More interestingly, through the adoption of a participatory approach, external evaluators can enhance nonprofits' self-reflection and facilitate their use of evaluation findings for strategic planning purposes. This paper will (i) illustrate the case of two nonprofits in Michigan which, as a result of an external evaluation, regained ownership of their evaluation process and managed to successfully revise their strategic plan; and (ii) discuss the feasibility and reproducibility of such a process in other nonprofits.
|
| |
|
Session Title: Concordances: Development and Appropriate Uses
|
|
Multipaper Session 345 to be held in BONHAM C on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Rochelle Michel, Educational Testing Service, rmichel@ets.org
|
|
The Concordance Development Process
|
| Presenter(s):
|
| Rochelle Michel, Educational Testing Service, rmichel@ets.org
|
| Abstract:
Concordance refers to the case in which equating methods are used to link scores on tests that are built to different specifications. This paper presentation uses the example of an information literacy assessment. Two earlier versions of the assessment were replaced with a single revised version; institutions need to know how scores on the earlier and revised exams compare. A concordance study was conducted showing the correspondence between the scores on the two previous versions and the revised version. This paper presentation will 1) provide a general review of procedures used to link scores on assessments, with an emphasis on concordances; 2) provide a step-by-step process for developing concordances; 3) describe how the process was applied to the development of the concordance between the previous and revised versions of the information literacy assessment; and 4) highlight key considerations that need to be made when developing concordances.
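For readers unfamiliar with score linking, the following is a minimal sketch of an unsmoothed equipercentile-style concordance using simulated scores; operational concordance studies involve larger samples, smoothing, and additional checks, and this is not the procedure used in the study itself.

```python
# Illustrative equipercentile-style concordance sketch with made-up score data.
import numpy as np

def concordance_table(old_scores, new_scores, score_points):
    """For each score on the old form, return the new-form score at the same
    percentile rank (unsmoothed equipercentile linking)."""
    old = np.sort(np.asarray(old_scores))
    new = np.sort(np.asarray(new_scores))
    table = {}
    for x in score_points:
        pr = np.mean(old <= x)                  # percentile rank on the old form
        table[x] = float(np.quantile(new, pr))  # new-form score at that rank
    return table

# Hypothetical samples of scores from the earlier and the revised assessment.
rng = np.random.default_rng(1)
old_form = rng.normal(500, 90, 2000).round()
new_form = rng.normal(155, 25, 2000).round()
print(concordance_table(old_form, new_form, score_points=[400, 500, 600]))
```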
|
|
|
Session Title: Building Evaluation Capacity Among School Leaders
|
|
Multipaper Session 346 to be held in BONHAM D on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Phyllis Clay, Albuquerque Public Schools, phyllis.clay@aps.edu
|
| Discussant(s): |
| Sheryl Gowen, Georgia State University, sgowen@gsu.edu
|
|
The Role of Evaluation in Educational Reform: What Evaluation Skills Should Be Required of School Leaders?
|
| Presenter(s):
|
| Tara Shepperson, Eastern Kentucky University, tara.shepperson@eku.edu
|
| Abstract:
PreK-12 educational administration is a discipline in flux. Long criticized as partially responsible for the failure of American public education, doctoral educational administration training is undergoing national revision. Initiatives including the Carnegie Project on the Education Doctorate (CPED) and other reforms seek to dramatically revise training programs for administration practitioners, away from the traditional academic structure and towards a system that better prepares educational leaders to meet the needs of modern schools. Part of this transformation includes moving away from academic research toward inquiry into solutions for real problems facing schools. What is less clear is the role evaluation will have in these training programs for school administrators. This presentation looks at evidence from existing documents, literature, and interviews about the perceived value and role of evaluation in administrator preparation and American schools.
|
|
Evaluation Quality and Program Evaluation Skills for School Leaders: An Analysis of School Improvement Plans
|
| Presenter(s):
|
| Tamara M Walser, University of North Carolina at Wilmington, walsert@uncw.edu
|
| Abstract:
Given increasing accountability requirements, school improvement planning has become common and mandated in schools across the United States. Although State Departments of Education often provide guidelines and templates for developing School Improvement Plans, program evaluation skills are a necessary part of school improvement planning and implementation. These skills include needs assessment and context evaluation, goal development, literature review and evaluation of research, implementation monitoring, formative evaluation, and outcome evaluation. Additionally, educators must understand and apply standards of quality evaluation. The purpose of this presentation is to present the results of a content analysis of a sample of 50 School Improvement Plans, focusing on the quality of evaluation presented in the plans, and related implications for training school leaders in program evaluation to build evaluation capacity for school improvement.
|
| |
|
Session Title: Toolkit for Evaluating Impacts of Public Participation in Scientific Research
|
|
Demonstration Session 347 to be held in BONHAM E on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Environmental Program Evaluation TIG
|
| Presenter(s): |
| Tina Phillips, Cornell University, cbp6@cornell.edu
|
| Richard Bonney, Cornell University, reb5@cornell.edu
|
| Abstract:
The Toolkit for Evaluating Impacts of Public Participation in Scientific Research has been developed to meet a major need in the field of informal science education: To provide project developers and other professionals, especially those with limited knowledge or understanding of evaluation techniques, with a systematic method for assessing project impact that facilitates longitudinal and cross-project comparisons. This demonstration provides an overview of how to use the toolkit to evaluate the educational impacts of public participation in scientific research. The online tool provides customizable instruments and techniques for qualitative and quantitative research focused on understanding educational impacts within the NSF evaluation framework categories of knowledge, engagement, skill, attitudes, and behavior (Friedman, 2008).
This demonstration will be designed to introduce the toolkit, show how it is intended to be used, and encourage participants to provide feedback to improve the toolkit before it is widely released.
|
|
Session Title: Strategies for Evaluation Quality in Government
|
|
Multipaper Session 348 to be held in Texas A on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Government Evaluation TIG
and the Crime and Justice TIG
|
| Chair(s): |
| Margaret Braun, Oregon Department of Corrections, margaret.j.braun@doc.state.or.us
|
|
Approaching Multi-site Evaluations in Government: A Process Evaluation of Drug Court and Treatment as Usual
|
| Presenter(s):
|
| Shannon Myrick, Oregon Youth Authority, shannon.myrick@oya.state.or.us
|
| Margaret Braun, Oregon Department of Corrections, margaret.j.braun@doc.state.or.us
|
| Abstract:
Over the last three decades, drug courts have emerged as an alternative method to reduce substance abuse among drug-addicted offenders through a combination of intensive supervision and treatment. Preliminary findings regarding the impact of drug courts are positive; however, the NIJ (2006) recently called upon evaluators to address which specific processes of the drug court model are explicitly related to offender outcomes. In this presentation we describe our efforts to answer this call in an evaluation of the comparative processes of drug courts and treatment as usual for offenders in four Oregon counties. We illustrate our evaluation approach and the strategies used to establish consistency and measure program theory across evaluation sites. We also describe challenges we encountered designing, implementing, and managing a multi-site, multi-condition evaluation within the local criminal justice system. Preliminary findings and the importance of conducting process evaluations to improve government evaluation quality will be discussed.
|
|
|
Session Title: Techniques and Tools for Reporting and Communicating Evaluation Findings Using NVivo
|
|
Demonstration Session 349 to be held in Texas B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Qualitative Methods TIG
|
| Presenter(s): |
| Kari Greene, Oregon Public Health Division, kari.greene@state.or.us
|
| Abstract:
This session will offer useful information for maximizing the utility of NVivo when reporting evaluation findings. After reviewing the fundamentals of NVivo, the presenter will explore various techniques and tools for communicating and reporting to stakeholders, particularly community partners and non-researchers. The session will demonstrate how to use queries and visualizations of qualitative data sources to find “just the right quote” for a report, to build charts and visualizations that are accessible and understandable to community members, and to create data-driven models for program planning and improvement. Examples used throughout the demonstration come from interviews and focus group data on tobacco control programs with Alaska Native communities. The session is intended to build the working knowledge of participants to visualize and share data with multiple stakeholders and to demonstrate how software can improve the quality of reporting findings from qualitative data.
|
|
Session Title: New Cost Tools for The Evaluator’s Toolkit
|
|
Skill-Building Workshop 350 to be held in Texas C on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
|
| Presenter(s):
|
| Nadini Persaud, University of the West Indies, npersaud07@yahoo.com
|
| Abstract:
Cost analysis is a critical component of professional evaluations. Knowing that a program has helped participants is insufficient; evaluators must also demonstrate that supposedly successful programs are cost-effective relative to similar programs. According to the evaluation literature, however, many evaluators do not have the requisite training to conduct rigorous cost analysis. This inadequacy can have serious repercussions, for example, the termination of beneficial programs, or the implementation of new programs or continuation of existing programs that are not cost-effective. This workshop will introduce evaluators to three cost analysis tools that will help them conduct rigorous cost studies. After the facilitator provides a brief overview of the tools, attendees will break into small groups and work on a practical exercise with the facilitator’s guidance. Tools will be disseminated free to participants, and participants will be encouraged to use the tools and send questions and queries to the facilitator.
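As a toy illustration of why effectiveness alone is insufficient, the short sketch below compares cost per successful outcome for two hypothetical programs; the figures are invented and the workshop's actual tools are not reproduced here.

```python
# Illustrative cost-effectiveness comparison with invented figures.
programs = {
    "Program A": {"total_cost": 250_000, "participants_helped": 400},
    "Program B": {"total_cost": 180_000, "participants_helped": 250},
}

for name, p in programs.items():
    cer = p["total_cost"] / p["participants_helped"]  # cost per successful outcome
    print(f"{name}: ${cer:,.0f} per participant helped")

# A program can "work" yet still be the costlier way to achieve the same outcome,
# which is why effectiveness alone is an insufficient basis for judging it.
```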
|
|
Session Title: Pursuit of Knowledge Discovery Opportunities Using Existing and Novel Structured and Unstructured Biomedical Science Related Datasets Using Diverse Knowledge Management Tools
|
|
Demonstration Session 351 to be held in Texas D on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Presenter(s): |
| Timothy Hays, National Institutes of Health, thays@od.nih.gov
|
| Abstract:
The National Institutes of Health (NIH) Portfolio Analysis Group (PAG) provides exploratory analysis of biomedical data to identify research gaps and opportunities for research and to develop a more thorough understanding of existing NIH research activities. PAG also explores how this research evolves over time. To this end, PAG must investigate various processes, datasets, and tools to help explore research activities and outcomes as well as social networks. To do so, a variety of open-source, proprietary, and NIH-created tools and algorithms are being assessed and evaluated. This workshop will explore elements of the NIH process and demonstrate a select set of portfolio analysis processes in current use, as well as methods for assessing tools and algorithms for their usefulness.
|
|
Session Title: Evaluating Program Interventions in a Rapidly Changing World: Exploring the Potential of Mid-term Reviews, Real Time and Prospective Evaluation
|
|
Panel Session 352 to be held in Texas E on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Evaluation Use TIG
|
| Chair(s): |
| Marvin Taylor-Dormond, World Bank,
|
| Discussant(s):
|
| Patrick Grasso, World Bank, pgrasso45@comcast.net
|
| Abstract:
The global financial crisis, coupled with the earlier food and energy crises, threatened economic growth and living standards in developed and developing countries. Policy makers and development agencies responded with a range of direct spending, lending, and technical assistance to stabilize and stimulate economic growth and prevent negative welfare outcomes. In turn, evaluators are expected to provide relevant, appropriate, and timely assessments of how program funds are spent or implemented in this rapidly changing development context. Real-Time Evaluation (RTE), Prospective Evaluation (PE), and Mid-Term Reviews (MTR) offer the promise of timely and relevant evaluation, but with some drawbacks. Within the evaluation community there is still no clear, common understanding of what constitutes RTE and PE, or of their potential risks and rewards. This panel will introduce examples of crisis-related interventions by multilateral development banks and will explore the pros and cons of RTE, PE, and MTR and their implications for decision making.
|
|
Evaluating Program Interventions in a Rapidly Changing World: Exploring the Potential of Mid-term Reviews
|
| Chris Olson, European Bank for Reconstruction and Development, olsonc@ebrd.com
|
|
Chris Olson will give insight into the scope of the Global Financial Crisis in member countries of the European Bank for Reconstruction and Development (EBRD), and will discuss some of the measures taken by Multilateral Development Banks in the wake of the Crisis. The discussion will then focus on the challenges of using Mid-Term Reviews for active projects in an unstable environment, providing examples from real EBRD cases. The discussion will be relevant to participants who evaluate ongoing projects in rapidly changing and uncertain environments where reliable information cannot yet be obtained. The presenter will seek to answer questions such as: How should a Mid-Term evaluation be structured and conducted in such environments? How can Mid-Term Reviews be used to cope with uncertainties, complexities, and interdependencies? And how can lessons learned help improve a project during its remaining time?
|
|
|
Evaluating Program Interventions in a Rapidly Changing World: Exploring the Potential of Real Time and Prospective Evaluation
|
| Stoyan Tenev, World Bank, stenev@ifc.org
|
|
Stoyan Tenev will discuss the challenges of evaluating under uncertainty and will look at the complexities, dynamism, uncertainties, and interdependencies of the contexts in which evaluations, including development evaluations, take place. The discussion will be relevant to participants who evaluate in highly dynamic and uncertain environments where obtaining reliable and robust knowledge can be difficult and past experience can be a poor guide to the future. The presenter will seek to answer questions such as: How should an evaluation be structured and conducted in such environments? What methods can be used to cope with uncertainties, complexities, and interdependencies? What is the role of RTE and PE versus traditional ex-post assessments?
| |
|
Session Title: Thinking Again About Concept Mapping
|
|
Think Tank Session 354 to be held in CROCKETT A on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Presenter(s):
|
| Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
|
| Abstract:
This Think Tank will orient participants to the purpose and process of concept mapping as a means of documenting quantitative and qualitative changes in attitudes or knowledge and the dynamics of change. Methods and applications will focus on the essential elements of mapping (rather than elaborate data collection and analysis) that can be used in non-formal education programs. Following that orientation, small groups will be challenged to describe how the method might be adapted to diverse age and cultural groups, used to document quantitative versus qualitative growth, and serve as a context for tracking group process and maturation. The Think Tank will reconvene to share insights and outline the key elements and emphases of an Extension fact sheet that can be used to describe and apply concept mapping. Subsequently, evaluators will be engaged to refine and elaborate the use of concept mapping in Extension practice.
|
|
Session Title: Behavioral and Systematic Causal Influences in Health and Safety
|
|
Multipaper Session 355 to be held in CROCKETT B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Business and Industry TIG
|
| Chair(s): |
| Thomas Ward,
United States Army, thomas.wardii@us.army.mil
|
|
Evaluation of Health and Safety Incidents: The Search For Systemic Causes and Solutions
|
| Presenter(s):
|
| Katherine King, University of Michigan, krking@umich.edu
|
| Judith Daltuva, University of Michigan, jdal@umich.edu
|
| Abstract:
Most occupational accidents are attributable to two types of conditions: deviations, which are events that exist for a relatively short duration (seconds, minutes, days), and determining factors, which are more systemic conditions in the work environment that tend to be stable over extended periods of time (weeks, months, years). Under the facilitation of the University of Michigan, a joint skilled trades-management safety culture action research team at a large United Automobile Workers-represented automotive facility evaluated a health and safety incident. Using an in-depth method developed by Kjellan and Larsen (1981), the team identified determining factors at the physical/technical, organizational/economic, and social/individual levels. The methodology helped to broaden the team's perspective from a focus on worker behavior to more systemic contributions to workplace safety. The group identified several potential opportunities for positive interventions, many of which were implemented over the course of the first year of the project.
|
|
Organizational Assessment for Finding the Right Mode of Human-Robot Interaction
|
| Presenter(s):
|
| Se Jin Heo, University of Minnesota, heoxx005@umn.edu
|
| Jim Brown, University of Minnesota, brown014@umn.edu
|
| Abstract:
I am proposing to conduct a qualitative study of medical staff in a hospital where surgical robots are used. The emphasis will be on the lived experience of nurses, technicians, and surgeons in a hospital equipped with a surgical robot. Central to the research agenda will be an organizational assessment of the cultural change for innovation in health care services regarding the use of new technology. I will first assess the readiness to change of three groups (nurses, technicians, and surgeons) with respect to the use of new technology. In addition, to investigate the cultural change for innovation, I will focus on observing the hospital's error management culture in terms of both its tolerance for risk taking and failure and its training and intervention strategies for reducing errors. How the hospital deals with medical errors arising from the new technology is strongly related to staff stress management, because the way medical errors are handled inevitably influences staff attitudes toward innovation. How well staff cope with the new technology will, in turn, strongly influence the quality of patient care as well as their stress-related health.
|
| |
|
Session Title: Differing Perspectives of Quality Throughout an Evaluation
|
|
Multipaper Session 356 to be held in CROCKETT C on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the
|
| Chair(s): |
| Andrew Lejeune,
University of Alberta, andrew.lejeune@ualberta.ca
|
|
Examining Evaluators’ and Stakeholders’ Perspectives of Quality
|
| Presenter(s):
|
| Stanley Varnhagen, University of Alberta, stanley.varnhagen@ualberta.ca
|
| Jason Daniels, University of Alberta, jason.daniels@ualberta.ca
|
| Andrew Lejeune, University of Alberta, andrew.lejeune@ualberta.ca
|
| Cheryl Poth, University of Alberta, cpoth@ualberta.ca
|
| Abstract:
Evaluators have a responsibility to ensure the quality of the evaluation process based upon their knowledge and experience. However, since stakeholders often have different knowledge and experiences, their judgements of the quality of the evaluation will seldom be identical to those of the evaluators. This paper reports our perspectives as evaluators with regard to quality and how we have tried to maintain quality throughout the ever-changing evaluation process. Specifically, we discuss how we determine quality, the challenges we have experienced when stakeholders' perspectives seem to differ from our own, and the strategies we have used to bridge these divergent perspectives while still meeting the evaluation's goals.
|
|
|
Session Title: Crime and Justice TIG Business Meeting and Paper Presentations
|
|
Business Meeting and Multipaper Session 357 to be held in CROCKETT D on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Crime and Justice TIG
|
| TIG Leader(s): |
|
Roger Przybylski, RKC Group, rogerkp@comcast.net
|
|
Drug Court Evaluation on a Shoestring Budget
|
| Presenter(s):
|
| Roger Przybylski, RKC Group, rogerkp@comcast.net
|
| Abstract:
This paper presents the findings and lessons learned from a low-budget evaluation of an adult drug court program operating in a mid-size Tennessee city. The outcome evaluation employed a quasi-experimental research design that compared reoffending among drug court participants with that of a naturally occurring comparison group consisting of substance-abusing offenders who were eligible for but did not participate in the program. Other outcome measures included changes over time in Addiction Severity Index (ASI) and Client Evaluation of Self and Treatment (CEST) scores. The evaluation was conducted on a shoestring budget which, along with various other factors, posed significant obstacles and challenges. The evaluation's findings will be discussed, along with the challenges encountered during the course of the study and the strategies employed to address them.
|
|
Lessons Learned Evaluating an International Domestic Violence Program
|
| Presenter(s):
|
| Cecilia Hegamin-Younger, St George's University, chyounger@mac.com
|
| Rohan Jeremiah, St George's University, rjeremiah@sgu.edu
|
| Abstract:
The United Nations Development Fund for Women's Partnership for Peace Program (PFP) was established in 2005 to safeguard the rights of women by reducing the prevalence of domestic violence and risky behavior patterns among Caribbean men. This presentation will discuss the lessons learned from an ongoing two-year project evaluating the PFP program and its use of a Caribbean-specific intervention model.
|
| |
|
Session Title: Graduate Student and New Evaluators TIG Business Meeting
|
|
Business Meeting Session 358 to be held in SEGUIN B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Graduate Student and New Evaluator TIG
|
| TIG Leader(s): |
|
Gargi Bhattacharya, Southern Illinois University Carbondale, gargi@siu.edu
|
|
Nora Gannon, University of Illinois at Urbana-Champaign, ngannon2@illinois.edu
|
|
Jason Burkhardt, Western Michigan University, jason.t.burkhardt@wmich.edu
|
|
Session Title: Julia Child and Richard Feynman: An Oblique Approach to Evaluation Quality
|
|
Expert Lecture Session 359 to be held in REPUBLIC A on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Alice Willard, International Relief & Development, willardbaker@verizon.net
|
| Presenter(s): |
| Alice Willard, International Relief & Development, willardbaker@verizon.net
|
| Abstract:
Rather than looking at how to improve the quality of evaluations directly, this lecture uses two different careers as a modeling exercise for what constitutes excellence, what characteristics define those individuals, and then uses those characteristics to launch an analysis of their relevance to the quality of evaluation.
Julia Child and Richard Feynman are not, at first glance, a natural or logical pairing. Most basically, their careers can be divided into four quadrants: competence, innovation, persistence, and passion. The lecture will briefly touch on the specifics for each individual in those quadrants before using that heuristic to elaborate a model for evaluation quality.
Dr. Alice Willard is well known for lateral approaches in a career that encompasses policy, standards, training, design, quality control, and monitoring and evaluation for both donor and implementing partner organizations. She has worked in the health, agriculture, education, infrastructure, community/institutional development, relief, and community stabilization sectors of international development for almost 30 years.
|
|
Session Title: Improving the Design, Methods, and Data Quality of a Public Health Outcome Monitoring Project
|
|
Panel Session 360 to be held in REPUBLIC B on Thursday, Nov 11, 3:35 PM to 4:20 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Gary Uhl, Centers for Disease Control and Prevention, gau4@cdc.gov
|
| Discussant(s):
|
| Gary Uhl, Centers for Disease Control and Prevention, gau4@cdc.gov
|
| Abstract:
The Centers for Disease Control and Prevention (CDC) evaluates the outcomes of HIV prevention behavioral interventions as they are implemented by CDC-funded community-based organizations through the Community-based Organization Behavioral Outcomes Project (CBOP). Multiple client-, agency-, and community-level factors at these organizations affect the CBOP design, methods, and data quality. The first speaker will present an overview of CBOP and will describe how evaluation and data quality challenges and lessons learned were identified by agency and CDC staff. The second speaker will present specific lessons learned across CBOP evaluations and describe how these have been used to improve the design, methods, and data quality of subsequent CBOP evaluations. The panel will be moderated by Gary Uhl, Team Leader of the Evaluation Studies Team in the Program Evaluation Branch.
|
|
The Community-based Organizations Behavioral Outcomes Project: Identifying Challenges and Lessons Learned to Improve Evaluation Design, Methods, and Data Quality
|
| Andrea Moore, MANILA Consulting Group Inc, dii7@cdc.gov
|
| Elizabeth Kalayil, MANILA Consulting Group Inc, ehk2@cdc.gov
|
| Tobey Sapiano, Centers for Disease Control and Prevention, gvf8@cdc.gov
|
| Holly Fisher, Centers for Disease Control and Prevention, hkh3@cdc.gov
|
| Alpa Patel-Larson, Centers for Disease Control and Prevention, aop2@cdc.gov
|
| Gary Uhl, Centers for Disease Control and Prevention, gau4@cdc.gov
|
|
The Centers for Disease Control and Prevention (CDC) conducts outcome monitoring evaluations of HIV prevention behavioral interventions through the Community-based Organizations Behavioral Outcomes Project (CBOP). Through CBOP, CDC funds community-based organizations to collect quantitative and contextual data to assess clients' risk behaviors between baseline and two post-intervention follow-ups, and to describe intervention delivery in real-world settings. In addition, CDC aims to improve the quality of the outcome monitoring project by collecting information on the implementation of the evaluation across agencies. For example, CDC uses semi-structured, open-ended questions from progress reports, as well as communication such as individual and group calls with agencies, to identify challenges and formulate solutions for data collection, data entry, and quality assurance. These data are combined with site visit observations and other experiences of CDC staff to develop lessons learned.
|
|
|
The Application of Lessons Learned to the Design, Methods and Data Quality of The Community-based Organizations Outcome Monitoring Project
|
| Tobey Sapiano, Centers for Disease Control and Prevention, gvf8@cdc.gov
|
| Gary Uhl, Centers for Disease Control and Prevention, gau4@cdc.gov
|
| Andrea Moore, MANILA Consulting Group Inc, dii7@cdc.gov
|
| Elizabeth Kalayil, MANILA Consulting Group Inc, ehk2@cdc.gov
|
| Adanze Eke, MANILA Consulting Group Inc, hzi4@cdc.gov
|
| Holly Fisher, Centers for Disease Control and Prevention, hkh3@cdc.gov
|
| Tanisha Grimes, Centers for Disease Control and Prevention, imf2@cdc.gov
|
| Tamika Hoyte, MANILA Consulting Group Inc, imb5@cdc.gov
|
| Alpa Patel-Larson, Centers for Disease Control and Prevention, aop2@cdc.gov
|
| Ekaterine Shapatava, Northrop Grumman Corporation, fpk7@cdc.gov
|
|
Since 2006, 22 community-based organizations have been funded to conduct outcome monitoring on five HIV prevention behavioral interventions through the Community-based Organizations Behavioral Outcomes Project (CBOP). To improve CBOP evaluations, CDC modifies the evaluation methods and data quality assurance procedures for each CBOP based on lessons learned from the previous one. Over the course of CBOP, modifications based on lessons learned have been made to training, data collection, data entry, data reporting, and quality assurance procedures. For example, during the first two CBOPs, agency staff reported challenges interpreting the data variables; in response, CDC made changes to the data collection instrument and recommended additional training for client interviews. In turn, subsequent CBOPs implemented more rigorous training on the data variables and on interviewer techniques. The design of future outcome monitoring projects will continue to evolve and be strengthened by the insight gained through this process.
| |