Session Title: Evaluating National and State Policy Change Efforts: Campaigner and Funder Perspectives on Evaluation Context, Methods and Lessons
Panel Session 102 to be held in Panzacola Section F1 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Lester Baxter, Pew Charitable Trusts, lbaxter@pewtrusts.org
Discussant(s):
Jacqueline Williams Kaye, Atlantic Philanthropies, j.williamskaye@atlanticphilanthropies.org
Abstract: A sizable share of the growing discussion of advocacy evaluation practice relates to campaigns that seek to effect policy change at the local level, often involving grassroots efforts led by community-based non-profits. This panel seeks to build on this body of work by focusing on advocacy campaigns that seek to inform or effect policy change at the national or state level, recognizing and exploring the ways in which they operate in a context that can differ sharply from that of local policy change efforts. Informed by The Pew Charitable Trusts' Planning and Evaluation group's experiences over a twelve-year period, the panel will first address methods, incorporating real-world examples and sharing lessons panelists have learned about how to design and conduct successful evaluations of advocacy efforts that aim for state or national policy change. We will complement this focus on methods and practice with panel discussions designed to incorporate the pivotally important context of a key audience for advocacy evaluations, the senior strategists who design and implement the campaigns, and the types of evaluative information and lessons those campaign designers value most.
Session Title: Multiple Dimensions in Needs Assessment Application
Multipaper Session 103 to be held in Panzacola Section F2 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Needs Assessment TIG
Chair(s):
Janet Matulis, University of Cincinnati, janet.matulis@uc.edu
Discussant(s):
Ann Del Vecchio, Alpha Assessment Associates, delvecchio.nm@comcast.net
Session Title: Demonstration of Evaluation Frameworks in a Variety of Health Projects
Multipaper Session 104 to be held in Panzacola Section F3 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Stacey Farber, Cincinnati Children's Hospital Medical Center, stacey.farber@cchmc.org
Session Title: Complex Systems Evaluation and Dynamic Logic Modeling: Lessons From the Field
Panel Session 105 to be held in Panzacola Section F4 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG and the Human Services Evaluation TIG
Chair(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Discussant(s):
Margaret Hargreaves, Mathematica Policy Research Inc, mhargreaves@mathematica-mpr.com
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Abstract: Understanding the differences between self-organizing and organized system dynamics can be a key to useful planning and evaluation of initiatives in complex settings. The presenters in this session are exploring ways to visually and conceptually integrate attention to self-organizing system dynamics alongside attention to the planned, organized system dynamics typically represented in logic models. The initiatives (funded all or in part by the federal Children's Bureau) are an evidence-based home visitation program, parental involvement in early childhood programs, and a quality improvement center for early childhood maltreatment prevention. The panelists illustrate how they framed their thinking, gathered information, and visually represented findings about both organized and self-organizing system dynamics. They particularly address the nature of logic models in these situations. Their focus is on helping users of their work understand options for influencing the key dynamics of their situation to move in a desired direction.
Session Title: Interactive Techniques to Facilitate Evaluation Learning
Demonstration Session 106 to be held in Panzacola Section G1 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Presenter(s):
Ellen Taylor-Powell, University of Wisconsin, ellen.taylor-powell@ces.uwex.edu
Nancy Brooks, University of Wisconsin, nancy.brooks@ces.uwex.edu
Chris Kniep, University of Wisconsin, christine.kniep@ces.uwex.edu
Abstract: At AEA 2008, we had a room full of engaged participants looking for new ideas they could use to help non-evaluators engage in and learn evaluation. We will build on the techniques we shared last year with some new ideas from our toolbox that bring evaluation to life. This year we will include learning peripherals, games, and creative expression that facilitate active learning of tough evaluation concepts or tasks. We will demonstrate these interactive techniques and provide clear explanations of how we use each to build evaluation capacity. We will discuss their strengths and weaknesses and their applications for other evaluation learning purposes, in different settings with different audiences. Bring your own techniques to share.
Session Title: The Social Context of Water Quality Improvement Evaluation: Issues and Solutions
Panel Session 107 to be held in Panzacola Section G2 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand and the Environmental Program Evaluation TIG
Chair(s):
Linda P Thurston, Kansas State University, lpt@ksu.edu
Abstract: State and Federal agencies support a watershed approach to better address water quality problems in the U.S. and to build the capacity of watershed stakeholders to develop and implement effective, comprehensive programs for watershed protection, restoration, and management. This local watershed approach to addressing water quality problems involves water quality specialists, technical advisors, and local stakeholders such as landowners and county extension agents. Contextual issues are a vital consideration in planning and evaluation for local watershed improvements. Evaluation practitioners have much to offer to this watershed approach. Evaluation is essential to developing, assessing, and improving successful curriculum design, trainings, demonstrations, workshops and conservation practices. Evaluators can help by involving stakeholders, developing consumer-friendly evaluation frameworks, building evaluation capacity, identifying indicators, and developing assessment tools. This panel will discuss contextual issues in water quality improvement evaluation and will provide examples of evaluation tools and practices in water quality improvement work.
Session Title: Dealing Effectively With Different Views and Perspectives: The Circular Dialogue Technique
Skill-Building Workshop 108 to be held in Panzacola Section H1 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Qualitative Methods TIG
Presenter(s):
Richard Hummelbrunner, OEAR Regional Development Consultants, hummelbrunner@oear.at
Abstract: Circular Dialogues are systemic forms of communication among people who come from or act in different contexts. The aim is to use the different perspectives and views of the participants as a resource, e.g., for appraising/validating diverse experiences or identifying joint solutions. These dialogues can therefore be used very effectively by evaluators dealing with different stakeholder perspectives. After a short introduction to the principles and rules to be followed in a Circular Dialogue, participants will be invited to break into sub-groups and to agree on one issue for each to deal with. In addition, at least three perspectives (or roles) will be identified and a facilitator appointed. Then several dialogue sessions will be run in parallel to give the participants a hands-on opportunity to practice. In the final session, the experience of these dialogue sessions will be discussed, with comments from the presenter, who will have observed them, and time for participants' questions.
Session Title: Design and Construction of Measures
Demonstration Session 109 to be held in Panzacola Section H2 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Lee Sechrest, University of Arizona, sechrest@u.arizona.edu
Katherine McKnight, Pearson Corporation, kathy.mcknight@gmail.com
Mei-kuang Chen, University of Arizona, kuang@u.arizona.edu
Abstract: Measures used for evaluation should be deliberately and carefully conceptualized, designed, and constructed in order to provide the best possible information. That is true even if, eventually, measures are selected from among those already available. Three classes of measures can be identified: common factor measures, emergent variable measures, and quasi-latent variable measures. Each of these classes of measures entails particular assumptions about its structure and requires a distinctly different approach to construction and interpretation. The three classes of measures are defined and differentiated, and the implications of their definitions for reliability and validity are outlined. Procedures for the construction and assessment of each of the three types of measures are presented and illustrated with simulated and real data. Their use in evaluation and their proper interpretations are discussed.
Session Title: Internal Evaluation: Its Unique Contexts, Challenges, Opportunities and Uses
Panel Session 110 to be held in Panzacola Section H3 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Wendy DuBow, University of Colorado at Boulder, wendy.dubow@colorado.edu
Discussant(s):
Wendy DuBow, University of Colorado at Boulder, wendy.dubow@colorado.edu
Abstract: Internal evaluators often encounter both challenges and opportunities unique to their status as insiders. The panelists in this session have all worked as both external and internal evaluators and, therefore, bring a perspective on how the issues they currently face as internal evaluators differ from, or are similar to, those they have faced as external evaluators. Many of these issues have a direct impact on the use of the evaluations they have conducted. The three panelists will share the challenges and opportunities they have encountered as internal evaluators and focus on how these issues affected evaluation utilization. The discussant will provide an opportunity for audience participation in a discussion of the larger philosophical and practical issues these situations spark. As the panelists and discussant all work in different fields, the interdisciplinary discussion promises to be stimulating.
Session Title: Evaluation During Challenging Economic Times: Strategies for Non-profits and Foundations
Think Tank Session 111 to be held in Panzacola Section H4 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Courtney Malloy, Vital Research, courtney@vitalresearch.com
Pat Yee, Vital Research, patyee@vitalresearch.com
Discussant(s):
Courtney Malloy, Vital Research, courtney@vitalresearch.com
Pat Yee, Vital Research, patyee@vitalresearch.com
Harold Urman, Vital Research, hurman@vitalresearch.com
Abstract: This session will examine how non-profit organizations and foundations can continue to support evaluation activities during challenging economic times. Participants will generate strategies that organizations can use to control costs while still collecting and analyzing high-quality evaluation data. Participants will choose to participate in two of three breakout groups for 20 minutes each throughout the 90-minute session. Breakout groups will discuss the following topics: (1) choosing what, when and how to evaluate (e.g., which programs and/or evaluation questions, timing, funding options, staffing, etc.); (2) designing evaluations (e.g., instrumentation, sampling, data sources, use of findings, etc.); and (3) leveraging technology: making the right investment choices. Summary reports from each topic will be provided by breakout leaders, followed by comments and questions from participants. Results from the think tank will be documented and made available to AEA as well as posted on an evaluation resources website hosted by the facilitators.
Session Title: Evaluation in International Development and Oversight
Multipaper Session 112 to be held in Sebastian Section I1 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG and the International and Cross-cultural Evaluation TIG
Chair(s):
Sarah Brewer, United States Department of State, brewerse@state.gov
Session Title: Strengths and Limitations of Collaborative, Participatory, and Empowerment Evaluation Approaches
Think Tank Session 113 to be held in Sebastian Section I2 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
Discussant(s):
Rita O'Sullivan, University of North Carolina, ritao@unc.edu
Lyn Shulha, Queen's University, lyn.shulha@queensu.ca
Abraham Wandersman, University of South Carolina, wanderah@gwm.sc.edu
Liliana Rodriguez-Campos, University of South Florida, lrodriguez@coedu.usf.edu
Abstract: This will be a participant-focused, interactive session. Members of the audience will be asked to work together and list the strengths of collaborative, participatory and empowerment evaluation approaches on poster paper. They will rotate and list the major limitations of each approach. The group will also be asked to rotate one more time and complete a list of recommendations to respond to their list of limitations. Leaders in the group will be asked to report back a summary of their findings and insights. A panel of experts in the field will respond to the lists and add their reflections. Panel experts will include: Collaborative Evaluators Rita O'Sullivan and Liliana Rodriguez-Campos; Participatory Evaluator Lyn Shulha; and Empowerment Evaluators David Fetterman and Abraham Wandersman.
Session Title: Can Whole Systems Be Evaluated? If so, How?
Think Tank Session 114 to be held in Sebastian Section I3 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG and the Systems in Evaluation TIG
Presenter(s):
Molly Engle, Oregon State University, molly.engle@oregonstate.edu
Discussant(s):
Martha Ann Carey, Maverick Solutions, marthaann123@sbcglobal.net
Mary Ann Scheirer, Scheirer Consulting, mascheirer@gmail.com
Andrea M Hegedus, Northrop Grumman Corporation, ahegedus@cdc.gov
Abstract: As evaluators, we are well aware that we work within larger systems that form the context for both the evaluand and the evaluation process. Rarely is the evaluand a complete system itself. Other questions then become relevant: What improvements can result if the whole system is evaluated? Can evaluation feasibly address initiatives intended to change the ways a broader system operates? This session will address these and other related questions. The facilitators' work with whole systems has raised these questions, with no easy answers. Evaluators tend to evaluate parts of systems, but have neglected putting the parts together to evaluate the whole. Understanding interrelations among the parts may be a key to improvement in a specific program within its systemic context. The participants will contribute to this thought-provoking session by sharing work they have done with whole systems and taking away suggestions for exploring whole systems evaluation further.
Session Title: We See With More Than Our Eyes: Gathering Data in Migrant and Native American Communities
Skill-Building Workshop 115 to be held in Sebastian Section I4 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Presenter(s):
David Sul, Sul & Associates LLC, dsul@sulandassociates.com
Charlene Sul, Sul & Associates LLC, csul@sulandassociates.com
Abstract: The storytelling tradition is strong in Migrant and Native American communities. Expert storytellers use more than their voices to transport the listener to a distant time and place. The techniques presented in this workshop remain true to that tradition. Incorporating modern technologies, the evaluators attempt to share the program participant experience and, in so doing, add even more context to the evaluation. The evaluators will present findings from a collection of methods: the family portrait, digital stories, photo mosaics and word clouds. These techniques are used to draw out hidden messages in the lives of program participants. Additionally, they can be used to monitor individual or group progress both instantaneously and over time. Workshop participants will take part in a live deployment of selected techniques. A debriefing of the experience will focus on the impact such techniques may have as critical reports are being created. Further, participants will be left to consider how to substantiate the evidence created in summary reports with prior work and a credible framework for analysis.
Session Title: Incorporating the International Political and Cultural Context in Evaluation
Multipaper Session 116 to be held in Sebastian Section L1 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Gwen Willems, Willems and Associates, wille002@umn.edu
Discussant(s):
Gwen Willems, Willems and Associates, wille002@umn.edu
Session Title: Context and Sector-Specific Evaluation: Agriculture, Water and Sanitation, Infrastructure
Multipaper Session 117 to be held in Sebastian Section L2 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Discussant(s):
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Session Title: Symphonic Capability Curve
Demonstration Session 118 to be held in Sebastian Section L3 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Alonford Robinson, Symphonic Strategies, ajrobinson@symphonicstrategies.com
Abstract: The Symphonic Capability Curve (SCC) is an organizational assessment instrument that measures an organization's collective capability in 12 key areas. Using more than 100 separate attributes of high performance, the self-assessment tool helps leaders plot their performance against an ideal, and, more importantly, against a benchmark of peer organizations. The power of the attributes we have chosen rests in their composition: they are all outcomes-based measures. We measure more than just potential to act. We measure the outcomes of organizational action.
Session Title: Statewide Evaluation Studies: Issues in Design, Implementation, Reporting and Policy
Multipaper Session 119 to be held in Sebastian Section L4 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Jordan Horowitz, Cal-PASS, jhorowitz@calpass.org
Discussant(s):
Andrew Newman, Mid-continent Research for Education and Learning, anewman@mcrel.org
Session Title: Insights From Rapid Evaluations: Improving School Programs for Better Results
Multipaper Session 120 to be held in Suwannee 11 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Doryn Chervin, ICF Macro, doryn.d.chervin@macrointernational.com
Discussant(s):
Leah Robin, Centers for Disease Control and Prevention, ler7@cdc.gov
Abstract: The Division of Adolescent and School Health (DASH) of the Centers for Disease Control and Prevention (CDC) contracted with Macro International Inc. to conduct rapid evaluations of the effectiveness of innovative programs addressing child and adolescent health issues. Rapid evaluations use a mixed-methods approach together with rapid, iterative, and team-based methods. DASH's use of the method has been particularly valuable for quickly gauging whether an initiative is effective and where program improvements are worthwhile. This session will describe one such program and the methods and measures used to evaluate it, how unexpected results were understood, and lessons learned for future rapid evaluations. Discussants will reflect on the project to date.
Value of the Rapid Evaluation Method
Leah Robin, Centers for Disease Control and Prevention, ler7@cdc.gov
Marian Huhman, University of Illinois at Urbana-Champaign, mhuhman@illinois.edu
Catherine Rasberry, Centers for Disease Control and Prevention, catherine.rasberry@cdc.hhs.gov
Rapid evaluations are designed to be completed within a relatively short time frame to provide information about the impact of programs, policies, and initiatives. In a rapid evaluation, appropriate stakeholders are engaged in the process to identify salient evaluation questions, develop and implement evaluation plans, and collect and analyze data. This method aims to describe program activities, short-term and intermediate outcomes, and impacts. Further, rapid evaluations can increase accountability by identifying why program components are not being implemented as planned. The value of rapid evaluations is the timely provision of data to encourage action. Results from rapid evaluations can also have implications for other organizations aiming to adopt and implement similar programs and can provide insights for program improvement and quality information for making decisions.
Conducting a Rapid Evaluation in a Local Education Agency in the Southeast
Karen Cheung, ICF Macro, karen.cheung@macrointernational.com
Pamela Lunca, ICF Macro, pamela.j.lunca@macrointernational.com
Sarah Merkle, Centers for Disease Control and Prevention, smerkle@cdc.gov
In 2006, CDC/DASH launched an initiative to provide local education agencies (LEAs) with evaluation technical assistance on asthma management programs using the rapid evaluation method. First, an evaluability assessment was conducted to determine the program's readiness and capacity for evaluation. Then, using participatory, team-based, and iterative processes, the authors assisted a large LEA in the Southeast to develop evaluation questions and an evaluation plan. Individual interviews, focus groups, and questionnaires were used to collect information from various stakeholders, including elementary school students and high school students; principals, nurses, and other program staff; key district staff members; and parents/guardians of elementary school students. The authors will describe how the rapid evaluation method provided the LEA with important feedback to strengthen the overall management of their asthma program by facilitating uniform implementation of program components across all sites and maintaining detailed records of asthma program services to allow for future evaluation activities.
Interpretation and Communication of Unexpected Findings of a School-Based Asthma Program in a Local Education Agency in the Southeast
Dana Keener, ICF Macro, dana.c.keener@macrointernational.com
Karen Cheung, ICF Macro, karen.cheung@macrointernational.com
Catherine Rasberry, Centers for Disease Control and Prevention, catherine.rasberry@cdc.hhs.gov
Just like any other approach to evaluation, rapid evaluations can reveal unexpected findings. This presentation will describe unanticipated results that emerged from a rapid evaluation of a school-based asthma program in a large LEA in the Southeast. In addition, the presentation will describe efforts to explore possible confounding variables that might explain the findings; mixed-method techniques used to understand and interpret the findings; and the process for communicating the findings back to the school district. Finally, recommendations for program improvement and future evaluation that stemmed from the unexpected findings will be shared.
Insights From Conducting Rapid Evaluation in School Settings
Doryn Chervin, ICF Macro, doryn.d.chervin@macrointernational.com
Dana Keener, ICF Macro, dana.c.keener@macrointernational.com
Karen Cheung, ICF Macro, karen.cheung@macrointernational.com
Leah Robin, Centers for Disease Control and Prevention, ler7@cdc.gov
This presentation will share key lessons and insights gained from conducting rapid evaluations in school settings. Although some lessons may be specific to school settings, others apply in other settings as well. Lessons to be discussed include: (1) form strong working relationships with evaluation stakeholders; (2) identify and address gaps in implementation data early in the process; (3) develop formal and ongoing opportunities for sharing evaluation results as they become available; and (4) engage stakeholders in the interpretation of the results. In addition, key questions that emerged from the evaluations will be raised for group discussion. For example, would rapid evaluations benefit from a more robust evaluability assessment prior to the evaluation? Finally, tensions associated with making mid-course corrections and improvements while the evaluation is still ongoing will also be discussed.
Session Title: Assessing Special Needs Populations in the Context of High-Stakes Testing
Multipaper Session 121 to be held in Suwannee 12 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Jane Beese, University of Akron, jab128@uakron.edu
Session Title: Assessing the Impact of Three Dropout Prevention Strategies on Student Academic Achievement in Grades 6-12 in Texas
Multipaper Session 122 to be held in Suwannee 13 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Thomas Horwood, ICF International, thorwood@icfi.com
Discussant(s):
John Kuscera, Texas Education Agency, john.kucsera@tea.state.tx.us
Abstract: The Texas Legislature authorized and funded three pilot programs so that select districts and charter schools receiving grants could develop and implement programs to prevent and reduce dropout, increase high school success, and improve college and work readiness in public schools. In addition, the Texas Legislature authorized and funded the evaluation of these pilot programs and required that the evaluation assess the impact of the programs on student performance, high school completion rates, college readiness of high school students, teacher effectiveness in instruction, and the cost-effectiveness of each program. The objectives of the evaluation are to do each of the following for each pilot program: (1) evaluate the implementation of each program, (2) evaluate the impact of each program on student outcomes (e.g., achievement, college readiness, workforce readiness, graduation), and (3) evaluate the impact of each program on other relevant outcomes (e.g., teacher effectiveness).
An Evaluation of the Intensive Summer Program (ISP) in Texas Schools
Rosemarie O'Conner, ICF International, ro'conner@icfi.com
John Kuscera, Texas Education Agency, john.kucsera@tea.state.tx.us
Carol Kozak Hawk, ICF International, carolkozakhawk@gmail.com
The Intensive Summer Program (ISP) provides summer instruction for "at risk" students in Texas with the goals of reducing dropout, increasing school achievement, and promoting college and workforce readiness skills among students. Using a mixed-methods approach, this presentation examines statistical analyses of student achievement data and the survey results from school administrators, teachers, and students. Hierarchical linear models (HLM) are used to examine the results of student achievement on standardized tests to determine whether the ISP program positively affected student academic achievement over time. Student surveys are used to further understand the relationships uncovered in statistical analyses, while surveys from the school administrators and teachers shed light on the additional positive benefits from the ISP program. Concluding remarks focus on the evaluation of the ISP program and identify future directions and lessons learned from this evaluation.
An Evaluation of the Collaborative Dropout Reduction Program in Texas Schools
Frances Burden, ICF International, fburden@icfi.com
John Kuscera, Texas Education Agency, john.kuscera@tea.state.tx.us
Sarah Decker, ICF International, sdecjer@icfi.com
The Collaborative Dropout Reduction program is a school-based program aimed at promoting academic achievement and college and workforce readiness in students. Considerable differences exist among the six Collaborative programs in their approaches to developing students' academic and workforce readiness skills. This evaluation examines student achievement across multiple campuses using HLM, with particular attention to the larger programmatic differences. Additional analyses focus on student self-reported assessments of ethical workplace behaviors and their own college and workforce readiness, which serve to offer greater insight into the statistical results. Finally, interviews with the Collaborative programmatic staff and school administrators uncover additional school-wide positive benefits from the Collaborative program. Concluding remarks will focus on the difficulties encountered in evaluating six diverse Collaborative programs and how commonalities and differences were uncovered and measured.
An Evaluation of the Impact of Teacher Mathematics Instructional Coaches Training on Teachers and Schools
Amy Mack, ICF International, amack@icfi.com
John Kuscera, Texas Education Agency, john.kucsera@tea.state.tx.us
The Mathematics Instructional Coaches (MIC) pilot program provides assistance in developing the content knowledge and instructional expertise of teachers who instruct "at risk" students in mathematics at middle and high schools. The evaluation of the MIC program examines whether strengthening mathematics teachers' knowledge, skills, and abilities led to improvements in teachers' self-efficacy and beliefs about teaching. Surveys and interviews were collected from stakeholders in the MIC program, including administrators and grant coordinators, in order to further understand the results from teacher self-reported measures. Finally, this presentation will conclude with a discussion of the challenges the MIC program faced in developing student-level and school-level findings from programs aimed at training teachers.
Session Title: Measuring the Fidelity of Implementation of Response to Intervention for Behavior (Positive Behavior Support) Across All Three Tiers of Support
Demonstration Session 123 to be held in Suwannee 14 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Karen Childs, University of South Florida, childs@fmhi.usf.edu
Don Kincaid, University of South Florida, kincaid@fmhi.usf.edu
Heather George, University of South Florida, hgeorge@fmhi.usf.edu
Abstract: This workshop will provide information on the development, validation and use of implementation fidelity instruments available for use by schools in evaluating school-wide positive behavior support (otherwise known as response to intervention for behavior). The Benchmarks of Quality (BoQ) is a research-validated self-evaluation tool for evaluating fidelity of school-level implementation of Tier 1/Universal level behavior support. Participants will receive information on the theoretical background, development and validation of the BoQ for Tier 1. Participants will learn how to complete the instrument, how the instrument is used by school, district and state teams to monitor implementation fidelity, and how to use results to improve implementation. Participants will also receive information about a new instrument in development: the Benchmarks of Quality for Tier 2/Supplemental and 3/Intensive levels of support. This discussion will include an explanation of instrument development and results of the validation pilot study.
Session Title: Walking the Tightrope: Strategies for Conducting Evaluations Within the Political Contexts of School Districts
Panel Session 124 to be held in Suwannee 15 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Jennifer Coyne Cassata, Prince William County Schools, cassatjc@pwcs.edu
Abstract: Decision-making by local school boards takes place within an intensely political context. Stakeholders, particularly parents, have a direct voice, and many groups are mobilizing to actively shape policy decisions. Evaluators working within and for school districts find themselves in a situation where frequently changing political contexts influence the methods used, the extent to which evaluation findings are utilized, and even evaluators' ability to adhere to the Program Evaluation Standards. This panel discussion will include evaluators working within neighboring school districts who will share sample experiences and how they navigated through those experiences to maintain high-quality practice and encourage effective use of the evaluation process and findings.
Session Title: The Influence of Evaluators' Principles and Clients' Values on Contextually-Bound Evaluation Resource Decisions
Panel Session 125 to be held in Suwannee 16 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@uiuc.edu
Discussant(s):
A Rae Clementz, University of Illinois at Urbana-Champaign, clementz@illinois.edu
Abstract: As evaluators, we hold certain principles (enacted values) that guide our practice, as do the clients we serve. This panel will discuss how those principles influence our work, especially in the decisions we must make given the resources available in a particular context. The extent to which certain principles and resources are negotiable or non-negotiable across evaluation or evaluation capacity building settings will also be explored. Particular attention will be paid to the enacted values guiding the relationships between the evaluator and the client, who are key resources in determining the direction of the evaluative effort and ensuring its successful implementation and utility.
Session Title: Informal Environments: A Sampler of Audience Research and Evaluation From the Visitor Studies Association
Multipaper Session 126 to be held in Suwannee 17 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Cheryl Kessler, Institute for Learning Innovation, kessler@ilinet.org
Discussant(s):
Joe E Heimlich, Ohio State University, heimlich.1@osu.edu
Abstract: What do evaluation and research in informal learning environments look like? Who does this work? What are the challenges and constraints for evaluating informal learning environments? Expanding the discussion in New Directions for Evaluation's 2005 issue, Evaluating Nonformal Education Programs and Settings, members of the Visitor Studies Association (VSA), an international network of professionals committed to understanding and enhancing visitor experiences in informal settings through research, evaluation and dialogue, will present a showcase of studies in informal environments. The showcase will include evaluation and research studies conducted for a performing arts organization; studies conducted in history, science, natural history, and children's museums; studies aimed at specific and general audience learning; and studies conducted by both internal and external evaluators.
Approaches to Measuring Identity in Informal Learning Environments
Kirsten Ellenbogen, Science Museum of Minnesota, kellenbogen@smm.org
Kirsten Ellenbogen, Director of Evaluation and Research in Learning at the Science Museum of Minnesota, will present on recent work in informal learning environments to define identity, measure identity, and integrate technology into identity-related measurements. Discussion includes criteria for developing an identity as a learner, the interrelationship of identity and participation, and the particular importance of identity in informal learning environments. Approaches include video-based reflective interviews, traditional and technology-supported journaling, and conversation analysis. Ellenbogen is a founding officer of the Informal Learning Environments Research SIG of the American Educational Research Association, senior chair of the Informal Science Education Strand of the National Association for Research in Science Teaching, training coordinator of the Visitor Studies Group (UK), and President of VSA.
Concurrent Evaluations in a Single Institution
Saul Rockman, Rockman et al, saul@rockman.com
Saul Rockman, President of Rockman et al and VSA Board member, will present on evaluations conducted in science centers and natural history museums. Within a single institution, visitors include school groups, seniors, young adults, and multi-generation families; programs are designed to appeal to any and all of these groups. Concurrent evaluations in one institution include: assessing the perceived value of a multimedia exhibit for adults and families, the educational value of school group visits and programs for teachers, studies of learning from a planetarium show and visitor interest in future programs, how partnerships between public institutions and the private sector yield benefits for both, and the contributions of evening programs and a Web 2.0 presence for young adults to increase membership and attendance. Each evaluation engages multiple methods and strategies, varying immediate and longitudinal impacts, and a concern for data interpretation to provide actionable information for institutional and programmatic decision makers.
Evaluating a Museum Youth Program for Social Change and Civic Engagement
Mary Ellen Munley, MEM & Associates, maryellen@mem-and-associates.com
Mary Ellen Munley, Principal of MEM & Associates and Past President of VSA, will present on a study with program staff of the US Holocaust Memorial Museum to evaluate Bringing the Lessons Home, the museum's youth program for area teens. The evaluation focused on the ways in which the attitudes and actions of participating youth changed over the course of their involvement in the program, how those changes contributed to personal transformation and increased civic engagement, and how the participants became catalysts for social change in their communities and in the museum. As they continue to work on moving the program into its next phase of development, program staff are actively using the study to gain a deeper understanding of the relationship among the anatomy of the program design, outcomes for participants, and broader impact.
Evaluation at the Denver Museum of Nature and Science (DMNS)
Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
Kathleen Tinworth, Director of Visitor Research & Program Evaluation at the Denver Museum of Nature & Science (DMNS) and member of VSA's Professional Development Committee, will present on evaluation conducted for a unique and distinctive first-person enactment program in 2007. Two multi-method evaluations (including exit surveys, visitor and enactor interviews and focus groups, observations, and tracking and timing studies) have been conducted to assess qualitative and quantitative impacts that the enactor program has on visitor experience at DMNS. The first evaluation focused on the enactors' role within a temporary exhibition (Titanic: The Artifact Exhibition) and the second examined their ongoing work in the Museum's permanent diorama halls.
Evaluating the Long Island Children's Museum's Be Together, Learn Together Program
Cheryl Kessler, Institute for Learning Innovation, kessler@ilinet.org
Cheryl Kessler, Research Associate with the Institute for Learning Innovation and VSA Board member, will present on evaluation conducted for the Long Island Children's Museum's (LICM) Be Together, Learn Together program, a partnership with the Nassau County, NY, Department of Health & Human Services (DHHS) to provide support to children and families receiving services from social service agencies.
Session Title: Hidden Possibilities: Building Stakeholder Capacity To Utilize Demographic Data
Demonstration Session 127 to be held in Suwannee 18 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Paul St Roseman, Sakhu and Associates LLC, pstroseman@gmail.com
Abstract: This demonstration presents evaluation service delivery approaches used to develop and support the capacity of organizational stakeholders to compile, manage, and utilize demographic data to: (a) clarify emerging service models or (b) improve understanding of models that already exist but have never been documented. Through the review of three case examples, participants will examine: (1) essential evaluation frameworks/models used to inform service delivery approaches to work with program managers and their staff members to identify demographic data, (2) tools and techniques used to build stakeholder knowledge and capacity to collect and manage demographic data, and (3) approaches used to guide stakeholder effort to use demographic data for fundraising, program planning/development, and accountability reporting. This presentation is most appropriate for evaluation practitioners who collaborate with education or human service program managers and staff members to design, implement, and sustain evaluation processes, as well as utilize evaluation products.
Roundtable Rotation I: Sharing Lessons Learned In Implementing State Outcomes Systems for Alcohol, Drug Abuse, and Mental Health Services
Roundtable Presentation 128 to be held in Suwannee 19 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Diana Seybolt, University of Maryland Baltimore, dseybolt@psych.umaryland.edu
Margaret Cawley, National Development and Research Institutes Inc, cawley@ndri-nc.org
Abstract: Each of the presenters has been involved in the development and use of a state outcomes system (the Maryland Outcomes Measurement System [OMS] and North Carolina's Treatment Outcomes and Program Performance System [NC-TOPPS], respectively). Each will provide a brief presentation regarding their system and the contextual influences that have affected system development and use. Group discussion will provide an opportunity for attendees to discuss their experiences, learn from one another, and brainstorm regarding challenges. Questions to be addressed include: What contextual influences have affected your work in outcomes measurement? What roles do government and other stakeholders play? How have you addressed competing priorities? What has aided or impeded the development or ongoing use of your system? How are you analyzing and presenting data? Who has access to data? What are you doing to promote use of the data for quality assurance purposes? And what are your lessons learned?
Roundtable Rotation II: Working With Consumers of Mental Health Services in Evaluations: Thinking About the Challenges and Opportunities
Roundtable Presentation 128 to be held in Suwannee 19 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Amanda Jones, University of Maryland Baltimore, amjones@psych.umaryland.edu
Jennifer Kulik, University of Maryland Baltimore, jkulik@psych.umaryland.edu
Clarissa Netter, Maryland Mental Hygiene Administration, netterc@dhmh.state.md.us
Abstract: Evaluations in a public mental health system are improved when evaluators collaborate with all of the system's stakeholders, including the consumers of the public mental health system's services. The presenters, two University of Maryland evaluators and a consumer liaison from Maryland's Mental Hygiene Administration, have learned that consumer participation in evaluations presents both challenges and opportunities. After presenters speak briefly about their experiences, roundtable participants will discuss a number of topics related to working with consumers in the context of an evaluation, including the roles consumers can take in evaluations (as evaluation partners and/or evaluation participants); the most effective techniques and methods for empowering consumers in their roles during each phase of an evaluation (from evaluation planning to results dissemination); and the contextual elements (e.g., evaluation settings and societal influences) that can affect the evaluation experience for consumers.
Roundtable Rotation I: Seven Effective Strategies for Assessing and Demonstrating Impact of Faculty Development Programs: From Experience Comes Wisdom
Roundtable Presentation 129 to be held in Suwannee 20 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
James Eison, University of South Florida, jeison@coedu.usf.edu
Yenni Djajalaksana, University of South Florida, ydjajala@mail.usf.edu
Jecky Misieng, University of South Florida, jmisieng@mail.usf.edu
Abstract: Experienced faculty developers who are leaders in the field of faculty development in higher education settings were contacted via an anonymous web-based survey to collect valuable and practical lessons learned from their vast experience. One of the questions was about assessing and demonstrating the impact of their development programs. Forty-two participants responded to the survey, and this presentation will highlight insights gained by grouping their responses into seven broad categories. Among the approaches suggested for assessing impact was keeping track of how many faculty attend training programs and what they say about the usefulness of the topics. To demonstrate impact, one piece of advice was to document everything in a detailed annual report and to 'plant' stories in local campus publications to maintain visibility.
Roundtable Rotation II: Promoting Change Through Internal and External Evaluations: Academic Center for Excellence
Roundtable Presentation 129 to be held in Suwannee 20 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Bonnie Smith, Chipola College, smithb@chipola.edu
Dan Kaczynski, Central Michigan University, dan.kaczynski@cmich.edu
Abstract: This presentation will critically discuss internal and external evaluation methods and results covering the first two years of a multi-year federal higher education Title III grant award. Two grant components will be highlighted: the Academic Center for Excellence (ACE) and Supplemental Instruction (SI). Of particular interest in this presentation will be the unique challenges the college has experienced due to the success and growth in student interest ACE has generated. Formative findings indicate that the success of ACE and SI is due to administrative support and buy-in from faculty in promoting a supportive culture for organizational change.
Roundtable Rotation I: Evaluation as a Management and Learning Tool for Neighborhood Change
Roundtable Presentation 130 to be held in Suwannee 21 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Della M Hughes, Brandeis University, dhughes@brandeis.edu
Abstract: This presentation will focus on methods and tools used by The Skillman Foundation evaluation team to assess individual, nonprofit and community capacities during the Readiness Phase of the Detroit Works for Kids Initiative (a ten-year investment in six neighborhoods in Detroit to improve outcomes for young people). Further, we will describe the challenges and opportunities of (1) embedding evaluation in the community for continuous improvement when there are competing voices and interests, and the neighborhoods are in the formative stages of their governance development processes; (2) engaging residents and other stakeholders in defining the capacities and the short-term and interim indicators for the long-term youth outcomes; and (3) measuring the strength of a system of supports and opportunities for youth when clear organizational, leadership and system baselines were not established at the outset of the initiative. The discussion will also focus on development of effective capacity for learning and data-driven decision making at the neighborhood and cross-neighborhood levels.
Roundtable Rotation II: Assessing Community Capacity to Develop and Implement Powerful Strategies: Tools, Methodology and the Influence of Evaluation on Practice
Roundtable Presentation 130 to be held in Suwannee 21 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Mary Achatz, Westat, maryachatz@westat.com
Scott Hebert, Sustained Impact, shebert@sustainedimpact.com
Abstract: This roundtable presents the tools and methodology that The Annie E. Casey Foundation developed and is using to assess community capacity to achieve tangible results for significant numbers of families and children in 10 communities nationwide. The tool includes indicators of community capacity along a developmental continuum to mark progress over time and to identify next steps. The methodology, which begins with a facilitated conversation with key stakeholders, uses a common set of questions to guide conversations across sites and to elicit concrete examples and evidence that support their assessments and that link investments in community capacity to improved outcomes for families and children. Discussion will include key decisions and processes in the development and refinement of the tool and methodology, the contributions of the assessment to continuous local learning and ongoing capacity building, and the challenges of analysis across the sites.
Session Title: Evaluating Technology Training and Development Initiatives
Multipaper Session 131 to be held in Wekiwa 3 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Jane A Rodd, University at Albany - State University of New York, eval@csc.albany.edu
Session Title: In Living Color: Black Women in Evaluation, Teaching and Praxis
Panel Session 132 to be held in Wekiwa 4 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Tristi Nichols, Manitou Inc, tnichols@manitouinc.com
Discussant(s):
Anna Madison, University of Massachusetts Boston, anna.madison@umb.edu
Abstract: The panel includes four presentations highlighting the need for understanding the role of context and evaluation practice in academic and other professional settings. Each presentation focuses on expanding evaluation in educational and professional settings. Redefining evaluative practices within traditional academic settings will be discussed in The Intersection of Race and Gender in the Ivory Tower: Evaluating Academic Life for Women of Color. Incorporating evaluation practices in asynchronous academic settings will be discussed in Evaluation in Asynchronous Learning Environments: Examining the Experiences of African American Women. Evaluation Practice in Schools of Social Work: Is It Deemed a Scholarly Activity? provides a discussion on the intersectionality of evaluation practice as a scholarly activity and its impact on African American women faculty. Finally, Black Women at the Evaluation Cross provides a discussion regarding the contribution of Black women to the evaluation profession. The influence of race/gender in academic/evaluation settings will also be discussed.
Session Title: Engaging Participants in the Evaluation Process: A Participatory Approach
Multipaper Session 133 to be held in Wekiwa 5 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Corina Owens, University of South Florida, cowens@coedu.usf.edu
Session Title: Measuring Interdisciplinarity and Mapping Research Emphases From Research Publications
Demonstration Session 134 to be held in Wekiwa 6 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Alan Porter, Georgia Institute of Technology, alan.porter@isye.gatech.edu
Abstract: Research evaluation frequently addresses research outputs, e.g., papers deriving from programmatic funding, or comparison of multiple research centers. Over the past several years we have developed two useful empirical tools: Integration and Specialization scores to characterize the degree of interdisciplinarity, and science overlay maps to depict which research fields are engaged. This demonstration takes you through the process of generating these: (1) extracting journal titles (e.g., from database searches or proposal references); (2) associating those with Web of Science Subject Categories; (3) calculating several comparative interdisciplinarity metrics [especially Integration, Specialization, Shannon and Herfindahl diversity, coherence, and team size]; (4) generating overlays on base science maps (which can incorporate the social sciences); and (5) tracking multi-stage research knowledge flows: (a) integration of knowledge (reflected by cited references), (b) dissemination of findings (via publications), and (c) diffusion of findings (indicated by citing of those publications).
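For readers unfamiliar with the diversity metrics named in step (3), the short Python sketch below shows one way Shannon and Herfindahl diversity can be computed from the Web of Science Subject Categories associated with a body of work; the category names and counts are invented for illustration and are not the presenter's data or software.

# Illustrative sketch only: category labels and counts are made up for the example.
from math import log

def diversity_metrics(category_counts):
    """Shannon diversity and Herfindahl concentration over subject-category counts."""
    total = sum(category_counts.values())
    proportions = [n / total for n in category_counts.values() if n > 0]
    shannon = -sum(p * log(p) for p in proportions)   # larger value = more diverse
    herfindahl = sum(p * p for p in proportions)      # larger value = more concentrated
    return shannon, herfindahl, 1 - herfindahl        # 1 - Herfindahl is a common diversity form

cited_categories = {"Ecology": 12, "Statistics & Probability": 5, "Economics": 3}
print(diversity_metrics(cited_categories))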
Session Title: Enhancing Organizational Learning With Technology: Implications of Diversity, Improving Response Rates, and Increasing Evaluation Capacity
Multipaper Session 135 to be held in Wekiwa 7 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Rebecca Culyba, Emory University, rculyba@emory.edu
Discussant(s):
Nathan Balasubramanian, Centennial Board of Cooperative Educational Services, nbala@cboces.org
Session Title: Developing Effective Surveys
Multipaper Session 136 to be held in Wekiwa 8 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Lija Greenseid, Professional Data Analysts Inc, lija@pdastats.com
Session Title: Indigenous Peoples in Evaluation TIG Business Meeting
Business Meeting Session 137 to be held in Wekiwa 9 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
TIG Leader(s):
Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
Kaylani Rai, University of Wisconsin Milwaukee, kaylyanir@uwm.edu
Joan LaFrance, Mekinak Consulting, lafrancejl@gmail.com
Session Title: Timing Considerations in Evaluating Corrections Programs: How Context, Maturity of Program and Audiences Can Influence the Evaluation Process
Panel Session 138 to be held in Wekiwa 10 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the Crime and Justice TIG
Chair(s):
Marian Kimball Eichinger, The Improve Group, mariane@theimprovegroup.com
Abstract: While recidivism rates are often the primary indicator of success in corrections programs, a valuable and relevant question is whether it is possible to utilize alternative measures and designs to determine "success" in such programs. We will host a lively discussion on the successes and challenges of evaluating release programs in corrections. Three evaluators will bring their experience and expertise in evaluating corrections programs to discuss how timing can influence an evaluation by way of data-gathering context, program maturity, reporting audience, and relevant outcomes. Our panel will facilitate this discussion using examples from our own evaluation experiences that had differing data collection contexts, levels of program maturity, and end results that uncovered both unique and standard outcomes of success.