|
Session Title: Valuing Participation in a Multi-site, Multi-method Intervention Trial of Housing First in Five Canadian Cities
|
|
Multipaper Session 802 to be held in California B on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Jayne Barker, Mental Health Commission of Canada, jbarker@mentalhealthcommission.ca
|
| Abstract:
There are many ways in which participation can be part of a pragmatic randomized multi-site trial of a complex intervention. For the results to influence policy and practice, it is important to incorporate this approach throughout the project. This session will describe the experiences to date of a $110 million, five-year study that began in 2008. An overview of the study will set the stage for papers describing how the implementation of the Housing First intervention was adapted in response to local needs, how a local research team has operated within a common national framework, and how a consumer panel has supported the involvement of persons with lived experience of homelessness and mental illness.
|
|
An Overview of the At Home/Chez Soi Evaluation Project
|
| Paula Goering, University of Toronto, paula_goering@camh.net
|
|
This paper will describe the overall project design and the rationale for key scientific and operational decisions in the first and only multi-site trial of Housing First. The core study aims to learn about the feasibility, effectiveness and costs of implementing Housing First programs in varied contexts. It will randomize 2000 participants who are homeless and have a mental illness into treatment as usual or one of two intervention arms: Housing First plus Assertive Community Treatment for those with high needs, and Housing First plus Intensive Case Management for those with moderate needs. An adaptation in one mid-size city with 200 participants will have a single Housing First intervention arm for a combined-need group. Another 300 participants will receive unique local interventions. Recruitment was completed in the spring of 2011. A common mixed-methods protocol includes assessments every six months over two years of follow-up.
|
|
Seeking to Reconcile At Home/Chez Soi National and Local Research Objectives in Montreal
|
| Eric Latimer, eric.latimer@douglas.mcgill.ca
|
|
The At Home/Chez Soi project was designed and funded at the outset to allow local sites the opportunity to address questions of local interest in addition to a set of core questions common to all sites. This presentation will describe how Montreal investigators interacted with local providers as well as the national research team to produce a unique set of questions and methods in addition to those specified at the national level; how the conduct of the research has involved a constant give and take between the national investigators and the local team, both researchers and research staff; and indeed how input from individual sites often led to changes in policy across all sites. Several examples will be given, ranging from the description of the genesis of specific sub-studies, to the selection of psychometric instruments, to agreements regarding publications.
|
|
Managing the Implementation of At Home/Chez Soi as a Collaborative Exercise
|
| Cameron Keller, Mental Health Commission of Canada, ckeller@mentalhealthcommission.ca
|
|
Implementation of At Home/Chez Soi across sites included identifying and contracting with service agencies to provide housing and service supports as well as providing technical assistance and training about Housing First. Attention was paid to effective strategies for mental health best practice implementation. However, implementation also varied somewhat across sites due to a variety of factors, including political influences, the size and capacity of service providers, local cultural needs and influences, local support offered by the funder (Mental Health Commission of Canada), and the presence of a local "champion." In all cases, collaboration between multiple research and service agencies during development and implementation was essential for success, although the extent and type of collaboration varied. This paper describes the critical role of the "Site Coordinator," a change agent position developed to facilitate and support collaborations that were sensitive to local city context and yet shared a common purpose and form.
|
|
Experience Matters: Making the At Home/Chez Soi National Consumer Panel
|
| Jijian Voronka, University of Toronto, jvoronka@mentalhealthcommission.ca
|
|
This presentation begins with the theoretical imperative for the involvement of people with lived experience (PWLE) of mental health and homelessness in research and service provisions. I will then talk about the practical implementation considerations when developing the National Consumer Panel, a group constituted and run by PWLE, which acts in an advisory capacity to the project. I will further discuss the ways in which the Panel has worked to meaningfully inform a diversity of topics, including evaluation of the relevance and appropriateness of research scales, concerns about media representation, and supporting PWLE as advisors and project employees. I will end by discussing the importance of the Panel generating self-initiated measures, such as training recommendations, knowledge exchange, and the writing of discussion reports that are used to inform the project as a whole.
|
|
Session Title: Enhancing the Quality of Evaluation Design and Data Collection Tools Through Peer Review
|
|
Think Tank Session 803 to be held in California C on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Independent Consulting TIG
|
| Presenter(s):
|
| Sally Bond, The Program Evaluation Group LLC, usbond@mindspring.com
|
| Discussant(s):
|
| Courtney Malloy, Vital Research LLC, courtney@vitalresearch.com
|
| Abstract:
Since AEA 2004, the Independent Consulting TIG has operated a Peer Review Process for its members. The current co-chairs of the IC TIG's Peer Review are expanding the process to include reviewing evaluation designs and data collection tools. Draft protocols were presented in a think tank at the 2010 annual meeting, where participants reviewed and provided feedback on them. The co-chairs will present the revised frameworks, including an expanded set of protocols for reviewing different types of data collection tools (e.g., surveys, interviews, focus groups, and observations). The purpose of the think tank is to invite additional comments and to prepare volunteers to pilot the protocols in 2012.
|
|
Session Title: Narrative Control, the Appliance of Science and the Challenge for Independent Evaluation
|
|
Panel Session 805 to be held in Pacific A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Evaluation Policy TIG
and the Presidential Strand
|
| Chair(s): |
| Saville Kushner, University of the West of England, saville.kushner@uwe.ac.uk
|
| Discussant(s):
|
| Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
|
| Abstract:
Democracy thrives on the proliferation of narratives - a plurality of possible explanations for social and political phenomena. As Chantal Mouffe suggests, argument is the life-blood of democracy, consensus its death-knell. Panelists will take a critical, sometimes dismaying, look at the balance between narrative control (the imposition of single narratives - climate change, economic crisis, educational achievement) and the possibility of narrative contestation. Are evaluators independent of narrative control? Is it our obligation to proliferate explanations - sometimes to be 'inconvenient'? Can we? Are we lost in a sea of consensus? Does narrative control dissolve in localism?
|
|
Speaking Truth to Power
|
| Eleanor Chelimsky, Independent consultant, eleanor.chelimsky@gmail.com
|
|
As evaluators beginning a new study, we often find that the question we are asked rests on demonstrably inaccurate assumptions. Or we discover, during the execution of the work, that what we are finding is incongruent with widely shared conventional wisdom. Still further, at the end of the study, we may be faced with the difficulties of airing conclusions that are inconvenient to various stakeholders. In some of these cases, we are dealing with "the single narrative," often officially sponsored, which may or may not have been purposely distorted. I propose to examine some experience in the field with regard to this problem, and to sketch out some realistic strategies and tactics for dealing with it.
|
|
|
The Narrative Control of Educational Standards
|
| Robert Stake, University of Illinois, stake@illinois.edu
|
|
As I write this, people are on the streets of Yemen, Libya, Bahrain, Iraq, Algeria, Morocco, Jordan and Oman, protesting central control, supposing democracy will give them more control of their lives. Democracy is a weak hope if it merely passes control from dictators to mayors, economists, and parents aspiring to place their children in elite schools. Educational standards are the narrative control over the lives of parents, teachers and children, insisting on conformance and adherence to the indicators conceived by scientists, in this case, educational evaluators. Strong democracy is the provision of conditions by which local education might thrive. Absent in America is the protest by which democracy might put the control of education into the narratives of teachers and learners. When I speak with this panel, I will have current examples of people applying new media to challenge the constraints of narrative control.
| |
|
Independent Evaluation: 'Busted Flush' or Act of Resistance
|
| Saville Kushner, University of the West of England, saville.kushner@uwe.ac.uk
|
|
I know there is no economic crisis in the UK - and yet I watch helplessly as a financial coup d'etat is staged by investment bankers. I know climate change has the status only of a hypothesis, and yet I watch as my children's sense of future is stolen away. And I know science is just another base sociology, but I watch as it struts its privileges. Postmodernism was to deliver us from Grand Narratives, dissolve control in an intellectual nursery of playfulness. But all the while the spiders were busy, spinning webs of narrative control, sewing up the media, tightening regulatory systems and performance-managing dissent. We live in a scary world where government has interests independent of its citizenry. Evaluation - what Cronbach called "the process by which society learns about itself", the convenor of public debate - is either a busted flush or an act of resistance.
| |
|
Session Title: Review of the Guiding Principles: How Do (or Don't) the AEA Guiding Principles Inform Your Evaluation Practice?
|
|
Think Tank Session 806 to be held in Pacific B on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s):
|
| Rebecca Woodland, University of Massachusetts, Amherst, rebecca.woodland@educ.umass.edu
|
| Discussant(s):
|
| Marilyn Ray, Finger Lakes Law & Social Policy Center, mlr17@cornell.edu
|
| Abstract:
Session participants will partake in an interview process to discuss their perspectives and share their experiences with regard to the AEA's Guiding Principles. Small-group dialogue will be facilitated using questions similar to the following:
1. To what extent and in what ways have you found AEA's Guiding Principles helpful in your practice?
2. In what ways, if any, do you take exception to any of the principles?
3. In what ways, if any, would you revise or modify the principles?
4. What suggestions or recommendations do you have concerning the improvement and value of the AEA Guiding Principles?
The data that you provide through this Think Tank session will be collected, analyzed, and used by the GP Review Task Group to inform decisions about how to make meaningful improvements to the AEA Guiding Principles.
All AEA members are strongly encouraged to attend and to participate in this important process.
|
|
Session Title: Conducting High Quality Surveys: Frameworks and Strategies for Reducing Error and Improving Quality
|
|
Demonstration Session 807 to be held in Pacific C on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Presenter(s): |
| Lija Greenseid, Professional Data Analysts Inc, lija@pdastats.com
|
| Julie Rainey, Professional Data Analysts Inc, julie@pdastats.com
|
| Abstract:
Surveys are common tools for gathering evaluation information; however, they are only as useful as the quality of the data that are collected. This demonstration will provide a multi-faceted framework for thinking about survey quality that includes both the accuracy of the data and other important considerations such as usability, timeliness, and credibility. The concept of Total Survey Error will be introduced to identify the primary sources of error that can affect survey accuracy. Much of the demonstration will focus on advances in survey questionnaire design, sampling, and administration that can reduce survey error and increase survey quality. Examples of actual program evaluations in the field of tobacco control will illustrate how these concepts can be applied to improve evaluation practice.
|
|
Session Title: Does Improving Organizational Capacity Improve Client Outcomes?: A Review of Three Initiatives of The Colorado Trust
|
|
Panel Session 808 to be held in Pacific D on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Chris Armijo, The Colorado Trust, chris@coloradotrust.org
|
| Abstract:
In 2005, The Colorado Trust invested $26.1 million in three initiatives: Equality in Health, the Healthy Aging Initiative and the Partnerships for Health Initiative. Each initiative had a distinct emphasis, but all three hypothesized that improving organizational capacity would increase the number of services provided and/or demonstrate improvements in client health outcomes. The underlying assumption was that high-capacity organizations would ultimately yield measurable improvement in specific health indicators. This panel, which includes program and evaluation staff from the foundation and the external evaluation firms, will discuss adaptations of the initiatives over time, evaluation methods and findings, assumptions about capacity building, grantee outcomes and the evaluation capacity of grantee organizations. In particular, the presenters will address the following questions: Whose capacity are we building and measuring? What conditions should be present for successful capacity building? What is the link between capacity building and client outcomes, and how can an evaluation address both?
|
|
A Reflection on Capacity Building Grantmaking and Evaluation: Using Our Successes and Lessons Learned on Strategies to Improve Client Outcomes
|
| Chris Armijo, The Colorado Trust, chris@coloradotrust.org
|
|
The Colorado Trust invested significant resources in three grantmaking strategies to understand the correlation between capacity building and client outcomes. A number of lessons were learned about the clarity of the strategy from the perspectives of the foundation and the grantees, the ability of grantees to measure outcomes, challenges in grantees' participation in the initiative evaluation, and the appropriate timeline for including evaluators in the design of an initiative strategy. In addition to the aforementioned lessons, these three initiatives yielded a number of design and evaluation learnings pertaining to the hypothesis that improving capacity leads to better outcomes. The presenter will discuss the successes and lessons learned as well as share thoughts on capacity building grantmaking for the future.
|
|
|
Finding the Link Between Community Collaboration and Improving the Health of Communities: A Retrospective View of the Partnerships for Health Initiative Evaluation
|
| Phillip Chung, Colorado Trust, phillip@coloradotrust.org
|
|
Due to a largely fragmented and underfunded public health system in Colorado, the Partnerships for Health Initiative was developed to improve agency collaboration in ways that would result in better health outcomes in communities. The Colorado Trust funded 13 organizations statewide, ranging from hospitals and safety net clinics to substance abuse coalitions and mental health collaboratives. The panelist will discuss the evaluation framework for the initiative, changes to the evaluation, and the case study approach used. Additionally, key findings will be shared, as well as measurement challenges in linking community efforts to the health of the community.
| |
|
Are Culturally Competent Organizations Better Equipped to Reduce Racial/Ethnic Health Disparities?: Issues of Measurement and Metrics from the Equality in Health Initiative
|
| Kien Lee, Community Science, kien@communityscience.com
|
| LaKeesha Woods, Community Science, lwoods@communityscience.com
|
|
In 2005, The Colorado Trust funded a $13.1 million initiative to reduce racial/ethnic health disparities. The grant strategy sought to answer three key evaluation questions: "Does the cultural competency of the grantees change over time? If so, how does this change influence the grantees' health disparities interventions and their logic model outcomes? And what factors and conditions should be in place for an organization to bring about positive changes in cultural competency?" Results of the evaluation will be discussed, with an emphasis on measuring organizational cultural competency, linking it to program-level outcomes, and the methodological challenges of data collection.
| |
|
Are Higher Capacity Organizations Better Equipped to Deliver Senior Services with Better Outcomes?: An Evaluation Case Study from the Healthy Aging Initiative
|
| Erin Caldwell, National Research Center Inc, erin@n-r-c.com
|
|
Due to the large and growing aging population in Colorado, The Colorado Trust invested in 20 senior-serving organizations throughout the state. Each organization was awarded a four-year grant to fund new or existing senior services and to build its organizational capacity. The primary evaluation question was "does providing technical assistance to build the capacity of grantee organizations help to improve their ability to serve seniors?" The presenter will discuss methods for measuring improvements in senior services and organizational capacity, and the challenges of initiative- and program-level evaluation. Additionally, results will be shared on the links between organizational capacity building and senior services.
| |
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Challenges in Juvenile Justice Treatment Program Evaluations |
|
Roundtable Presentation 809 to be held in Conference Room 1 on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Crime and Justice TIG
|
| Presenter(s):
|
| Jodi Petersen, Michigan State University, jpete@msu.edu
|
| Christina Campbell, Michigan State University, campb547@msu.edu
|
| Valerie Anderson, Michigan State University,
|
| William Davidson, Michigan State University, davidso7@msu.edu
|
| Abstract:
Evaluations within the juvenile justice system are essential for ensuring adherence to best practice treatment guidelines and improving outcomes for youth. Such evaluations are not without challenges, though. This presentation will discuss the roadblocks faced in one county court that is striving to require process and outcome evaluations of interventions for youth. The challenges include getting buy-in for a top-down mandated evaluation, improving programs' data collection and management techniques, interpreting results when multiple outcomes are desired, and deciding what to do when no programs seem to work. Methods of addressing these challenges will also be discussed. This presentation will also offer an opportunity for presenters and participants to exchange ideas on how to overcome challenges such as these and complete rigorous evaluations in the juvenile justice field.
|
| Roundtable Rotation II:
Successful Strategies for Gathering Data from Law Enforcement Officers |
|
Roundtable Presentation 809 to be held in Conference Room 1 on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Crime and Justice TIG
|
| Presenter(s):
|
| Pamela Powell, University of Nevada, powellp@unce.unr.edu
|
| Marilyn Smith, University of Nevada, smithm@unce.unr.edu
|
| Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
|
| Abstract:
Law enforcement officers are likely to be the first responders to a domestic dispute. Understanding the issues surrounding domestic violence (DV), as well as the resources available to help the victim, affects how law enforcement can enhance a victim's capacity to break the cycle of violence (Renzetti et al., 2001). While appropriate DV training for officers is imperative to enhance their knowledge, skills and behaviors, program evaluation is crucial to the development and adaptation of successful programs. Few DV training programs have been able to collect this impact evaluation data from officers. This presentation will provide an overview of successful strategies employed to obtain evaluation data from law enforcement officers attending domestic violence training. Presentation attendees will receive copies of subsequent publications which describe training content, evaluation methodologies, and evaluation findings.
|
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Assessing Evaluation Implementation: Reflections From Communities Putting Prevention to Work Nutrition, Physical Activity and Obesity States |
|
Roundtable Presentation 810 to be held in Conference Room 12 on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s):
|
| Lynn Mongkieu Huynh, Centers for Disease Control and Prevention, jso4@cdc.gov
|
| Syreeta Skelton, ICF Macro, sskelton@icfi.com
|
| Abstract:
The CDC's Division of Nutrition, Physical Activity and Obesity, in collaboration with ICF Macro, is conducting a systematic assessment of evaluation implementation among states operating initiatives under Communities Putting Prevention to Work (CPPW), a federally funded program working to decrease the prevalence of obesity and tobacco use. This roundtable presents reflections on implemented evaluations among CPPW states and territories making policy and environmental changes within nutrition, physical activity and obesity. The systematic assessment focuses on 'high performing' grantees (e.g., those with diverse stakeholders, plans for use, and pre/post evaluation of intervention changes) and attempts to document promising practices and lessons learned in successful policy and environmental change evaluation, to increase knowledge of essential components and steps. High performing grantee selection methods, key lessons, and results from the assessment will be discussed. In addition, we will explore the role of evaluation technical assistance with respect to evaluation planning, implementation, and results dissemination.
|
| Roundtable Rotation II:
Dinosaurs, Astronomy, and...Heart Health? Oh My! Lessons Learned from Evaluating Museum-Based Community Health Education |
|
Roundtable Presentation 810 to be held in Conference Room 12 on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s):
|
| Elizabeth Danter, Institute for Learning Innovation, danter@ilinet.org
|
| Claudia Figueiredo, Institute for Learning Innovation, figueiredo@ilinet.org
|
| Abstract:
Heart Smart is a wellness-focused science museum exhibit. It addresses the role of behavior in heart health by emphasizing nutrition, physical activity, and stress management through computer interactive stations and interpretive panels. The stations offer personalized feedback to users on measures including blood pressure, waist size, body-mass index, and lifestyle choices. Heart Smart provides a venue for visitors' knowledge, attitudes, and behaviors to change, and the evaluation focused on measuring whether healthy behavior changes occurred as a result of spending time in the exhibit. Lessons learned included issues such as the suitability of the host organization, Heart Smart's competition with other exhibits, the validity of health results, and challenges in working with a culturally diverse, multigenerational audience. Positive outcomes included the success of targeted lifestyle choice feedback, the nonjudgmental anonymity of the experience, and the increase in visitors' self-efficacy for adopting healthy behaviors.
|
|
Session Title: New Frontiers in International Development Evaluation: Key Challenges and Lessons Learned in Evaluating Online Communities of Practice (CoP)
|
|
Panel Session 811 to be held in Conference Room 13 on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Michele Tarsilla, Western Michigan University, michele_tarsilla@yahoo.com
|
| Abstract:
Policy-makers in developing countries are increasingly confronted with a host of unprecedented political, social, economic and environmental challenges. In response to this scenario, a need for better strategic planning as well as results-focused and evidence-informed management has emerged quite vigorously, both within national governments and civil society.
To this end, international donors have sponsored a number of online capacity-building initiatives aimed at enhancing countries' ownership and south-south cooperation in two key areas: Monitoring and Evaluation (M&E) and Management for Development Results (MfDR). The International Program for Development Evaluation Training (IPDET) listserv, the My Monitoring and Evaluation Knowledge Management Platform (MYM&E) and the African Community of Practice on Managing for Development Results (AfCoP) offer some good examples.
As no systematic evaluation of such initiatives has been conducted to date, panelists will share some tools and methods as well as an innovative framework for evaluating online Communities of Practice.
|
|
Evaluating the Effects of the African Community of Practice on Management for Development Results (AfCoP-MfDR): Key Issues and Findings
|
| Michele Tarsilla, Western Michigan University, michele_tarsilla@yahoo.com
|
|
African policy-makers are confronted with a host of unprecedented social, economic and environmental challenges today. In response to the current scenario, donors and civil society across the continent are increasingly calling for the mainstreaming of results-oriented and evidence-informed practices in national strategic planning and management processes. To this end, donors have sponsored a number of international online capacity building initiatives fostering country ownership and enhancing south-south cooperation in the areas of Management for Development Results (MfDR) and Monitoring and Evaluation (M&E). The African Community of Practice on Managing for Development Results (AfCoP-MfDR), established in 2007, is a good illustration. Michele Tarsilla, an Evaluation and Strategic Planning Specialist based in Africa, will discuss the key methods and tools used during the AfCoP-MfDR retrospective evaluation and will share with the audience an innovative framework for evaluating the impact of online Communities of Practice (CoPs) in international development contexts.
|
|
|
Evaluating the Effects of IPDET's Listserv as a Community of Practice: Issues and Findings
|
| Linda Morra Imas, World Bank, lmorra@worldbank.org
|
|
The International Program for Development Evaluation Training (IPDET) is an international evaluation capacity building program, sponsored by the Independent Evaluation Group of the World Bank and Carleton University, which has been operating since 2001 and has had almost 3000 participants from 105 countries around the world. One of its key features is the IPDET listserv, an online community of practice of IPDET participants and instructors. In 2010, an impact evaluation of the program was conducted. Linda Morra Imas, Co-Founder and Co-Director of IPDET, will present some of the issues and challenges in trying to evaluate the effects of a community of practice like IPDET's, how the evaluation addressed them, and specific findings on the effects of IPDET's community of practice.
| |
|
Session Title: Values and Valuing in Collaborative, Participatory, and Empowerment Evaluation
|
|
Multipaper Session 812 to be held in Conference Room 14 on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Mary Achatz,
Westat, maryachatz@westat.com
|
|
Changes in Participant Values Resulting from Public Engagement Processes on Public Health Issues
|
| Presenter(s):
|
| Mark DeKraai, University of Nebraska, mdekraai@nebraska.edu
|
| Denise Bulling, University of Nebraska, Lincoln, dbulling@nebraska.edu
|
| Tarik Abdel-Monem, University of Nebraska, tabdelmonem@nebraska.edu
|
| Stacey Hoffman, University of Nebraska, shoffman@nebraska.edu
|
| Abstract:
Increasingly, public agencies are engaging citizens and stakeholders in an effort to answer complex values-related policy questions, and to advance transparency in government. This effort is based on the belief that public participation in policy decisions will result in policies that are more effective and more acceptable to the general population, and that in some manner, engagement of the public will add value to the process or outcome of policy making. This paper summarizes results of six nationwide evaluations of U.S. Centers for Disease Control-sponsored public engagement projects: four involving federal policy on pandemic influenza, one on federal vaccine policy, and one involving multiple state policies on pandemic influenza. Results indicate that public engagement processes produce changes in participant values and perspectives; however, public engagement does not result in greater levels of agreement on social values among participants. We will attempt to explain these seemingly disparate results.
|
|
Understanding the Value of Youth Participatory Evaluation: Developing an Instrument to Measure Impact on Youth
|
| Presenter(s):
|
| Mary Arnold, Oregon State University, mary.arnold@oregonstate.edu
|
| David White, Oregon State University, david.white@oregonstate.edu
|
| Abstract:
During the past 10 years there has been increasing interest in youth participatory evaluation (YPE), which involves youth in evaluating the programs that affect them. This evaluation strategy is, in and of itself, a youth development activity. Theorists and practitioners who advocate for youth-led evaluation often cite the potential transformational and practical benefits of the practice to youth and the programs they help evaluate. These benefits include the development of leadership skills, youth empowerment, youth-adult mentoring, analysis and communications skills, and community action. Despite the theoretical evidence of these benefits, no instrument exists for measuring the impact of YPE on youth. As such, there is currently no established way to measure and express the value of YPE as an effective youth development strategy. This paper will describe a newly developed instrument for measuring the impact of YPE on youth, as well as the theoretical framework that supports the instrument's development.
|
|
Codifying Values: Developing Evaluation Tools to Enhance Decision-making
|
| Presenter(s):
|
| Laurel Sipes, MPR Associates Inc, lsipes@mprinc.com
|
| Beverly Farr, MPR Associates Inc, bfarr@mprinc.com
|
| Abstract:
This paper will explore the idea that the process of developing evaluative tools, such as rubrics, is fundamentally an act of exploring, defining, and prioritizing values. Using two examples of rubric development from on-the-ground evaluations active within the last two years, the authors will describe the often labor-intensive, iterative process of unearthing values for the purpose of informing the development of evaluation tools. Bringing stakeholders to the table, reviewing extant documentation, facilitating delicate conversations about values and their relative weight, building stakeholder consensus about the criteria and scales to be used, and finally documenting these ideas in a clear, easy-to-use tool can be a daunting, but also rewarding and hopefully useful service offered by an evaluator.
|
| | |
|
Session Title: Case Studies in Evaluation Use and Influence
|
|
Multipaper Session 813 to be held in Avila A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Evaluation Use TIG
|
| Chair(s): |
| Jacqueline Stillisano,
Texas A&M University, jstillisano@tamu.edu
|
| Discussant(s): |
| Jacqueline Stillisano,
Texas A&M University, jstillisano@tamu.edu
|
|
Applying an Ecological Model to the Comparison of Two Case Studies of Evaluation Use
|
| Presenter(s):
|
| Judith Ottoson, Independent Consultant, jottoson@comcast.net
|
| Diane Martinez, The American Institutes for Research, diane.j.martinez@gmail.com
|
| Laura Leviton, The Robert Wood Johnson Foundation, llevito@rwjf.org
|
| Abstract:
Two case studies of evaluation use, sponsored by the Robert Wood Johnson Foundation, are compared using the Ecological Model of Evaluation Use. Following a literature review and using Yin's approach to case study methodology, study questions were clarified, conceptual frameworks identified, and two study designs initiated. Use of the Active for Life and Covering Kids and Families evaluations were explored through snowball samples of twenty-three and twenty-six key informants respectively, as well as use-related artifacts. Study findings were member-checked with informants. Strikingly different patterns of use and non-use were found between the two cases. Findings confirmed familiar types of evaluation use, such as conceptual use, and added new ones, such as valuing use. The interrelationship among use categories was explored sequentially by tracing antecedents and sequels of use, as well as leveraged use across sectors, time, and stakeholders. An ecological understanding complements current approaches to the study of evaluation use.
|
|
Lessons From Applying Developmental Evaluation Approaches to Heart and Stroke Foundation's Spark Together for Healthy Kids
|
| Presenter(s):
|
| Jennifer Yessis, University of Waterloo, jyessis@uwaterloo.ca
|
| Barb Riley, University of Waterloo, briley@uwaterloo.ca
|
| Sharon Brodovsky, Heart and Stroke Foundation of Ontario, sbrodovsky@hsf.on.ca
|
| Shirley Von Sychowski, Heart and Stroke Foundation of Ontario, svonsychowski@hsf.on.ca
|
| Lisa Stockton, Propel Centre for Population Health Impact, lstockton@uwaterloo.ca
|
| Michelle Halligan, Heart and Stroke Foundation of Ontario, mhalligan@hsf.on.ca
|
| Abstract:
Spark Together for Healthy Kids (Spark) targets childhood obesity using a population approach, and focuses on creating environments to increase physical activity and healthy eating. A developmental evaluation approach has been used during Spark's first five years, to inform Spark strategy and examine markers of progress that are linked to changes in environments and behaviours. Early products of this evaluation include a theory of change, multi-method design, two evaluation reports, and targeted evidence synthesis, all of which are informing Spark strategy and evaluation. Presenters will share evaluation lessons emphasizing the importance of sustained and trusted relationships between evaluators and partners, the relevance of evaluation questions to decision making, and use of the most rigorous methods to answer the most relevant questions. Presenters will also raise conundrums about this approach such as how developmental evaluation serves needs for 'accountability', the education needed regarding evaluation paradigms, and how 'outcomes' need to be reframed.
|
| |
|
Session Title: Building an Effective Early Childhood System: Factors Facilitating and Inhibiting Systems Change Strategies
|
|
Multipaper Session 814 to be held in Avila B on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Chair(s): |
| Pennie Foster-Fishman, Michigan State University, fosterfi@msu.edu
|
| Abstract:
There is a growing recognition that communities need to develop a quality early childhood system in order to ensure that all children are ready to learn by the age of five. Under the leadership of the Early Childhood Investment Corporation, every community in Michigan has developed a multi-stakeholder collaborative and a parent coalition to promote this systems change. Together these two entities work collaboratively to build a more responsive community context and develop a more integrated and effective early childhood system. This session will describe the evaluation of these efforts to date and highlight what we have learned about the factors facilitating effective system change pursuits within the early childhood arena.
|
|
Effective Systems Change Strategies for Developing an Integrated and Responsive Early Childhood System
|
| Pennie Foster-Fishman, Michigan State University, fosterfi@msu.edu
|
|
Under the leadership of the Early Childhood Investment Corporation, every community in Michigan has developed a multi-stakeholder collaborative that is charged with building an effective early childhood system and a Parent Coalition that is responsible for promoting public will and grassroots mobilization around the early childhood agenda. To examine the effectiveness of this joint collaborative/community organizing approach, survey data were collected in 2010 from 2137 collaborative/coalition members and key outside community members across Michigan. This paper explores the key findings from our HLM analysis of the multi-level factors related to the development of a more responsive community context and a more integrated and effective early childhood system. Findings highlight the importance of service delivery network structure, readiness for change, advocacy and policy change, collaborative capacity, and partnership development.
|
|
Multilevel Evaluation of an Early Childhood System Building Initiative: Theoretical and Practical Implications for Aggregating Data to Higher Levels of Analysis
|
| Charles Collins, Michigan State University, colli443@msu.edu
|
| David Reyes-Gastulum, Michigan State University, reyesgas@msu.edu
|
| Goran Kuljanin, Michigan State University, gkuljanin@gmail.com
|
| Pennie Foster-Fishman, Michigan State University, fosterfi@msu.edu
|
|
Evaluation projects are increasingly employing data collection across multiple ecological levels, especially those that adopt a systems perspective. Due to this, evaluators are tasked with making decisions regarding the level at which data should be presented. These decisions have important implications for evaluation design, data collection, and analysis. Of particular importance are the ways in which data aggregation to higher levels of analysis can increase model misspecification. Taking a "levels" perspective, this presentation will first discuss conceptual issues of multilevel data aggregation including the assumptions, typologies, and properties of aggregated data. Second, utilizing evaluation data from a statewide early childhood system building initiative, we will provide examples of data analyses at multiple ecological levels and highlight how those data can provide unique insights while emphasizing the importance of attending to the issues and assumptions of data aggregation. This presentation will provide attendees with theoretical and practical implications of multilevel evaluations.
|
|
Understanding Perspective: The Importance of Disparate Stakeholder Viewpoints in Systems Change Evaluation Data
|
| David Reyes-Gastulum, Michigan State University, reyesgas@msu.edu
|
| Charles Collins, Michigan State University, colli443@msu.edu
|
| Goran Kuljanin, Michigan State University, gkuljanin@gmail.com
|
| Pennie Foster-Fishman, Michigan State University, fosterfi@msu.edu
|
|
Gaining multiple perspectives in the evaluation of a system building initiative can provide diverse insights into system functioning. Specifically, by collecting evaluation data from service providers, clients, and the broader community evaluators can gain a wealth of information regarding everything from internal functioning to community building activities. However, this can add to the complexity of data analysis due to group differences. This presentation will provide insights into the ways in which multiple groups can differ with regard to evaluation responses. Utilizing data from the evaluation of a system building initiative, we will give examples of group differences in stakeholder perspectives at multiple levels of analysis and how these differences were incorporated into the analysis. Moreover, we will highlight the importance of utilizing multiple points of view in evaluating a systems building initiative. We will conclude by providing insights gained from utilizing multiple stakeholder perspectives and the implications thereof.
|
|
Predicting Network Exchanges in Early Childhood Service Delivery Networks
|
| I-Chien Chen, Michigan State University,
|
|
While early childhood systems change efforts have prioritized the development of integrated and coordinated service systems, the desired level of coordination and collaboration has not yet been realized. This paper will explore the degree of exchanges, the factors influencing network characteristics, and the extent to which network characteristics are related to child/family outcomes in 55 early childhood collaborative structures in Michigan. Inter-organizational network data were collected using online and mail surveys in 2010 from organizations (N=1107) participating as members in 55 multi-stakeholder early childhood collaboratives in Michigan. Organizational representatives were asked to report their frequency of referral, information, and resource exchanges with all other member organizations on a six-point scale ranging from "Never" to "Daily." Results of the study will be discussed in terms of their implications for evaluating systems change efforts.
|
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Using Citizen Science to Increase Organizational Capacity, Investment in the Evaluation Process, and Program Outcomes |
|
Roundtable Presentation 815 to be held in Balboa A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Presenter(s):
|
| Rachel Becker-Klein, PEER Associates, rachel@peerassociates.net
|
| Tina Phillips, Cornell University, tina.phillips@cornell.edu
|
| Abstract:
Increasingly, non-profit organizations will need to rely on the assistance of volunteers to meet their programmatic goals. However, in the absence of well-articulated objectives for enhancing volunteer experiences beyond one-time events, retention of volunteers may become more difficult. Citizen science, i.e., the intentional engagement of the public in scientific research (Bonney et al., 2009), has the potential to deepen volunteer experiences aligned to programmatic goals while also broadening the audience base to meet organizational goals. In addition, the process and goals of citizen science are well aligned with those of evaluation, so involving program staff and volunteers in conducting research may serve to increase their investment in evaluation. During this roundtable we will present a new NSF-funded Pathways project, operated by the National Audubon Society, that seeks to build and test various models for incorporating citizen science and mentoring opportunities in order to achieve organizational and programmatic goals.
|
| Roundtable Rotation II:
A Knowledge-Sharing Roundtable Designed Specifically to Strengthen Collaborative Evaluation Capacity Building Practices in Support of Social Change Work |
|
Roundtable Presentation 815 to be held in Balboa A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Presenter(s):
|
| Julie Poncelet, Action Evaluation Collaborative, julie_poncelet@yahoo.com
|
| Catherine Borgman-Arboleda, Independent Evaluation Consultant, cborgman.arboleda@gmail.com
|
| Rachel Kulick, Action Evaluation Collaborative, rakulick@gmail.com
|
| Abstract:
There is a need for strategic approaches and models to build the evaluation capacity of social change organizations. Evaluators are in a unique position to co-design and implement practical teaching and learning strategies that support the integration of evaluative thinking and practice into internal organizational reflection and planning. Practical approaches to collaborative evaluation capacity building must focus on key aspects of social change organizations - leadership development, community organizing and engagement, building alliances, networks and campaigns, advocacy, communications and shifting power - and the challenges in evaluating these relationship- and process-based changes. We will facilitate a discussion on the nature of social change work and organizations, and how evaluators can best engage to strengthen social change strategies and organizational capacity. Key topics: building data collection and reflection on what is already happening, collaborative analysis, and creating ways of reporting findings that meet both internal knowledge needs and funder accountability.
|
|
Session Title: Evaluating Foundation Strategy
|
|
Multipaper Session 816 to be held in Balboa C on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Victor Kuo,
FSG Social Impact Consultants, victor.kuo@fsg.org
|
|
Measuring Apples, Doctors, and Insurance Coverage with the Same Yard Stick: Innovative Evaluation in a Foundation Setting
|
| Presenter(s):
|
| Marisa Allen, Colorado Health Foundation, mallen@coloradohealth.org
|
| Kaye Boeke, Colorado Health Foundation, kboeke@coloradohealth.org
|
| Abstract:
Evaluation in foundation settings presents a number of challenges, chief among them the difficulty of assessing the collective impact of a diverse set of grants and ensuring that evaluation is truly used to guide strategic grantmaking. This paper presents a history of how the Colorado Health Foundation (TCHF) implemented an innovative evaluation model and documents evaluation's use and influence within the Foundation. While the primary benefit of implementing the model has been the ability to track the collective impact of a diverse set of grants, the process of implementing the framework has been instrumental in building the Foundation's own internal evaluation capacity. This paper documents how the implementation of this model promoted evaluative thinking and how the metrics in the model serve as a powerful mechanism to assess the collective impact of the Foundation's entire portfolio of grantmaking.
|
|
Restructuring the Human Services Sector: Evidence from an Innovative Pilot Initiative
|
| Presenter(s):
|
| Rob Fischer, Case Western Reserve University, fischer@case.edu
|
| Claudia Coulton, Case Western Reserve University, claudia.coulton@case.edu
|
| Diwakar Vadapalli, Case Western Reserve University, diwakar.vadapalli@case.edu
|
| Abstract:
Nonprofit agencies face increasing competition for scarce funding resources. Many agencies are considering ways to restructure themselves, often via mergers and acquisitions, as a way to become more effective and competitive. This study examines a pilot initiative in Cleveland, OH, in which funders supported nonprofits in the pursuit of significant restructuring efforts. Health and human service nonprofits were recruited into a three-phase facilitated pilot that assisted agency executive directors and boards in determining what type of restructuring was feasible and desirable. The study highlights key lessons from the initiative.
|
|
Foundation Performance Assessment: The State of Practice
|
| Presenter(s):
|
| Ellie Buteau, Center for Effective Philanthropy, ellieb@effectivephilanthropy.org
|
| Andrea Brock, Center for Effective Philanthropy, andreab@effectivephilanthropy.org
|
| Abstract:
When the topic of foundation evaluation practice arises, it is often in relation to foundations conducting or reviewing evaluations of their grantees' work. But what are foundation CEOs doing to evaluate their own foundations' performance? For foundation leaders, answering the question, 'How are we doing?' is anything but simple.
In a recent survey of CEOs of foundations with at least $5 million in giving, the majority of CEOs expressed the belief that foundations have made great progress over the last decade in being able to assess their effectiveness. Findings from this research indicate that while foundations are making concerted efforts to assess their performance, there is still progress to be made. This paper examines these issues and explores potential ways for foundations to continue to improve their work to assess their own effectiveness.
|
| | |
|
Session Title: How Practitioners and Researchers Can Use Models to Build and Sustain Evaluation Capacity
|
|
Panel Session 817 to be held in Capistrano A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| Steffen Bohni, Ramboll Management Consulting, sni@r-m.com
|
| Discussant(s):
|
| Hallie Preskill, FSG Social Impact Consultants, hallie.preskill@fsg.org
|
| Abstract:
In the past two decades, evaluation capacity building (ECB) has become an increasingly widespread practice among evaluation practitioners. Since the 2000 AEA conference theme on evaluation capacity building, research on what constitutes evaluation capacity and capacity building has increased markedly. Yet there has been little effort to consolidate approaches and models. The aim of this panel is twofold. First, contributors to this debate will convene to discuss the similarities and differences among their various evaluation capacity models and to value the various contributions. Second, they will discuss how evaluation capacity models can and should inform the practice of evaluation capacity building, and ultimately create value.
|
|
Evaluation Capacity: Conceptualizing and Measuring an Ambiguous Construct
|
| Steffen Bohni, Ramboll Management Consulting, sni@r-m.com
|
| Sebastian Lemire, Ramboll Management Consulting, setl@r-m.com
|
|
The purpose of this presentation is to outline a model of evaluation capacity and a tool to measure it, the evaluation capacity index (ECI), and to discuss their potential uses for evaluation. The point of departure is a recently published article in the American Journal of Evaluation (Nielsen, Lemire & Skov, 2011). First, the presenters will outline the model of evaluation capacity. Second, the processes and findings involved in validating the ECI will be described. Third, the presenters will engage in a discussion with the panel about diagnosing, building and sustaining evaluation capacity.
|
|
|
Using the Evaluation Capacity Assessment Instrument (ECAI) to Conceptualize, Measure and Build Evaluation Capacity
|
| Tina Taylor-Ritzler, Dominican University, tritzler@dom.edu
|
| Yolanda Suarez-Balcazar, University of Illinois at Chicago, ysuarez@uic.edu
|
|
The purpose of this presentation is to describe the components and validation results of the Suarez-Balcazar, Taylor-Ritzler and Garcia-Iriarte (2010) evaluation capacity building model, compare and contrast it with other ECB models, and discuss the uses of the Evaluation Capacity Assessment Instrument (ECAI). First, the presenters will describe the model components: individual factors (awareness of the benefits of evaluation, motivation, and competence), organizational cultural and contextual factors (leadership, learning climate and resources), and evaluation capacity outcomes (use of evaluation processes and findings). Second, the presenters will describe the processes used to validate the ECAI, including mixed methods and confirmatory factor analysis and structural equation modeling. Third, the presenters will engage in discussion with other panel members to identify commonalities among and differences between current ECB models. Finally, the presenters will talk about uses of the ECAI in practice, including to diagnose, build and sustain evaluation capacity.
| |
|
The Evaluation Capacity Building Checklist: A Tool to Build Evaluation Capacity
|
| Jean A King, University of Minnesota, kingx004@umn.edu
|
|
The ECB checklist (Volkov & King, 2005) resulted from an empirical study of evaluation capacity building in three organizations (a museum, a large non-profit, and a school district). The resulting model has three components: (1) organizational context (including internal and external power hierarchies, administrative culture, and decision-making processes); (2) ECB structures, i.e., the mechanisms in the organization that enable the development of evaluation capacity (an explicit ECB plan, related infrastructure, socialization into the evaluation process, and peer learning structures); and (3) resources to support the development of evaluation capacity. King will explicate these components, including an update based on recent literature and an ongoing study of capacity building in the school district that was part of the original research.
| |
|
Session Title: From Home Visiting to Homelessness: Using Evaluation to Measure Social Work Domains
|
|
Multipaper Session 818 to be held in Capistrano B on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Social Work TIG
|
| Chair(s): |
| Tracy Wharton,
University of Michigan, trcharisse@gmail.com
|
|
Values in the Evaluation of Homelessness in the United States and European Union
|
| Presenter(s):
|
| Verner Denvall, Linnaeus University, Sweden, verner.denvall@lnu.se
|
| Abstract:
Homelessness is a wicked social problem, exhibiting great resistance to current solutions while engaging various public sectors, organizations, agencies and levels simultaneously. How does this affect the evaluation of programs and projects that aspire to combat homelessness? Variations occur depending on the values held by stakeholders and evaluators.
Knowledge about the interaction between problems, evaluation methods and welfare state regimes is limited. Dissimilar pictures of, and different solutions to, homelessness in the U.S. and in E.U. countries will likely affect the recommendations that result from the evaluations performed.
This session will provide an analysis of such evaluations, focusing on criteria, values in practice, recommendations and relationships with stakeholders. The empirical base is a sample of the most cited evaluations of homelessness programs published in professional journals during the period 2005-2010.
|
|
Put Your Money Where the Parents Are: An Applied Cost Analysis of a Parenting Education Program to Prevent Repeat Maltreatment
|
| Presenter(s):
|
| Erin Maher, Casey Family Programs, emaher@casey.org
|
| Abstract:
This paper presents an applied cost analysis of a parenting education program (The Nurturing Parenting Program for Infants, Toddlers, and Preschoolers) in Louisiana for child welfare-involved parents. It builds on existing evaluation results, which showed a significant reduction in repeat maltreatment for those parents with greater participation in the program. Taking the perspective of the child welfare agency, with a focus on direct outcomes over a short time frame, we demonstrate a cost analysis approach that is conservative, useful to policymakers and advocates in the public child welfare field, and readily applicable in similar settings. Our results suggest that participation in a parenting education class is almost cost neutral relative to the savings associated with a reduced likelihood of a repeat maltreatment report. We include a discussion of how this cost approach differs from other types of economic analyses of social programs, along with its strengths and limitations.
|
|
Ensuring Parent Voice in a Statewide Capacity Assessment of Home Visiting Services
|
| Presenter(s):
|
| Rebecca Gillam, University of Kansas, rgillam@ku.edu
|
| Karin Chang-Rios, University of Kansas, kcr@ku.edu
|
| Annie McKay, University of Kansas, amckay@ku.edu
|
| Abstract:
Parent perspectives are increasingly important in planning, implementing and evaluating programs and services in the early childhood field. Through funding requirements, federal agencies have demanded an increased focus on parent leadership, yet ensuring that parent voice is included can be challenging. This presentation addresses these issues by documenting the process for ensuring parent voice in a statewide capacity assessment of home visiting services in Kansas. The capacity assessment used a qualitative methodology, resulting in over 45 hours of feedback. Process elements to be presented include community partnerships, parent recruitment, focus group protocol, and follow-up. Implications will be presented, suggesting that effective planning, implementation and evaluation of services requires recognizing and responding to parent needs and perspectives.
|
| | |
|
Session Title: Evaluating Integration: Evaluation of Aggregated Prevention Programs
|
|
Panel Session 819 to be held in Carmel on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
|
| Chair(s): |
| Mindy Dahl Chai, Wyoming Department of Health, mindy.dahl@health.wyo.gov
|
| Abstract:
Prevention programs are evaluated mainly by observing how well they address a problem. Health care reform has had a large impact on prevention: the Substance Abuse and Mental Health Services Administration (SAMHSA) has begun to integrate mental health, suicide, and substance abuse prevention. This shift, coupled with budget cuts, has led several states to integrate their systems and to recognize the relationships between areas such as mental health prevention and substance abuse prevention. Attendees of this panel will hear from evaluators and clients about methods used to integrate multiple prevention programs. Attendees will gain an understanding of how data documentation ties together different prevention programs, the vision of a state-level prevention program, and the impact these adjustments have on evaluation, including the lessons learned from integrating a rural state's prevention programming.
|
|
Connecting Diverse Prevention Programs Through Data Documentation
|
| Reese Jenniges, University of Wyoming, jenniges@uwyo.edu
|
| Humphrey Costello, University of Wyoming, hcostell@uwyo.edu
|
|
The first presentation concentrates on the rationale for integrated prevention programming. The panelist will discuss the data suggesting benefits of integrating prevention efforts, focusing on data that connect prevention programs to one another at the state and national levels. This includes a rich discussion of tobacco use and how it is specifically associated with mental health, alcohol dependence, and other areas of concern in health care. It is this documentation that helped inform the discussion of how Wyoming's tobacco prevention and control programs were integrated with the state's remaining health care prevention programs. The presentation will cover current findings from the tobacco research literature. Additionally, the panelist will discuss the results of preliminary analyses, including a logistic regression detailing the connections between tobacco use and several mental disorders.
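As an illustration of the general form of analysis the panelist describes, a minimal logistic regression sketch in Python appears below. It is illustrative only: the data file and variable names are hypothetical, not the panelists' data.

    # Logistic regression sketch: modeling current tobacco use as a
    # function of hypothetical mental health and demographic indicators.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey.csv")  # hypothetical state survey extract

    # tobacco_use, depression, alcohol_dependence: 0/1 indicators
    model = smf.logit("tobacco_use ~ depression + alcohol_dependence + age",
                      data=df)
    result = model.fit()
    print(result.summary())       # coefficients as log-odds
    print(np.exp(result.params))  # odds ratios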
|
|
|
Prevention Integration Among State Agencies: A Team Approach
|
| Mindy Dahl Chai, Wyoming Department of Health, mindy.dahl@health.wyo.gov
|
| Marilyn Patton, Wyoming Department of Health, marilyn.patton@health.wyo.gov
|
|
A typical Wyoming citizen who receives state services often receives those services from two or more state agencies. In 2006, the Wyoming Department of Family Services (DFS) and Wyoming Department of Health (WDH) recognized the growing need to discuss shared programmatic responsibilities and began meeting regularly. These agencies were soon joined by representatives from the Governor's Office, the Wyoming Department of Education (WDE), the Wyoming Department of Workforce Services (DWS), and the Wyoming Department of Corrections (DOC) to form the Planning Team for At-Risk Children, Youth, and Families (PTAC). The presenter will provide an overview of the team's organizational components and activities, as well as its identified core values and principles, and will highlight processes used by the team to ensure continuation of a sustainable framework for advancing human service policy initiatives. The presentation will showcase the comprehensive prevention model developed by the PTAC Prevention subcommittee.
| |
|
The Impact of Integrating Programs and Funding Streams on Evaluation
|
| Laura Feldman, University of Wyoming, lfeldman@uwyo.edu
|
| Rodney Wambeam, University of Wyoming, rodney@uwyo.edu
|
|
In this final presentation, researchers will discuss their work evaluating separate state-level projects that integrate programming, outcome-based goals, and funding streams. Presenters will discuss how integration affected evaluation design, measurement, and the technical assistance provided to the state and to sub-recipient communities. They will also discuss major challenges in evaluating integrated projects, as well as lessons learned from these evaluations. This topic is particularly relevant to attendees because of the recent recognition of the strong relationships among many public health issues. In light of this reality and the current economic crisis, more and more evaluators will face the challenge of assessing the impact of integrated programming.
| |
|
Session Title: Novices Evolving Evaluation Values Through Developmental Evaluation
|
|
Panel Session 820 to be held in Coronado on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Chair(s): |
| Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
|
| Discussant(s):
|
| David Williams, Brigham Young University, david_williams@byu.edu
|
| Abstract:
Graduate students in an introductory evaluation course compared Patton's Developmental Evaluation and several other texts on evaluation. They applied what they learned while evaluating an emerging technology-based learning suite being created simultaneously by their university's center for teaching and learning. The students created mind maps to summarize what they learned about values, valuing, realities of evaluation practice, their personal evaluation lives, contrasts among approaches, and essential principles of evaluation such as flexibility, authenticity, responsiveness and spontaneity. The students will share their mind maps using stories, video, music, dance, sports illustrations, and art to portray their evolving values and the values of stakeholders as these played out in this learning-by-doing evaluation apprenticeship. Their professor as discussant and Michael Patton as session chair will explore implications and respond to the students' efforts to dive into and begin interpreting and growing the field of evaluation. They will invite the audience to share reactions.
|
|
Eisner and Patton Meet the Learning Suite
|
| Anneke Majors, Brigham Young University, blueroosterdesign@gmail.com
|
|
Using Patton's Developmental Evaluation (2011) and Eisner's The Art of Educational Evaluation (1985) as a foundation, this student joined other students in evaluating an innovative suite of electronic learning/teaching tools. Others studied Patton alongside at least one other text; all students studied and shared ideas from current evaluation articles, websites, and their own metaevaluations of completed evaluations, and kept journals in which they reflected on their personal evaluation lives and evolving views of evaluation theory and practice. In this presentation, the student will share a mind map summarizing what they learned about values, valuing, realities of evaluation practice, their personal evaluation life, contrasts among approaches, and essential principles of evaluation such as flexibility, authenticity, responsiveness and spontaneity. They will employ stories, video, music, dance, sports illustrations, and/or art to portray their values and those of stakeholders as these played out in this learning-by-doing evaluation apprenticeship.
|
|
|
Morell and Patton Meet the Learning Suite
|
| Frederick Hyatt, Brigham Young University, fred.hyatt@gmail.com
|
|
Using Patton's Developmental Evaluation (2011) and Morell's Evaluation in the Face of Uncertainty (2010) as a foundation, this student joined other students in evaluating an innovative suite of electronic learning/teaching tools. Others studied Patton alongside at least one other text; all students studied and shared ideas from current evaluation articles, websites, and their own metaevaluations of completed evaluations, and kept journals in which they reflected on their personal evaluation lives and evolving views of evaluation theory and practice. In this presentation, the student will share a mind map summarizing what they learned about values, valuing, realities of evaluation practice, their personal evaluation life, contrasts among approaches, and essential principles of evaluation such as flexibility, authenticity, responsiveness and spontaneity. They will employ stories, video, music, dance, sports illustrations, and/or art to portray their values and those of stakeholders as these played out in this learning-by-doing evaluation apprenticeship.
| |
|
Fenwick, Parsons and Patton Meet the Learning Suite
|
| Jennifer Skeen, Brigham Young University, jaskeen@gmail.com
|
|
Using Patton's Developmental Evaluation (2011) and Fenwick and Parsons's The Art of Evaluation (2009) as a foundation, this student joined other students in evaluating an innovative suite of electronic learning/teaching tools. Others studied Patton alongside at least one other text; all students studied and shared ideas from current evaluation articles, websites, and their own metaevaluations of completed evaluations, and kept journals in which they reflected on their personal evaluation lives and evolving views of evaluation theory and practice. In this presentation, the student will share a mind map summarizing what they learned about values, valuing, realities of evaluation practice, their personal evaluation life, contrasts among approaches, and essential principles of evaluation such as flexibility, authenticity, responsiveness and spontaneity. They will employ stories, video, music, dance, sports illustrations, and/or art to portray their values and those of stakeholders as these played out in this learning-by-doing evaluation apprenticeship.
| |
|
Preskill, Catsambas, and Patton Meet the Learning Suite
|
| Pamela Luke, Brigham Young University, pambluke@gmail.com
|
|
Using Patton's Developmental Evaluation (2011) and Preskill and Catsambas's Reframing Evaluation Through Appreciative Inquiry (2006) as a foundation, this student joined other students in evaluating an innovative suite of electronic learning/teaching tools. Others studied Patton alongside at least one other text; all students studied and shared ideas from current evaluation articles, websites, and their own metaevaluations of completed evaluations, and kept journals in which they reflected on their personal evaluation lives and evolving views of evaluation theory and practice. In this presentation, the student will share a mind map summarizing what they learned about values, valuing, realities of evaluation practice, their personal evaluation life, contrasts among approaches, and essential principles of evaluation such as flexibility, authenticity, responsiveness and spontaneity. They will employ stories, video, music, dance, sports illustrations, and/or art to portray their values and those of stakeholders as these played out in this learning-by-doing evaluation apprenticeship.
| |
|
Session Title: Determining Indicators for Hard-to-Measure Health Outcomes in Youth
|
|
Think Tank Session 821 to be held in El Capitan A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s):
|
| Julia Alvarez, JVA Consulting LLC, julia@jvaconsulting.com
|
| Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
|
| Discussant(s):
|
| Nancy Zuercher, JVA Consulting LLC, nancy@jvaconsulting.com
|
| Andrea Giron, Denver Museum of Nature and Science, andrea.giron@dmns.org
|
| Katie McCune, JVA Consulting LLC, katie@jvaconsulting.com
|
| Abstract:
In this think tank session, participants will explore the challenges of determining indicators for diverse groups of participants engaged in a multiple-touch, school-based health science initiative implemented by a science and cultural organization over several years. The facilitators will present a case study and ask participants to engage in collaborative brainstorming to determine indicators that can help with the measurement of health outcomes in youth, while also meeting the needs of, and responding to the outcomes articulated by, funders and program staff. Session participants will join facilitated breakout groups representing the groups engaged with programming (students, teachers, or families) and will be asked to think through specific indicators. Breakout groups will present indicators to the larger group, and group leaders will facilitate a discussion about the challenges of determining indicators of health in the context of multifaceted, school-based programming with intended outcomes that extend beyond the walls of the school.
|
|
Session Title: What's New in the New Edition of the RealWorld Evaluation Book?
|
|
Demonstration Session 822 to be held in El Capitan B on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Presenter(s): |
| Jim Rugh, RealWorld Evaluation, jimrugh@mindspring.com
|
| Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
|
| Abstract:
The RealWorld Evaluation book and related workshops have been very popular with AEA audiences and with colleagues in other countries as well. Based on extensive feedback, and in response to global trends in the evaluation profession, a number of changes and additions have been made in the 2nd edition of the book, to be published by Sage in late 2011. During this session, authors Michael Bamberger and Jim Rugh will introduce what's new in the new edition.
The additions include a more extensive list and description of evaluation designs, alternatives to the statistical counterfactual, ways to reconstruct baseline data, a greater focus on the evaluation of complicated and complex programs, and the use of mixed methods and other ways to conduct evaluations given the practical realities of the real world.
|
| In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Guidelines for Evaluation in Accordance with the Federal State Educational Standards in Russia |
|
Roundtable Presentation 823 to be held in Exec. Board Room on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Mixed Methods Evaluation TIG
|
| Presenter(s):
|
| Marina Chelyshkova, State University of Management, mchelyshkova@mail.ru
|
| Victor Zvonnikov, State University of Management, zvonnikov@mail.ru
|
| Abstract:
Russia is now introducing new Federal State Educational Standards for its system of professional training. Instead of specifying requirements for the content of education, the new Standards specify requirements for the results of training, presented in the form of a competencies framework. Evaluating competence levels demands regular monitoring and assessment procedures, which in turn require guiding principles for their development.
Our paper presents the experience of the State University of Management, which heads the methodological association of the 480 Russian universities and institutes that provide education in the area of management. We propose a set of principles for developing evaluation theory and tools on the basis of the new Educational Standards.
|
| Roundtable Rotation II:
Applying Mixed Methods in Needs Assessment: Constructing a Mixed Methods Design to Assess the Training Needs of Testing Professionals from State-level Testing Organizations in China |
|
Roundtable Presentation 823 to be held in Exec. Board Room on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Mixed Methods Evaluation TIG
|
| Presenter(s):
|
| Hongling Sun, University of Illinois at Urbana-Champaign, hsun7@illinois.edu
|
| Abstract:
Given that needs assessment (NA) is usually related to complicated issues beyond need itself (DiLauro, 1979), a mixed methods approach is often suggested for NA. Although many NA practices use mixed methods, few published studies actually describe how to apply mixed methods in NA, and even fewer attend to the mixed methods literature for thoughtful planning. Adopting the conceptual frameworks developed by Greene, Caracelli, and Graham (1989), this presentation will demonstrate how a mixed methods approach can be used to design a training NA for testing professionals in state-level testing organizations in China, followed by a discussion of how this approach can be applied in the general domain of NA. By highlighting the critical components of mixed methods purposes and design through this example, I intend to help practitioners understand how to carefully plan a mixed methods study and improve the quality of NA.
|
|
Session Title: Methodological Issues in Feminist Evaluation
|
|
Multipaper Session 824 to be held in Huntington A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Feminist Issues in Evaluation TIG
|
| Chair(s): |
| Denice Cassaro, Cornell University, dac11@cornell.edu
|
|
A Feminist, Gender and Rights Perspective for Evaluation of Women's Health Programmes
|
| Presenter(s):
|
| Renu Khanna, Society for Health Alternatives, sahajbrc@yahoo.com
|
| Abstract:
This paper is based on my experience as a women's health advocate in India. Through my community-based work and feminist health research projects, I have developed a perspective on women's health which draws upon public health, a feminist analysis of health, and the framework of right to health. My worldview on evaluations has developed from experiences of being evaluated - I learnt valuable lessons on how evaluations should not be done. These lessons evolved into a set of values and ethical principles that I strive to follow when called upon to undertake evaluations. As a feminist evaluator, I design and conduct evaluations that privilege the perspectives of the 'user' community and the implementors. This paper lays out (i) my concept of women's health, (ii) values and ethical principles, (iii) insights into feminist evaluation of health programmes.
|
|
Explication of Evaluator Values: Timing Matters
|
| Presenter(s):
|
| Kathryn Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
|
| Abstract:
Feminist principles were incorporated into the evaluation of a substance abuse treatment program after the evaluation had been conceptualized. The design, outcomes, and methods were predetermined and reflected the values of the funding agency, grant writers and grantee; the evaluator began work after these foundational elements were established. While the evaluation's activities intentionally sought to understand the realities of program women's lives, structural and philosophical elements reflecting variability in values, and in the valuing of others, created challenges when interpreting data, particularly data related to key elements aligned with the feminist principles the evaluator valued. Language, methodologies and epistemological frameworks interwoven into the fabric of the program had the potential to foreclose the possibilities for truly understanding the impact of the program and the experiences of some women during the process. Infusing and adhering to the evaluator's values helped create a space for the voices of women that had been systematically removed from the analysis.
|
|
Challenging Gender Blindness in Conventional Evaluation
|
| Presenter(s):
|
| Silvia Salinas, Independent Consultant, ssalinas@entelnet.bo
|
| Fabiola Amariles, Learning for Impact Corporation, famariles@gmail.com
|
| Abstract:
The paper will discuss 'gender bias and blindness' in conventional evaluation, arguing that its definitions and frameworks come from a dominantly male environment and masculine notions of reality. It will explore changes needed in evaluation designs and some means to achieve them in practice. 'Gender responsive evaluation' (GRE) confronts the complex issue of evaluating changes in power relations, as well as capturing and measuring intangible, oftentimes highly subjective improvements in women's wellbeing and quality of life. Thus, GRE not only deals with the need to address gender gaps but implies reviewing the premises of conventional evaluation. Based on evaluations conducted at national and regional level, and experience linked to GRE institutionalization efforts, different conceptual, methodological and technical alternatives will be proposed to effectively and systemically capture gender outcomes and understand cause-effect relations. Furthermore, the paper aims to reflect on additional challenges deriving from multicultural contexts, ethnic revalorization and intercultural relations.
|
| | |
|
Session Title: Whom Does an Evaluation Serve? Aligning Divergent Evaluation Needs and Values
|
|
Panel Session 826 to be held in Huntington C on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Government Evaluation TIG
|
| Chair(s): |
| Dawn Smart, Clegg & Associates, dsmart@cleggassociates.com
|
| Discussant(s):
|
| George Grob, Center for Public Program Evaluation, georgefgrob@cs.com
|
| Abstract:
Programs carrying out evaluations as a condition of government or foundation funding often struggle to balance the grantmaker's dictates for data collection and reporting with the desire to make the evaluation beneficial internally. What is valuable to funders is sometimes inconsistent with what is useful for program improvement or other reporting, particularly when funders 'roll up' results across grantees. There may also be competing requirements among evaluation sponsors or organizational stakeholders, and sometimes a sponsor or funder has an internal conflict over how to use the evaluation. This panel will present perspectives of evaluators who work with organizations and at local, state, federal, tribal and international levels. Panelists will talk about their experiences managing the plurality of agendas, the challenges of balancing the priorities, and how they avoid getting caught between opposing groups. Session participants will discuss ways to ensure useful and honest evaluations for everyone.
|
|
Responding to State Sponsors and Stakeholders When Their Values and Evaluation Needs may be at Odds
|
| Rakesh Mohan, Idaho State Legislature, rmohan@ope.idaho.gov
|
|
Credibility is the only thing that saves the day for evaluators working in a politically charged environment. But where does this credibility come from? How does one establish and maintain credibility? Can we be responsive to the values and evaluation needs of sponsors and stakeholders while maintaining our independence from them as evaluators? The presenter will share his experiences working in a highly politically charged state legislative environment. He will use examples of evaluations done in the areas of K-12 public education and corrections to illustrate how evaluators can successfully navigate difficult situations involving conflicting and competing values and needs.
|
|
|
When an International Organization Gives Mixed Messages on the Purpose of the Evaluation
|
| Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
|
|
The presenter worked on an evaluation of an online M&E system for HIV/AIDS intended for global use by countries. In this evaluation, the client, an international organization, kept giving distinctly different messages, oscillating among "we want to kill the online program," "we want to build the online program," and "we want to stop building the online program but would like to continue supporting it." To make matters worse, the client asked the evaluation team to recommend whether the client should continue working on the program (essentially inviting the evaluation team to step right into the center of an internal conflict). The presenter will share the strategies used to navigate this treacherous ground and how the evaluation findings were used to facilitate an internal compromise and resolution.
| |
|
Strategies to Reconcile Differing Needs and Values: Examples from Federal and Foundation Evaluation Projects
|
| Dawn Smart, Clegg & Associates, dsmart@cleggassociates.com
|
|
The presenter will share her experience with two evaluations, one a federally-funded technical assistance program and one a foundation-funded initiative for a tribal organization. In each case, reconciling the grantmaker's prescribed evaluation focus and reporting requirements with the interests of the organization was a challenge. In order to address the needs of the clients and their respective funders, and stay within budget, a number of strategies were put in place. They included adjusting initial evaluation plans, dropping some measures in favor of others, supplementing the data collection, and perhaps most successful and creative, collaborating with another evaluation underway. In both cases, the questions of purpose, value, and balancing agendas and priorities were squarely on the table.
| |
|
Session Title: Evaluation Frameworks for College Access Programs
|
|
Multipaper Session 827 to be held in La Jolla on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the College Access Programs TIG
|
| Chair(s): |
| Rita O'Sullivan, University of North Carolina, rjosull@mindspring.com
|
|
Dean's Future Scholars: A Program Evaluation of a Research-based Program for First-Generation, Low-Income Students' High School Success and College Matriculation
|
| Presenter(s):
|
| James Beattie, University of Nevada, Reno, puck0401@yahoo.com
|
| Bill Thornton, University of Nevada, Reno, thorbill@unr.edu
|
| Abstract:
This paper will present a CIPP (Context, Input, Process, Product) evaluation of the Dean's Future Scholars program at a midsize research university. The evaluation used a mixed-methods approach to collect data. The context portion of the evaluation involved interviews with participants, the program director, and graduate assistants involved in the program; in addition, a review of previous research was conducted to provide comparison information. The input portion involved review of the budget, student workshops, staff involvement, and planned interventions. The process portion involved review, monitoring, documentation, and assessment of various program activities. The impacts, effectiveness, and sustainability of program interventions were considered. Finally, the transportability of the program, that is, the extent to which it is replicable at other institutions, will be addressed.
|
|
Evaluating High School Pipeline Programs: Med-Start Academic Enrichment Program to Enhance Higher Education in Science and Health Careers
|
| Presenter(s):
|
| Laurie Soloff, University of Arizona, lsoloff@email.arizona.edu
|
| Linda Don, University of Arizona, ldon@email.arizona.edu
|
| Oscar Beita, University of Arizona, obeita@email.arizona.edu
|
| Patricia Rodriguez, University of Arizona, aprodrig@email.arizona.edu
|
| Andrew Stuck, University of Arizona, astuck@email.arizona.edu
|
| Ana Maria Lopez, University of Arizona, alopez@azcc.arizona.edu
|
| Abstract:
Med-Start (MS) is an academic enrichment program for high school students from under-served, low-income and under-represented minority backgrounds that supports college enrollment and the pursuit of careers in science and health care. MS goals are not only to strengthen individual achievements and increase student diversity, but also to enhance the future healthcare workforce. The MS program has been offered at the University of Arizona since 1969; its evaluation plan now includes short-term measures of self-efficacy, motivation and knowledge, as well as long-term outcomes.
Participants' pre- and post-program self-efficacy, intentions, and knowledge regarding pursuit of science/health careers have been analyzed. Results of a longitudinal study include educational achievements, career outcomes and intentions of program participants (N=127). Use of DatStat Illume online survey design and email management enhanced response rates and data quality. Descriptive analysis includes demographics and disadvantaged characteristics. Quality improvement and satisfaction data support formative evaluation.
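In its simplest form, a pre/post comparison of this kind reduces to a paired test on participants' scores. The Python sketch below is illustrative only; the file and column names are hypothetical and do not reflect the Med-Start instruments or data.

    # Paired pre/post sketch: mean gain and paired t-test on a
    # hypothetical self-efficacy scale, one row per participant.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("medstart.csv")  # hypothetical survey export
    gain = (df["self_efficacy_post"] - df["self_efficacy_pre"]).mean()
    t, p = stats.ttest_rel(df["self_efficacy_post"], df["self_efficacy_pre"])
    print(f"mean gain = {gain:.2f}, t = {t:.2f}, p = {p:.4f}")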
|
|
At Home in College: A Comprehensive Evaluation of a New College Transition Program in New York City
|
| Presenter(s):
|
| Drew Allen, City University of New York, drew.allen@mail.cuny.edu
|
| Stefanie Bruno, City University of New York, stefanie.bruno@mail.cuny.edu
|
| Abstract:
At Home in College is a new college transition program offered by the City University of New York (CUNY) that works with students from 35 New York City public high schools and three GED programs who are on track to graduate but who have not met traditional benchmarks of college readiness. The program aims to increase the college enrollment and retention rates of these student populations, with the long-term goal of increasing college graduation rates. This paper presents a comprehensive evaluation of the program's implementation and impact during its first two years. A quantitative multiple regression analysis utilizing a difference-in-differences framework is combined with results from surveys, focus groups, and interviews to provide a clear and nuanced picture of the program's effectiveness. Evaluation findings are followed by a discussion of how real-time feedback of evaluation results has been actively used by administrators and on-the-ground educators to help improve the program.
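For readers unfamiliar with the estimation strategy, a minimal difference-in-differences sketch in Python follows. It is illustrative only: the data file, variable names, and model specification are hypothetical and do not reproduce the CUNY analysis.

    # Difference-in-differences sketch: two groups (program vs. comparison
    # schools), two periods (before vs. after rollout). The coefficient on
    # the interaction term is the DiD estimate of the program effect,
    # under the usual parallel-trends assumption.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("enrollment.csv")  # hypothetical student-level data

    # enrolled: 0/1 college enrollment; treated: 1 if in a program school;
    # post: 1 if observed after program rollout
    model = smf.ols("enrolled ~ treated + post + treated:post", data=df)
    result = model.fit(cov_type="HC1")    # heteroskedasticity-robust SEs
    print(result.params["treated:post"])  # the DiD estimate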
|
|
21st Century Framework to Evaluate Higher Education Programs: Designed to Increase Minority Student Success
|
| Presenter(s):
|
| Greg Richardson, Azusa Pacific University, gdrichardson@apu.edu
|
| Abstract:
Underrepresentation of minority subgroups in higher education foreshadows a future imbalance in the diverse insights needed to adequately resolve complex problems within our multicultural nation. Far too many students are leaving higher education without degree attainment, and academic inability is not the key culprit. Proven solutions identified by leading researchers were analyzed and compiled to construct a new framework for student success in higher education. Findings reveal that efforts are required to address both the external and internal factors affecting at-risk students. This paper addresses the fragmentation and isolation currently existing in higher education that preclude underrepresented groups from completing degree program requirements. It examines the effectiveness of frameworks used to evaluate existing diversity initiatives at colleges and universities, and the methodologies needed to develop the institutional collaboration required to reverse the dropout trend.
|
| | | |
|
Session Title: The Evolution of Evaluation Within a National Science Foundation Program: Integrating Policy, Practice, and Vision
|
|
Multipaper Session 828 to be held in Laguna A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Research on Evaluation TIG
|
| Chair(s): |
| Arlen Gullickson, Western Michigan University, arlen.gullickson@wmich.edu
|
| Discussant(s): |
| Connie Kubo Della-Piana, National Science Foundation, cdellapi@nsf.gov
|
| Abstract:
In 1993, Congress established the National Science Foundation's Advanced Technological Education program. The first request for proposals did not direct proposers to include an evaluation component in their proposals. In contrast, the 2010 program solicitation called for proposers to "establish claims as to the project's effectiveness," and describe how the evaluation will "provide evidence on the extent to which the claims are realized." This session includes (a) an examination of almost two decades of ATE program solicitations, revealing NSF's increasing value of evaluation and heightened expectations for grantees to have their projects evaluated in substantial and meaningful ways; (b) an analysis of current grantees' responses to the "claims and evidence" requirement; and (c) a discussion of results of a facilitated dialogue among ATE stakeholders about how grantees are currently addressing the requirement and what can be done to bridge the gap between NSF's demands and the priorities of the grantees.
|
|
Evaluation Expectations Expressed in National Science Foundation Advanced Technological Education Program Solicitations: An Analysis of Changes in De Facto Evaluation Policy since 1993
|
| Lori Wingate, Western Michigan University, lori.wingate@wmich.edu
|
|
The National Science Foundation is a major force in shaping policies and practices around program evaluation. Its growing interest in evaluation, as well as subtle shifts in its evaluation priorities over time, has implications for how evaluators serve their NSF-funded clients. An examination of almost two decades of solicitations for NSF's Advanced Technological Education program reveals a significant increase in both the attention given to evaluation and the specificity of the evaluation requirements placed on grantees. The 1993 solicitation contained only one reference to evaluation. In contrast, the 2010 solicitation conveys specific expectations for evaluation for each type of project and instructs proposers that they must allocate a portion of their budget to an external evaluator. Moreover, an overarching expectation is articulated as follows: "The PI should establish claims as to the project's effectiveness, and the evaluative activities should provide evidence on the extent to which the claims are realized."
|
|
Evaluative Claims and Evidence: Current Practice
|
| Carl Westine, Western Michigan University, carl.d.westine@wmich.edu
|
|
In the grant-making world, there is a growing emphasis on improving the usefulness of evaluations in terms of providing meaningful feedback to grantees, as well as serving the primary funder's role in securing financial support from donors and government appropriations. Large funding bodies like the National Science Foundation are making increasingly specific requirements for evaluation. In a 2011 survey of all NSF Advanced Technological Education grantees, principal investigators were asked to "Provide an example of a claim or impact related to their project/center." Additionally, they were asked, "What is the evidence to support this claim or impact?" A qualitative analysis of survey responses indicates a lack of clarity and consistency among grantees' perceptions as to what constitutes evaluative claims and evidence. In an era of shallower pockets and tightening budgets, understanding how to properly make claims and provide clear evidence of effectiveness is essential for both grantees and program funders.
|
|
Articulating a Vision for Program-wide Evaluative Claims and Evidence: Giving Voice to Stakeholders
|
| Jane Ostrander, Truckee Meadows Community College, ostranderjane@mac.com
|
| Peggie Weeks, Lamoka Educational Consulting, pegweeks@gmail.com
|
|
The National Science Foundation's Advanced Technological Education program has been in existence for nearly two decades. By meeting in person annually and collaborating across projects, this program's grantees have developed a sense of community. They share a common vision about their roles in improving technological education and are deeply aware of the challenges associated with this work and evaluating the impact of their efforts. NSF's requirement that principal investigators establish claims about their effectiveness and provide evidence on the extent to which the claims are realized was issued with little guidance. In this presentation, we share the results of a facilitated dialogue among community members (grantees, evaluators, and program officers) about the extent to which grant-level evaluations are meeting this "claims and evidence" requirement and what needs to happen to bridge the gap between the demands of NSF and the priorities of the grantees.
|
|
Session Title: Evaluating the Effects of Media for Social Change in Fragile Environments
|
|
Panel Session 829 to be held in Laguna B on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Ratiba Taouti-Cherif, Search for Common Ground, rtcherif@sfcg.org
|
| Abstract:
Media for social change has been a central component of Search for Common Ground's approach to peacebuilding. Over the last four years, SFCG has engaged in an investigative process to identify appropriate ways to evaluate the outcomes of its media interventions, which include TV and radio episodic drama. Measuring the effects of media on audiences' knowledge, attitudes and behavior has been a central research question for SFCG and its partners. The research journey took us from cohort to cultivation methodologies. The multiplicity of tools, approaches and contexts has never been discussed in any evaluation forum with specialists from the fields of media, peacebuilding, behavior change communication and evaluation. The present set of papers examines how media interventions affect cultural and social identity in ways that positively influence social change.
|
|
Transformational Media in Conflict-Affected Countries: Measuring the Intangible
|
| Ratiba Taouti-Cherif, Search for Common Ground, rtcherif@sfcg.org
|
| Rebecca Besant, Search for Common Ground, rbesant@sfcg.org
|
|
Central to the work of Search for Common Ground (SFCG) in transforming the way people deal with conflict has been the use of radio and TV dramas. Finding effective ways of evaluating TV and radio dramas in conflict and post-conflict environments has therefore been an important aspect of SFCG's work. This paper will examine the successes and challenges of evaluative research in the field, along with solutions SFCG has identified through experience. The paper will look at three case studies: Sierra Leone, Cote d'Ivoire and the DRC. We will discuss methodological issues of project logic, indicators, sampling, access, data collection tools and analysis from a local capacity perspective, together with the challenges, risks and weaknesses associated with such evaluations. We will also address the impact of external events within a fragile environment and the coordination of strategies between donors and partners, as these affect program design, implementation and evaluation.
|
|
|
Cultivation of Pro-Social, Pro-Democratic Norms and Conflict Solving Scripts in an African Audience
|
| Helena Bilandzic, Augsburg University, helena.bilandzic@phil.uni-augsburg.de
|
| Rick Busselle, Washington State University, busselle@wsu.edu
|
| Jean Brechman, University of Pennsylvania, jbrechman@gmail.com
|
|
This study investigates cultivation effects of an entertainment-education series produced by a global NGO for a national audience in Kenya. It explores how the series strengthens norms of peaceful conflict resolution, tolerance and political engagement, and extends existing research by exploring actual conflict-solving scripts (measured by culturally adapted narrative vignettes) and issue reflection as specific cultivation effects. It is hypothesized that, apart from exposure, narrative engagement facilitates cultivation effects, mainly through stronger emotional and cognitive rehearsal of message-relevant plots and by preventing resistance to overtly persuasive messages. Different media settings (e.g., home versus mobile cinema) and ethnic background were considered as culturally specific factors. In face-to-face interviews, a cross-sectional sample of 544 Kenyans was surveyed around Nairobi and Eldoret. Results indicate that exposure is related to attitudes and reflection, and to the selection of conflict resolution options in response to a conflict script. Narrative engagement predicts most outcome variables.
| |
|
Evaluating Entertainment-Education Series: A Narrative Approach
|
| Helena Bilandzic, Augsburg University, helena.bilandzic@phil.uni-augsburg.de
|
| Rick Busselle, Washington State University, busselle@wsu.edu
|
| Jean Brechman, University of Pennsylvania, jbrechman@gmail.com
|
|
This paper presents findings of an evaluation of a narrative approach to entertainment-education in Nigeria. The study hypothesized that the more viewers experience narrative engagement with an entertainment-education program, the more they will adopt attitudes implied in the series. In a prolonged exposure experiment, Nigerian university students watched 13 episodes of the E-E series "The Station". Narrative engagement, enjoyment, perceived realism and character liking were measured after each exposure. Effects measures, including conflict resolution (violence vs. dialogue), empowerment, tolerance and mutual respect (ethnicity, religion), social responsibility, and social/political engagement, were collected from an experimental group after the final episode; a control group provided a baseline for these measures, collected prior to exposure. Results will be presented and, drawing on narrative persuasion theory, practical implications about possible strengths and weaknesses of E-E programs will be discussed, along with suggestions on how to better achieve E-E goals through narrative in the future.
| |
|
Session Title: Incorporating Stakeholder Values and Best Practices in Health Evaluation
|
|
Multipaper Session 831 to be held in Lido C on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Robert LaChausse, California State University, San Bernardino, rlachaus@csusb.edu
|
|
Using Stakeholder Values to Determine Evaluation Questions in Health Evaluation
|
| Presenter(s):
|
| Robert LaChausse, California State University, San Bernardino, rlachaus@csusb.edu
|
| Abstract:
Evaluators have long been encouraged to take into account what stakeholders believe to be important when determining evaluation questions and standards. Many approaches to program evaluation emphasize the importance of consulting stakeholders but fail to articulate how one should incorporate the values of stakeholders in determining the evaluation's focus. Although evaluation conclusions are often based on the analysis of data or other evidence, they are often interpreted through the values of stakeholders. The use of stakeholder interviews and checklists can be useful in helping to determine evaluation questions and to increase evaluation use. An innovative approach to identifying stakeholder values and selecting evaluation questions will be presented. This paper will increase participants' competencies in determining evaluation questions and standards while fostering partnerships with diverse stakeholders when evaluating health programs. An example from a health promotion program for an ethnically diverse population will be used to illustrate these concepts and methods.
|
|
The Value in Identifying Best Practices: Moving from Promising to Best
|
| Presenter(s):
|
| Aisha Tucker-Brown, Centers for Disease Control and Prevention, atuckerbrown@cdc.gov
|
| Rachel Barron-Simpson, Centers for Disease Control and Prevention, rbarronsimpson@cdc.gov
|
| Marnie House, ICF Macro, mhouse@icfi.com
|
| Abstract:
Identifying best practices that can be replicated to achieve a greater impact is an important use of evaluation, but often difficult to accomplish. The Division for Heart Disease and Stroke Prevention (DHDSP) at the Centers for Disease Control and Prevention seeks best practices in heart disease and stroke prevention to assist funded agencies in identifying effective ways to invest resources. First, promising practices are identified through the use of the Systematic Screening and Assessment (SSA) method and then further investigated through rigorous evaluation. Through DHDSP's initial SSA process, three practices were identified as evaluable. The Kaiser Permanente Colorado Hypertension Management Program was rated highest by the expert panel to undergo rigorous evaluation with a focus on identifying core components of the program needed for replication. This presentation will focus on the value of identifying best practices and the process DHDSP is using to identify them for use by their grantees.
|
|
Stakeholder Learning in a Developmental Evaluation
|
| Presenter(s):
|
| Catherine Donnelly, Queen's University, Kingston, donnelyc@queensu.ca
|
| Judy Woods, Queen's University, Kingston, judy.woods@queensu.ca
|
| King Luu, Queen's University, Kingston, king.luu@queensu.ca
|
| Lyn Shulha, Queen's University, lyn.shulha@queensu.ca
|
| Abstract:
Developmental evaluation is an approach to evaluation that focuses on the development process of a program or organization. A major component involves collaboration with stakeholders, whereby an evaluator's perspective is introduced into the development process. While the literature has described the influence of evaluation on stakeholders and organizations, there is little empirical evidence to support the claims of knowledge gain and changes in behaviour that result from participation in a developmental evaluation.
The purpose of this study was to examine the learning and behavioural changes experienced by a program's development team as they participated in a developmental evaluation. A new day rehabilitation program for stroke patients served as the context for the study. Team members participated in three semi-structured interviews throughout the evaluation and were asked what they had learned about the program and what changes they had made to their practice as a result of the evaluation.
|
|
Successful Strategies for Evaluating Public Health Partnerships
|
| Presenter(s):
|
| Jessica Rice, University of Wisconsin, rice4@wisc.edu
|
| Courtenay Kessler, University of Wisconsin, courtenay.kessler@aurora.org
|
| Abstract:
Funders and community organizations have increasingly emphasized the value of partnerships as an integral part of any successful and sustainable public health intervention. Our evaluation team, housed within an academic setting, provides evaluation services for community organizations and academic partners. Over time, we have had many requests to evaluate the development and growth of community/academic, community/community, and community/government partnerships. Our evaluation approach depends greatly on the partners, timeline and scope of the project. We work with the community groups to help them define the partnerships within their projects and then use a variety of models and tools to evaluate the partnership. This presentation will discuss models of public health partnerships, illustrate the steps we take in evaluating these partnerships, discuss our experiences with several commonly used tools and provide examples of partnerships we have evaluated. We will conclude with practical suggestions for other evaluators asked to perform this task.
|
| | | |
|
Session Title: Evaluating Health Research Impact: Implementation of the Canadian Academy of Health Sciences (CAHS) Model
|
|
Panel Session 832 to be held in Malibu on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
|
| Abstract:
This panel will further thinking on the contextual nature of evaluating health research, address methodological challenges, and pose questions and considerations for others interested in implementing a framework developed by the Canadian Academy of Health Sciences (CAHS) or extending the model to other fields. The first presenter was part of the expert panel brought together to develop the CAHS model for assessing the return on investment for health sciences research. She will describe the development and strengths of the impact logic model and limitations in generalizing this logic to other areas of research. Then we introduce two case studies that will show the experiences of different Canadian health research organizations in implementing the model. A special emphasis will be on how the model was adapted to meet organizational needs, what enhancements were made and how the model was integrated into existing evaluation systems.
|
|
Building a Complex Health Research Logic Model: Making Pathways to Impacts Clear
|
| Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
|
|
A complex, one-page logic model for the return on investment for health sciences research was developed by the expert panel brought together for a Canadian Academy of Health Sciences (CAHS) major assessment in 2007-2008. This logic model has several features that are very important if it is to frame assessments similar enough to feed evaluations that synthesize across studies. The model borrowed agreed-upon theory and definitions to specify aspects of the healthcare system as well as the environment that affects health and individual behaviors. It also categorized the institutions and actors that form the pathways through which advances in the various areas of science travel to impact. Individual studies following this logic can look for these pathways to impact, noting other parts of the logic as context. Reasons it will be difficult to replicate this logic in other areas of research will be discussed.
|
|
|
The Call to Action: Building a Network That Links Evaluation to Social Benefit
|
| Inez Jabalpurwala-Graham, Graham Boeckh Foundation, inez@bccl.ca
|
|
There has been little progress in the treatment of mental illnesses in the past two decades, despite substantial research spending around the world. The Graham Boeckh Foundation is a private foundation created by J. Anthony Boeckh and his family to fund initiatives in the area of mental health. The Foundation partnered with RAND Europe to establish "The Science of Science Mental Health Research Network" with the purpose of understanding how the mental health research system works and how such research is translated (or not) into patient benefit. The Network involves mental health scientists, practitioners, policy researchers interested in the science of science, and research funders. By involving funders, the Foundation hopes that they will act on the research findings when making future funding decisions. The Network's flagship research project is "Mental Health Retrosight," a multinational study that will evaluate the translation and payback from basic or early clinical mental health research into clinical application and community practice.
| |
|
Implementation of an Evaluation Model for Evaluating Complex Health Research Outcomes
|
| Kathryn Graham, Alberta Innovates Health Solutions, kathryn.graham@albertainnovates.ca
|
| Heidi Chorzempa, Alberta Innovates Health Solutions, heidi.chorzempa@albertainnovates.ca
|
| Daniel Zhang, Alberta Innovates Health Solutions, daniel.zhang@albertainnovates.ca
|
|
Alberta Innovates Health Solutions (AIHS) is a publicly funded, not-for-profit, provincial health research organization in Canada, mandated to improve the health and socio-economic wellbeing of Albertans through health research and innovation support. Funding agencies around the world are being asked to better measure the impacts that result from their funded investments. However, measuring outcomes in this context is challenging, with issues ranging from multiple stakeholder perspectives to attribution problems and time lags between funding and the realization of socio-economic impact. To address some of these methodological issues, AIHS tested the CAHS model retrospectively, through a review of grantee information, and prospectively, by piloting the model with two funding programs. The model was chosen because it aligned with our organizational mission, and it has been integrated into our organizational performance monitoring and evaluation framework. The discussion will focus on the implementation approach, gaps identified, methodological challenges and lessons learned.
| |
|
Session Title: International Views on Assessing Knowledge and Technology Transfer
|
|
Multipaper Session 833 to be held in Manhattan on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| George Teather, Performance Management Network, george.teather@pmn.net
|
|
Results Based Evaluation on the Appraisal Process of a Technology Transfer Program
|
| Presenter(s):
|
| Yukio Kemmochi, Japan Science and Technology Agency, kemmochi@jst.go.jp
|
| Abstract:
Results-based evaluation is a powerful management tool that supports greater accountability (Kusek & Rist). We adopted this method to assess the appraisal process of the Contract Development Program, a technology transfer program that has been overseen by the Japan Science and Technology Agency (JST) for more than fifty years. The appraisal process of the program tends to focus on the technology to be transferred. Meanwhile, as far as the private enterprise is concerned, the technology transfer project is only one part of the company's activities, and it is natural to consider that the company's general activities and economic environment also affect the development of a technology transfer project. In this paper, we evaluate the appraisal process against the commercialization results of the program.
|
|
Assessing the Effects of a Collaborative Research Funding Scheme: An Approach Combining Meta-Evaluation and Evaluation Synthesis
|
| Presenter(s):
|
| Barbara Good, Technopolis Group, barbara.good@technopolis-group.com
|
| Abstract:
The Swiss Innovation Agency CTI has administered its collaborative research funding scheme since the early 1980s. Between 1989 and 2002 the scheme was evaluated 14 times. In a study combining meta-evaluation and evaluation synthesis, these evaluations were assessed against the evaluation standards of the Swiss Evaluation Society (http://www.seval.ch/en/standards/index.cfm). The meta-evaluation showed that the evaluations conducted were mostly qualitative, internal and ex post, and that the evaluation culture at CTI was selective: only research institutes and firms that carried out a large number of CTI projects were evaluated regularly. The evaluations under study differed in quality, with most evaluation standards being fulfilled fairly to very well. The results of the meta-evaluation were central to the ensuing evaluation synthesis, providing information on the quality of the evaluations. The synthesis compiled the mostly qualitative results of the evaluations. There were strong indications that CTI funding does have a variety of effects.
|
|
Best Practices in the Transformation of Research Knowledge to Application: A Case Study
|
| Presenter(s):
|
| George Teather, Performance Management Network, george.teather@pmn.net
|
| Suzanne Lafortune, Performance Management Network, suzanne.lafortune@pmn.net
|
| Abstract:
One of the major issues for science policy and programs relates to the need to improve the linkage between knowledge created through research and its application for public good and private benefit. This difficulty is often described as the 'Valley of Death'.
This presentation presents the results of a recent evaluation of the Natural Resources Canada Clean Electrical Power Generation (CEPG) program, which supports R&D to reduce the environmental impact of Canada's electrical power generation system and increase the efficiency of fossil-fuelled power systems. The study examined the mechanisms by which knowledge was generated and applied. It found that CEPG had engaged many of the major public and private sector stakeholders who need to take up and apply the new and improved technologies developed through the program; in many cases, these stakeholders identify priorities, contribute funding, and help select projects. The study found that the major pathways for the application of R&D knowledge in the private sector were the development of prototypes, demonstrations and field trials. For the public sector, R&D led to the development of new and revised policies and regulations related to fossil-fuelled power generation and related emissions.
The presentation will use several case studies to show various approaches used by CEPG to engage stakeholders.
|
|
Translating New Knowledge from Technology Based Research Projects: A Randomized Control Study of an End-of-Grant Intervention
|
| Presenter(s):
|
| Vathsala Stone, University at Buffalo, vstone@buffalo.edu
|
| Machiko Tomita, University at Buffalo, machikot@buffalo.edu
|
| Abstract:
The concept of Knowledge Translation (KT) responds to a current concern about obtaining beneficial social impact from research, particularly research sponsored through public funding. This paper will present the methodology and results from a KT intervention study close to completion at the University at Buffalo. The study evaluates a KT strategy for new knowledge generated by technology-based research and development projects. A randomized control design is used to compare effects on Knowledge Use (KU) among the intervention, the traditional dissemination method, and a control group. A Level of Knowledge Use Survey (LOKUS) was developed to measure effects, and Contextualized Knowledge Packages (CKPs) were developed as KT components. An independent investigation established the psychometric soundness of the LOKUS. Participants represent six categories of potential knowledge users related to Augmentative and Alternative Communication technology: consumers, manufacturers, brokers, clinicians, policymakers, and other researchers. The presentation will cover the intervention effects and the development of the LOKUS and the CKPs.
|
| | | |
|
Session Title: A Role for Higher Education in Educational Evaluation
|
|
Panel Session 834 to be held in Monterey on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| John Yun, University of California, Santa Barbara, jyun@education.ucsb.edu
|
| Abstract:
Even as governments and foundations across the country demand research-based programs and rigorous evaluations, state and local governments are forced to cut essential educational and social services, which has decreased support for the evaluations that inform critical educational decisions. This mismatch between high expectations for evaluation to improve practice and dwindling support for such work requires solutions.
Higher education must play a critical role in assisting the broader education sector in meeting these challenges. However, the details of such collaborations have generally been difficult to broker and manage over time, so while their potential is incredibly high, it has rarely been realized. In this session, through examples and argumentation, the University of California Educational Evaluation Center (UCEC) will propose a new framework for sustainability in this relationship to meet these myriad needs.
|
|
Towards a Framework for Sustainability: The Role of Higher Education in Educational Evaluation
|
| John Yun, University of California, Santa Barbara, jyun@education.ucsb.edu
|
|
As we move into a new cycle of dwindling funds and increased expectations for what data-based evaluations can deliver, a new framework for sustainability is needed to guide institutions of higher education and their partners toward fruitful collaborative relationships. Issues such as turnover in local educational groups, the political realities of educational institutions, and higher education's traditional focus on grant-based research projects have created challenges and incentives that work against sustained and sustainable relationships. A framework that guides local agencies and institutions of higher education through the process of developing stronger collaborative relationships, and that highlights the potential benefits and challenges they may encounter, is critical to ensuring that more sustained collaborations of this type are created and maintained. In this presentation, Dr. John Yun will provide the parameters of such a framework and explore the questions that remain unanswered.
|
|
|
Building Local Evaluation Capacity
|
| Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
|
|
Given the traditional role of institutions of higher education as propagators of knowledge and skills, it is clear that they have a key role to play as training grounds for the next generation of evaluation professionals. However, as the reality of dwindling budgets reduces the ability of educational organizations to solely rely on trained evaluation specialists, and given the more urgent need for evaluators to provide information that can be most effectively utilized within organizations, it may become necessary for colleges and universities to re-think the ways in which they work to facilitate the evaluation of programs and processes. Understanding the ways in which higher education can facilitate this transition towards useful evaluations is critical in this changing environment. Dr. Christina Christie will illustrate the multiple ways that she is working to understand and reshape this role.
| |
|
Sustainable Models of District-University Collaboration
|
| Julian Betts, University of California, San Diego, jbetts@ucsd.edu
|
|
Dr. Julian Betts will discuss the eleven-year-old collaborative relationship that has developed between UC San Diego and the San Diego Unified School District, which is the second largest in California. He will summarize past and current research efforts and relevant policy findings. He will also discuss the formalization of this effort into the recently formed San Diego Education Research Alliance (SanDERA) and the elements that led to a stable and growing collaborative research relationship, which has now produced several noteworthy evaluations of education reform.
| |
|
Scaling Up: Higher Education in Evaluation for Policymaking
|
| Michal Kurlaender, University of California, Davis, mkurlaender@ucdavis.edu
|
|
While it is important for higher education to fulfill its traditional role of training and knowledge generation, and to explore new sustainable relationships with local districts and organizations that generate useful and rigorous evaluation studies in an environment of shrinking resources, it is also critical to find ways to work across districts and create viable economies of scale that allow exploration of the many issues that arise across districts. This presentation will explore ways in which a system or consortium of colleges or universities can leverage individual institutional relationships to address broader questions about teaching, learning, and program effectiveness, and to take advantage of the economies of scale that could emerge from common assessments and the exploration of common questions.
| |
|
Session Title: Logic Modeling and Performance Measurement for Transforming and Sustaining Organizational Performance
|
|
Think Tank Session 835 to be held in Oceanside on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Business and Industry TIG
|
| Presenter(s):
|
| Michael Coplen, Federal Railroad Administration, michael.coplen@dot.gov
|
| Discussant(s):
|
| Michael Coplen, Federal Railroad Administration, michael.coplen@dot.gov
|
| Joyce Ranney, Volpe Transportation Center, joyce.ranney@dot.gov
|
| Jonathan Morell, Fulcrum Corporation, jamorell@jamorell.com
|
| Abstract:
Attendees will help us improve our ability to bring transformative organizational change to industry. The session will describe successful multi-year organizational change programs that incorporated group problem solving and changes in organizational culture. While successful, these programs have been difficult to sustain. A major difficulty is that participating companies seldom integrate appropriate performance measurement systems with the innovation. Our program theory for the design and implementation of performance measurement systems to sustain organizational change is rudimentary at best. The session will begin with an overview of the programs, their outcomes and impacts, and a proposed theory of how to sustain long-term organizational change through logic modeling and the institutionalization of performance management systems. After discussion, breakout groups will sketch logic models of appropriate performance measurement systems designed to sustain organizational performance. The session will conclude with a comparison of the models and a discussion of their relative strengths and weaknesses.
|
|
Session Title: New Approaches to Measuring Poverty, Value Added Components, and Addressing Related Issues
|
|
Multipaper Session 836 to be held in Palisades on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Human Services Evaluation TIG
and the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Vajeera Dorabawila,
New York State Office of Children and Family Services, vajeera.dorabawila@ocfs.state.ny.us
|
|
Designing an Indicator-based Profile for Evaluating and Identifying Poverty Solutions
|
| Presenter(s):
|
| Ann A O'Connell, The Ohio State University, aoconnell@ehe.osu.edu
|
| Dawn Wallace-Pascoe, The Ohio State University, wallace-pascoe.103@osu.edu
|
| Tamar Mott Forrest, The Ohio State University, forrest.97@osu.edu
|
| Abstract:
In this paper we review different approaches to the measurement of poverty, and present our work on development of a multidimensional indicator-based system designed to establish community profiles across multiple domains affected by poverty: health, food security, housing, transportation, economic development, employment, safety, family and child well-being, education, community infrastructure, access to community resources, social inclusion, and environmental resources. Given the complexity of poverty and the importance of poverty measures in the evaluation of interventions or programs designed to alleviate poverty, we argue that this holistic approach may enhance capacity for evaluators and researchers to recognize successful solutions to poverty based on a range of indicators. Clarifying how different profiles of communities respond to poverty interventions may help strengthen the design and implementation of these efforts. We discuss the application and interpretation of our community profiles to current poverty intervention sites at the local, regional, national, and international levels.
|
|
Assessing the Economic Impact of Project Funds on a Community: A 'Value-Added' Component
|
| Presenter(s):
|
| Judith Inazu, University of Hawaii, inazu@hawaii.edu
|
| Debbie Gundaya, University of Hawaii, gundaya@hawaii.edu
|
| Abstract:
Externally-funded community capacity-building projects infuse funds into a community to improve the infrastructure, provide expertise, develop leadership, and so on. The project is generally evaluated by focusing on impacts on community capacity. Rarely, however, are the economic benefits of those funds assessed. A method for documenting the economic impact of project funds, using an Input-Output model developed by state economists, is provided. The Input-Output model tracks the flow of financial transactions to and from economic sectors (e.g., transportation, agriculture). Each sector is assigned a multiplier (based on census data and previous research), which reflects the impact of each dollar spent in that sector. Detailed expenditure data from project records were entered into the model. Results show that each dollar invested by the project resulted in a two-fold increase in economic benefit to the community. This method provides an approach for documenting the economic 'value-added' of project funds that is often neglected in evaluation studies.
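To make the multiplier arithmetic concrete, here is a minimal sketch of the kind of calculation an Input-Output model performs; the sectors, multipliers, and expenditure figures below are hypothetical illustrations, not data from this study.

```python
# A minimal sketch of the multiplier arithmetic described above.
# Sector names, multipliers, and expenditures are hypothetical
# illustrations, not figures from the evaluation itself.

expenditures = {          # project dollars spent, by economic sector
    "transportation": 40_000,
    "agriculture": 25_000,
    "professional_services": 85_000,
}

multipliers = {           # impact of each dollar spent in that sector
    "transportation": 1.8,
    "agriculture": 2.3,
    "professional_services": 1.9,
}

total_spent = sum(expenditures.values())
total_impact = sum(expenditures[s] * multipliers[s] for s in expenditures)

print(f"Invested: ${total_spent:,.0f}")
print(f"Estimated economic impact: ${total_impact:,.0f}")
print(f"Benefit per dollar invested: {total_impact / total_spent:.2f}")
```

With these invented multipliers, the benefit-per-dollar figure is what a statement such as "each dollar invested resulted in a two-fold increase in economic benefit" summarizes.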
|
| |
|
Session Title: Evaluating Environmental Education Programs: Lessons Learned From Three Examples
|
|
Multipaper Session 837 to be held in Palos Verdes A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Environmental Program Evaluation TIG
|
| Chair(s): |
| Nancy Carrillo,
Apex Education, nancy@apexeducation.org
|
|
Residential Environmental Education Program Evaluation Practices and Needs
|
| Presenter(s):
|
| Nicholas Bourke, Auburn Montgomery, nbourke@aum.edu
|
| Abstract:
This study presents results of a survey that examined the current program evaluation practices of residential environmental education centers and the needs of their directors in regard to program evaluations. Presently, a lack of quality evaluation has been noted in the area of environmental education. This is problematic given that evaluation is critical to the design of educational experiences. Survey respondents (n=114) were program directors of residential environmental education centers across the United States. The survey provided information detailing the program evaluation practices and needs of the center directors and their level of satisfaction with their present program evaluations.
Analysis of survey data revealed that residential centers evaluate programs using a variety of methods, but lack effective methods of evaluating important center goals. This study portrays the multi-dimensional needs residential centers would like to address in their processes of evaluation, and their need for assistance to improve their evaluation practices.
|
|
Evaluating a Wildlife Curriculum: Lessons Learned
|
| Presenter(s):
|
| Jesse Brant, Littlestown High School, brantj@lasd.k12.pa.us
|
| Ewing John, Penn State University, jce122@psu.edu
|
| Rama Radhakrishna, Penn State University, brr100@psu.edu
|
| Abstract:
Developing a wildlife curriculum is a challenging and time-consuming effort, and determining the appropriate format is critical to its successful implementation. The need for a Wildlife Notes curriculum emerged in order to increase knowledge and improve results in both the Envirothon Wildlife Category and the [State] Science Exam. Through this effort, environmental educators across the state will have access to a curriculum that helps them educate students about wildlife and environmental science. In this paper presentation, we describe the process we used to evaluate the curriculum, which was developed to meet the [State] environmental education standards. First, we describe how the curriculum was put together: proposing the idea to stakeholders, developing the curriculum, and testing it for content validity. Then we discuss the procedures used to evaluate the curriculum and the final editing done after incorporating suggestions from the evaluation.
|
|
Front-End Evaluation of Audiences' Prior Knowledge, Values and Belief Systems: Informing the Development of Effective Vehicles for Increasing Climate Literacy
|
| Presenter(s):
|
| Susan Burger, David Heil & Associates Inc, sburger@davidheil.com
|
| Gina Magharious, David Heil & Associates Inc, gmagharious@davidheil.com
|
| Lauren Russell, David Heil & Associates Inc, lrussell@davidheil.com
|
| Kasey McCracken, Oregon Health & Science University, mccrackk@ohsu.edu
|
| Abstract:
This paper describes a promising approach to increasing the relevance and utility of front-end evaluation for informing the development of educational programs, products, and services. Specifically, the evaluation addressed the question of how educators can best communicate with their audiences to promote climate literacy. Using the 'Six Americas' profiles identified by the Yale Project on Climate Change (Leiserowitz et al., 2009), the evaluation team conducted a front-end evaluation of science center visitors' values, beliefs, and knowledge about topics related to climate change. Continuing formative and summative evaluations of implemented projects addressed the efficacy of using the front-end data to promote effective approaches to engaging the public on the issue of climate change.
Leiserowitz, A., Maibach, E., and Roser-Renouf, C. (2009). Global warming's six Americas: An audience segmentation analysis. Yale Project on Climate Change and the George Mason University Center for Climate Change Research.
|
| | |
|
Session Title: Fourth Annual Asa G. Hilliard III Think Tank on Language and Evaluation
|
|
Think Tank Session 838 to be held in Palos Verdes B on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Presenter(s):
|
| Jenny Jones, Virginia Commonwealth University, jljones2@vcu.edu
|
| Discussant(s):
|
| Itihari Toure, The Jegna Collective, itiharit@gmail.com
|
| Cindy A Crusto, Yale University, cindy.crusto@yale.edu
|
| Derrick Gervin, Centers for Disease Control and Prevention, dgervin@cdc.gov
|
| Abstract:
This annual session recognizes the important and relevant contributions of Dr. Asa G. Hilliard III, an African American professor of educational psychology and African history, to the field of evaluation generally, and to culturally competent and responsive evaluation specifically. We will first provide a brief overview of Dr. Hilliard's life's work and then explore language as the context for understanding how values show up in evaluation methodologies, theories, and analyses. We will then work in small groups to discuss how evaluators incorporate values in their work and consider strategies for maximizing the use of words and language in evaluation practice. Finally, we will reconvene as a large group for a facilitated discussion that translates the learning across the small groups, exploring how these constructs affect values and valuing in evaluation and how they relate to the AEA cultural competency statement.
|
|
Session Title: Practical Issues in Theory-driven Evaluation
|
|
Multipaper Session 839 to be held in Redondo on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Program Theory and Theory-driven Evaluation TIG
|
| Chair(s): |
| Uda Walker,
Gargani + Company Inc, uda@gcoinc.com
|
|
Practice What You Preach: Towards a Theory of Using Program Theory in Evaluation
|
| Presenter(s):
|
| Jan Hense, University of Munich, jan.hense@psy.lmu.de
|
| Abstract:
The notion of using program theory (PT) in evaluation has become a mainstream idea in the evaluation discourse, and numerous supposed advantages are associated with it. Conceptually, it has been argued that evaluation approaches ignoring PT treat their evaluands as black boxes, since they pay no attention to the mediating mechanisms between programs and their outcomes. However, the use of PT in evaluation resembles a black box too, as no coherent theoretical framework explains the mechanisms linking the use of PT to better evaluations. This paper presents such a framework. It combines a process model of evaluation, to identify different functions of PT, with existing models of evaluation influence, to show how these functions can contribute to better evaluations. This work is expected to clarify the different functions of PT for evaluation and to make assumptions about the effects of PT empirically testable.
|
|
Practical Issues for Evaluation Theory and Application
|
| Presenter(s):
|
| Michael Laurendeau, Cathexis Consulting, mlauren@sympatico.ca
|
| Abstract:
Wherever the function exists, evaluation is constrained by a number of government policies related to public management and conditioned by competing practices stemming from the audit function. To position evaluation in this overall context, it is useful to remember that evaluation is essentially a research activity that tries to measure the impacts of public interventions (policies, programs, and initiatives) by establishing causality and dealing with attribution. This should be done by providing proper evidence of the causal relationships that exist, according to strategic plans, along chains of results ranging from inputs to end outcomes. Performance measurement also requires monitoring the implementation of program delivery against operational plans.
The presenter will discuss approaches to modelling chains of results (logic models) and program implementation (delivery process models) in order to clarify the program assumptions that must be tested by program evaluation.
|
| |
|
Session Title: Assessment in Community Colleges
|
|
Multipaper Session 840 to be held in Salinas on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Rhoda Risner,
United States Army, rhoda.risner@us.army.mil
|
|
Critical Indicators of a Successful QuickStart to College Program in Community Colleges: A Theory-Driven Evaluation Approach
|
| Presenter(s):
|
| Xin Liang, University of Akron, liang@uakron.edu
|
| Abstract:
This paper presents the development of an evaluation design built on the principles of theory-driven evaluation in a QuickStart to College project. The goal of the program was to offer low-wage, unemployed, and undereducated adults a free academic and student support services course to help them succeed in college and become employable. Three stages of evaluation questions were asked to test the causal, intervention, and action hypotheses of the QuickStart to College program theory. The reciprocal process of data collection and cross-validation among stages provided timely information on how the planned intervention was actually implemented and allowed us to check whether an unsuccessful intervention was due to implementation failure or program design failure. In return, the cross-validation of the assumptions provided evidence to refine the program theory and helped generate six critical indicators for a successful QuickStart to College model scalable to community colleges across the state.
|
|
Meeting Students' Informational Needs: A Data Analysis Tool to Assess the Quality of Information on Community College Websites
|
| Presenter(s):
|
| Megan Brown, American Institutes for Research, mbrown@air.org
|
| Jonathan Margolin, American Institutes for Research, jmargolin@air.org
|
| Shazia Miller, American Institutes for Research, smiller@air.org
|
| James Rosenbaum, Northwestern University, j-rosenbaum@northwestern.edu
|
| Abstract:
This paper presentation describes the processes for creating a rubric and evaluating websites to assess how well community college websites inform prospective students. The rubric was used to evaluate the extent to which websites answer basic questions of prospective students who are selecting a program of study. The rubric items and criteria captured the needs and values of the students, and the approach captures website usability in terms of informational needs. The rubric can serve as a capacity building tool for colleges to judge the usefulness of their websites or other resources to their target constituency, and this approach can be applied in a wide variety of evaluations. In the presentation we will detail the rubric development and evaluation processes, discuss how we represented student values, and share recommendations for similar efforts. Although focused primarily on methodology, we will also present findings and webpage examples of best practices in informing prospective students.
|
|
Measuring Fidelity of Implementation in Community College Research
|
| Presenter(s):
|
| Oscar Cerna, MDRC Research Organization, oscar.cerna@mdrc.org
|
| Alissa Gardenhire-Crooks, MDRC Research Organization, alissa.gardenhire@mdrc.org
|
| Phoebe Richman, MDRC Research Associate, phoebe.richman@mdrc.org
|
| Abstract:
As a research organization, MDRC has developed standardized procedures for measuring the fidelity of implementation of community college programs. Steps to measure objectives, intended outcomes, program activities, operational feasibility, and stakeholder experiences are part of MDRC's framework for conducting implementation fidelity research at community colleges. This paper highlights the application of implementation fidelity procedures in two distinct evaluation studies: a small-scale pilot study and a large-scale, multi-site random assignment demonstration. Qualitative measures for intended short- and long-term outcomes, program activities, term-to-term progress, and program scale and intensity in both studies are discussed. Procedures for involving community college stakeholders in discussions about measuring program components and evaluation activities are also highlighted. The paper will further inform how measuring fidelity of implementation can be used to evaluate community college programs and to provide college practitioners with useful evaluation tools.
|
|
A Formative Evaluation of a Dual Admission Program Between a Community College and Elite State University
|
| Presenter(s):
|
| Jason Taylor, University of Illinois at Urbana-Champaign, taylor26@illinois.edu
|
| Abstract:
This paper presents findings from a utilization-focused evaluation of a new dual admission and dual enrollment program designed by a community college and an elite state university. The paper presents the theoretical framework, evaluation approach, evaluation methods, and findings. Using a utilization-focused approach, the evaluation sought to determine the program's goals and components from multiple stakeholder perspectives. Semi-structured interviews with program faculty, staff, and administrators at both institutions reveal a variety of program goals and components, with mixed perceptions of the potential value of program components for students and for the institutions. In addition, a number of unexpected consequences have created a new set of challenges and opportunities for the institutions.
|
| | | |
|
Session Title: Evaluation in the Policy Change Process
|
|
Multipaper Session 841 to be held in San Clemente on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Dwayne Campbell,
University of Rochester, dwayne.campbell@warner.rochester.edu
|
|
Tracking State Policies Related to Food and Active Living
|
| Presenter(s):
|
| Martha Quinn, University of Michigan, marthaq@umich.edu
|
| Laurie Carpenter, University of Michigan, lauriemc@umich.edu
|
| Abstract:
Between 2007 and 2010, the Center for Managing Chronic Disease at the University of Michigan tracked state-level policy activity related to improving school food, local food systems, and efforts to increase active living in the states where grantees of the W.K. Kellogg Foundation's Food & Community program are located. The objectives were to describe the policy context and political climate in grantee states, identify states where policy change was underway, and assess changes in policy enactment over time. Uses of policy tracking in the evaluation included providing context for collaborative efforts, enhancing strategic learning, and gaining insight into a larger social movement related to food and active living. Conference participants will 1) discover the results of the four years of data gathering and 2) learn how policy tracking can enhance their own evaluation efforts.
|
|
Evaluating Intervention Effectiveness during the Policy Change Process: Communities Putting Prevention to Work (CPPW) and Prevention Research Centers (PRC) Leveraging Community Engagement Resources
|
| Presenter(s):
|
| Lisle Hites, University of Alabama, Birmingham, lhites@uab.edu
|
| Jessica Wakelee, University of Alabama, Birmingham, jwakelee@uab.edu
|
| Abstract:
Policy change is a goal for many different projects. This paper will present an example in which two projects overlapped in a given community, allowing one to build synergistically upon the other and leverage shared resources to effect a policy change for the betterment of the community. During this process, the combined evaluation team for the two programs tracked the efforts of the primary policy change agent, the CPPW, as well as the shared response efforts that leveraged existing community relationships, which were ultimately engaged to make the effort successful. A novel adaptation of the Homeland Security Exercise and Evaluation Program will be presented, and its effectiveness will be discussed in terms of measuring and facilitating policy change and evaluation.
|
|
From Research To Advocacy: Building On Lessons Learned To Advance Smoke-Free Air Legislation In Louisiana
|
| Presenter(s):
|
| Jenna Klink, Louisiana Public Health Institute, jklink@lphi.org
|
| Nikki Lawhorn, Louisiana Public Health Institute, nlawhorn@lphi.org
|
| Snigdha Mukherjee, Louisiana Public Health Institute, smukherjee@lphi.org
|
| Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
|
| Abstract:
In 2009-2010, separate air monitoring and saliva cotinine studies were conducted to support policy efforts to expand current smoke-free legislation to bars and casinos, making all of Louisiana's workplaces smoke-free. While the proposed expansion was not passed during the 2010 state legislative session, the evaluation and research team for the Louisiana Campaign for Tobacco-Free Living learned important lessons that guided the research plan for 2011: 1) time data collection activities so that decision-makers receive research results months, or at least weeks, before the vote; 2) continue building on and strengthening previous studies in order to have new information to use in campaigns; and 3) localize studies in order to tailor regional social marketing campaigns that increase support for local ordinances. Our experience illustrates the value of evaluating the research process used to advance health policy.
|
|
Leadership, Bipartisanship and Dual Strategies: Lessons Learned from an Evaluation of a California Governance Reform Initiative
|
| Presenter(s):
|
| Jacqueline Berman, Mathematica Policy Research, jberman@mathematica-mpr.com
|
| Hannah Betesh, Social Policy Research Associates, hannah_betesh@spra.com
|
| Abstract:
Reform of public policy processes, particularly in the current fiscal climate, represents a fundamental challenge requiring flexibility, strategic action, and responsive reform proposals. Such proposals must appeal to the broadest cross-section of constituents yet be bold enough to engender change. How can reform efforts reconcile these often competing demands to produce change in a crisis? A recent evaluation of a governance reform initiative charged with addressing California's troubled policy environment identified areas of both significant challenge and possibility in setting, enacting, and implementing a reform agenda while building broad support and providing bold leadership. Lessons and strategies that emerged included the need to engage grass-roots and 'grass-tips' leaders and the advantages of simultaneously pursuing ballot-box activism and legislative action. We will also discuss methodologies that can be used to identify and strengthen key strategies of governance reform.
|
| | | |
|
Session Title: Engaging Participants in the Evaluation Process: A Participatory Approach
|
|
Multipaper Session 842 to be held in San Simeon A on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| David Fetterman,
Fetterman & Associates, fettermanassociates@gmail.com
|
| Discussant(s): |
| David Fetterman,
Fetterman & Associates, fettermanassociates@gmail.com
|
|
Participatory Evaluation and the Degree of Democratic Discourse in Program Design
|
| Presenter(s):
|
| Helga Stokes, Duquesne University, stokesh@duq.edu
|
| Abstract:
The presentation examines the role of evaluation in two school change attempts: one characterized by user design of improvements, the other proposed by a funder, shaped by the central administration, and seeking buy-in from schools. In the current school change debate, some argue that improvements and changes ought to originate from core stakeholders, mainly teachers and students, and should take into account the local context, building on local strengths and weaknesses. Often, however, a learning program is developed, tested, and then distributed to a large educational system with little user participation; evaluation is done by outside evaluators, and adjustments to the program are conceived outside the school context. What would it look like if the cycle of program design, implementation, and evaluation resided largely in the users' hands? How would the evaluation be implemented and the results be examined and used?
|
|
Theory Building Through Praxis Discourse: A Theory- And Practice-Informed Model Of A Transformative Participatory Approach To Evaluation
|
| Presenter(s):
|
| Michael Harnar, Claremont Graduate University, michaelharnar@gmail.com
|
| Abstract:
The discipline of evaluation is built upon its descriptions, its language, and its approaches towards practice. The development of one such approach - Participatory Evaluation - has focused almost entirely on a practical model, leaving a transformative model mostly neglected. Transformative Participatory Evaluation (T-PE) practitioners find little theoretical or practical guidance in the evaluation literature. This paper redresses this discrepancy by presenting a theory- and practice-informed model of T-PE that will help further discipline development by providing practitioners a model reflecting their work. LeBaron Wallace, Hansen, and Alkin (2009) used qualitative literature analysis to develop logic models reflecting Practical Participatory Evaluation and Transformative Evaluation but overlooked T-PE. The present research extends that work by developing a model of T-PE using both quantitative and qualitative methods that inform one another. The product is a theory- and practice-informed model of a transformative participatory approach to evaluation practice.
|
| |
|
Session Title: Understanding the Dynamics of Dropout Using the National Longitudinal Survey of Youth (NLSY97)
|
|
Multipaper Session 843 to be held in San Simeon B on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Allan Porowski, ICF International, aporowski@icfi.com
|
| Abstract:
In this session, presenters will share findings from three critical areas that cover the lifecycle of a student's decision to drop out of school. First, we plan to share how mathematics achievement (i.e., the most-failed subject in high school) is related to dropping out of school. Next, once a student has dropped out of school, we profile students in the NLSY who returned to school in order to determine what factors make dropout recovery more likely. Given that most dropouts return to some type of schooling, this often-overlooked dynamic in dropout prevention efforts is particularly valuable. Finally, we determine what other life choices may be affected by dropping out of school. The authors intend for these presentations to provide the audience with a deeper understanding of the dropout prevention field, and ultimately hope to encourage more multidisciplinary thinking in this area of study.
|
|
Doing the Math: Can Improving Student Math Achievement Reduce Student Dropout?
|
| Stacey Merola, ICF International, smerola@icfi.com
|
|
Academic performance is one of the most often cited behavioral variables linked to dropping out (Astone & McLanahan, 1991; Jordan et al., 1996; Kaplan et al., 1997; Morris, Ehren, & Lenz, 1991; Roderick, 1994; Rumberger, 1995), with poor academic performance in the early school years associated with dropping out of high school (Ensminger & Slusarcick, 1992; Simner & Barnes, 1991). Math performance in particular is an important correlate of dropout: math is the course that students most often fail (Steen, 2007), and Lee and Burkam (2003) found that grade point average (GPA) in mathematics had significant effects on dropping out. In this presentation, we explore the links between middle and high school math achievement, math course completion, and dropping out, using data from the NLSY97. The implications for dropout prevention interventions and later student outcomes will also be discussed.
|
|
Dropout Recovery: Understanding the Dynamics of Returning to School After Dropping Out
|
| Allan Porowski, ICF International, aporowski@icfi.com
|
| Katerina Passa, ICF International, apassa@icfi.com
|
| Kazuaki Uekawa, ICF International, kuekawa@icfi.com
|
|
Dropout recovery is the process of bringing dropouts back into the school system. Although the risk factors and dynamics of dropping out are well known, the dynamics of returning to school are not. Given that more than half of dropouts eventually receive at least a high school equivalency certificate (GED), it is important to understand what factors predict a return to school. This presentation focuses on developing an understanding of the relationships between students' experiences in school, their decisions to leave school in the middle or high school grades, and their ultimate return to school. Using the 1997 National Longitudinal Survey of Youth (NLSY97) data, we have identified a number of factors that may predict dropout recovery. Results will be presented for a number of subgroups and types of dropouts.
|
|
Associated Consequences of Dropping Out of School: Investigating Behaviors and Cognitive Processes of Dropouts
|
| Allan Porowski, ICF International, aporowski@icfi.com
|
| Kazuaki Uekawa, ICF International, kuekawa@icfi.com
|
|
Students who drop out of school base their decision on a number of factors, including financial, academic, social, safety, and vocational considerations. Oftentimes, a decision to drop out is not in the best interest of a student. Using data from the National Longitudinal Survey of Youth (NLSY97), the study authors present findings on associated consequences of dropping out of school. School dropouts from the NLSY will be profiled, and major decision points in dropouts' lives will be investigated to determine whether they have a propensity for unhealthy decision making. Factors investigated include substance abuse, physical health, lifestyle, and vocational choices.
|
|
Social Support and Student Dropout
|
| Kazuaki Uekawa, ICF International, kuekawa@icfi.com
|
| Stacey Merola, ICF International, smerola@icfi.com
|
| Allan Porowski, ICF International, aporowski@icfi.com
|
| Katerina Passa, ICF International, apassa@icfi.com
|
|
The sociology of education literature has identified the importance of social support for the prevention of student dropout. NLSY97 data provide an opportunity to examine how family support factors, parenting style factors, peer factors, and students' perceptions of school are related to dropping out. The study team proposes three research questions. First, how is the level of social support students receive from parents, schools, and peers related to the likelihood of dropout? Second, based on the theory of intergenerational closure proposed by James Coleman, are students whose parents (mother and father) are supportive of one another less likely to drop out? Finally, is there an advantage for students who have multiple sources of social support from family, school, and peers? The presentation will also discuss the implications of these findings for interventions targeting school climate.
|
| In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
What are the Basic Evaluation Skills and Can You Teach Them Online? |
|
Roundtable Presentation 844 to be held in Santa Barbara on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Presenter(s):
|
| Benjamin Silliman, North Carolina State University, ben_silliman@ncsu.edu
|
| Suzanne Le Menestrel, National Institute for Food and Agriculture, slemenestrel@nifa.usda.gov
|
| Abstract:
The 4-H Science Initiative is developing an online 'basic training' to help county- and state-based staff understand and apply evaluation and program development concepts in community-based settings. Competence in evaluation skills is viewed as critical for pooling high-quality process and outcome data nationwide. Evaluators with the initiative conducted a needs assessment and curriculum review, seeking input from potential training users such as 4-H agents and from experts in 4-H science and evaluation. This roundtable seeks further insight from evaluation professionals on the concepts and skills critical for a working knowledge of, or functional competence in, evaluation and program development. Comments will also be invited on alternative capacity-building strategies such as online learning games or turnkey project software. Discussion will focus on several issues critical to evaluation capacity building overall, such as the incremental and recursive nature of skill-building, links between individual and organizational capacity, and effective measurement of concept mastery and applied abilities.
|
| Roundtable Rotation II:
Maximizing the Impact of Science, Technology, Engineering, and Math Outreach Through Data-Driven Decision Making: An Evaluation Protocol for Science, Technology, Engineering, Math (STEM) Outreach Programs |
|
Roundtable Presentation 844 to be held in Santa Barbara on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Presenter(s):
|
| Jenifer Corn, North Carolina State University, jeni_corn@ncsu.edu
|
| Eric Wiebe, North Carolina State University, eric_wiebe@ncsu.edu
|
| Sharon Schulze, North Carolina State University, sharon_schulze@ncsu.edu
|
| Tracey Collins, North Carolina State University, tracey_collins@ncsu.edu
|
| Alana Unfried, North Carolina State University, alana_unfried@ncsu.edu
|
| Abstract:
MISO (Maximizing the Impact of STEM Outreach through Data-driven Decision-Making) is a campus-wide project funded by the National Science Foundation, housed at North Carolina State University, a land-grant university. This project seeks to determine the collective STEM (Science, Technology, Engineering, Math) impact of the university through its pre-college outreach and extension programs. The MISO project team works to creatively integrate North Carolina's longitudinal student and staff databases with an innovative approach to evaluation across NC State's K-12 STEM education outreach programs, particularly those funded by the NSF. A critical part of this project is the longitudinal assessment of participant outcomes through development and collection of common STEM Outreach Evaluation Protocols and indicators of success. The project will define valid survey methods and measurable outcomes for both teachers and students involved in STEM outreach that can also be utilized, duplicated and shared in the future by any STEM outreach project.
|
|
Session Title: Valuing the Importance of Design in Producing Valuable Outcomes
|
|
Multipaper Session 845 to be held in Santa Monica on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Guili Zhang,
East Carolina University, zhangg@ecu.edu
|
|
Increasing the Rigor of Evaluation Findings By Using Quasi-Experimental Research Designs
|
| Presenter(s):
|
| Elena Kirtcheva, Research for Better Schools, kirtcheva@rbs.org
|
| Abstract:
This paper presents the use of quasi-experimental research designs to rigorously evaluate the impact of a two-year professional development program on gains in content knowledge among elementary and middle school science teachers. It presents a powerful method that enables efficient control of spurious relationships without increasing the cost of evaluations, and provides practical advice to budget-conscious evaluators looking to develop rigorous studies. The paper presents final findings from the professional development intervention to illustrate the use of the Non-Equivalent Dependent Variable (NEDV) and Non-Equivalent Groups (NEG) research designs in evaluation. It extends an earlier study of the same project by incorporating data and results from the second year of program implementation. Additionally, the paper includes a new section with practical guidance for evaluators considering the adoption of the presented methodology.
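For readers unfamiliar with the NEG design, the sketch below shows one common way such a design is analyzed, an ANCOVA that adjusts the treatment/comparison contrast for pretest differences; this is a generic illustration on simulated data with invented variable names, not the analysis reported in the paper.

```python
# A generic sketch of a Non-Equivalent Groups (NEG) analysis on
# simulated pretest/posttest data; all names and values are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "treated": np.repeat([0, 1], n // 2),   # comparison vs. PD participants
    "pretest": rng.normal(50, 10, n),       # content-knowledge pretest
})
# Simulate a posttest with a built-in 5-point program effect
df["posttest"] = 0.8 * df["pretest"] + 5 * df["treated"] + rng.normal(0, 5, n)

# ANCOVA: regress posttest on group membership, adjusting for pretest
# differences between the non-equivalent groups
model = smf.ols("posttest ~ treated + pretest", data=df).fit()
print(model.params["treated"])  # estimated program effect
```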
|
|
Evaluation in the Context of Lifecycles: 'A Place for Everything, Everything in its Place'
|
| Presenter(s):
|
| Jennifer Urban, Montclair State University, urbanj@mail.montclair.edu
|
| Monica Hargraves, Cornell University, mjh51@cornell.edu
|
| Claire Hebbard, Cornell University, cer17@cornell.edu
|
| Marissa Burgermaster, Montclair State University, burgermaster@gmail.com
|
| William Trochim, Cornell University, wmt1@cornell.edu
|
| Abstract:
One of the most vexing methodological debates of our time (and one of the most discouraging messages for practicing evaluators) is the idea that there is a 'gold standard' evaluation design (the randomized experiment) that is generally preferable to all others. This paper discusses the history of the phased clinical trial in medicine as an example of an evolutionary lifecycle model that situates the randomized experiment within a sequence of appropriately rigorous methodologies. In addition, we propose that programs can be situated within an evolutionary lifecycle according to their phase of development. Ideally, when conducting an evaluation, the lifecycle phase of the program will be aligned with the evaluation lifecycle.
This paper describes our conceptualization of program and evaluation lifecycles and their alignment. It includes a discussion of practical approaches to determining lifecycle phases, the implications of non-alignment, and how an understanding of lifecycles can aid in evaluation planning.
|
|
Is the Better the Enemy of the Good? A Comparison of Fixed- and Random-Effects Modeling in Effectiveness Research
|
| Presenter(s):
|
| James Derzon, Battelle Memorial Institute, derzonj@battelle.org
|
| Ping Yu, Battelle Memorial Institute, yup@battelle.org
|
| Bruce Ellis, Battelle Memorial Institute, ellis@battelle.org
|
| Aaron Alford, Battelle, alforda@battelle.org
|
| Carmen Arroyo, Substance Abuse and Mental Health Services Administration, carmen.arroyo@samsha.hhs.gov
|
| Sharon Xiong, Battelle, xiongx@battelle.org
|
| Abstract:
One of the key challenges in conducting large-scale, multi-site, multilevel evaluation is the inability to conduct randomized control trials (or the lack of a comparison group) to assess the effectiveness of social interventions and to attribute outcomes to the interventions of interest. Innovative use of advanced statistical designs guided by program theory models has been used as an approach to overcome these challenges and has been recognized as a viable way to evaluate the effectiveness of large-scale social interventions. Using data from the National Evaluation of the Safe Schools/Healthy Students Initiative (SS/HS), we examine the knowledge generation consequences of adopting the 'better' random-effects modeling approach over the 'good' fixed-effects method for controlling the moderators and identifying the mediators of SS/HS intervention effectiveness.
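As a rough illustration of the fixed- versus random-effects contrast the authors examine, the sketch below fits both models to simulated multi-site data; the data, variable names, and site structure are invented for illustration and are not the SS/HS evaluation data.

```python
# A hedged sketch contrasting fixed- and random-effects estimates of an
# intervention effect on simulated multi-site data (hypothetical names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
sites, per_site = 20, 30
df = pd.DataFrame({
    "site": np.repeat(np.arange(sites), per_site),
    "exposure": rng.uniform(0, 1, sites * per_site),  # intervention exposure
})
site_effect = rng.normal(0, 2, sites)                  # site-level variation
df["outcome"] = (2.0 * df["exposure"]
                 + site_effect[df["site"].values]
                 + rng.normal(0, 1, len(df)))

# Fixed effects: a dummy variable per site absorbs site-level variation
fe = smf.ols("outcome ~ exposure + C(site)", data=df).fit()

# Random effects: site variation modeled as a random intercept
re = smf.mixedlm("outcome ~ exposure", data=df, groups=df["site"]).fit()

print("fixed-effects estimate: ", fe.params["exposure"])
print("random-effects estimate:", re.params["exposure"])
```

On balanced simulated data like this the two estimates are close; the substantive question the paper raises is what is gained or lost in moderator and mediator analysis when one approach is chosen over the other.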
|
| | |
|
Session Title: Evaluation Designs for Special Topics: Sex Workers, End of Life, and Adverse Childhood Experiences
|
|
Multipaper Session 846 to be held in Sunset on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Human Services Evaluation TIG
and the Health Evaluation TIG
|
| Chair(s): |
| Todd Franke,
University of California, Los Angeles, tfranke@ucla.edu
|
|
Is Full Transition or Change in Behavior a Sufficient Desired Impact for an Intervention Project Targeting Female Sex Workers in Nigeria?
|
| Presenter(s):
|
| Muyiwa Oladosun, MiraMonitor Consulting Limited, fso226@yahoo.com
|
| Charles Toriola, MiraMonitor Consulting Limited, stinkinglyrichie@yahoo.com
|
| Femi Oladosu, MiraMonitor Consulting Limited, feoladosu@yahoo.com
|
| Gloria Affiku, MiraMonitor Consulting Limited, gaffiku@yahoo.co.uk
|
| Abstract:
An AIDS Impact Mitigation (AIM) project targeted Female Sex Workers (FSW) in 15 states in Nigeria between October 2006 and October 2010. The aim was to reduce HIV transmission among FSW through responsible behavior and transition to other trades. The end-of-project evaluation surveyed 547 FSW project beneficiaries, some of whom participated in six focus group discussions (FGD). Key interventions included involvement in a peer education and mentoring (PEM) program and in income-generating activities (IGA). Results showed that the majority (79%) were involved in PEM, and most (93%) had a desire to leave the trade. There were statistically significant differences between those exposed to PEM and those not exposed in terms of changed behavior (98% vs. 25%), condom use (64% vs. 28%), interest in leaving the trade (98% vs. 75%), and risk awareness (33% vs. 6%); other change indicators produced similar results.
|
|
Using Mixed Methods to Evaluate Patient and Family Perceptions of an End of Life Program
|
| Presenter(s):
|
| Sarah Cote, Partners in Care Foundation, sarahdcote@gmail.com
|
| Patricia Housen, Partners in Care Foundation, phousen@picf.org
|
| Yanyang Liuqu, Claremont Graduate University, yanyang.lq@gmail.com
|
| Nelson E Dalla Tor, Presbyterian Intercommunity Hospital, nedallator@hotmail.com
|
| Virag Shah, Presbyterian Intercommunity Hospital, vshah@pih.net
|
| Abstract:
Several limitations exist in measuring seriously ill patients' quality of life through standardized instruments. Mixed methods, combining quantitative and qualitative measures, can overcome these limitations by providing rich qualitative data about patients' and families' perspectives on their care experience. This session will present findings from an evaluation of a home-based palliative care program for seriously ill patients provided by a family care clinic. Qualitative interviews were conducted to enhance the quantitative measures and elicited 1) whether and how the consult assisted patients in decision-making, 2) information gained through the consult, 3) any experience associated with the consult (as compared to prior health care experiences), and 4) general thoughts about the team and the quality of the experience. Results of these semi-structured interviews will be discussed. The presentation highlights the benefits of mixed-method designs for capturing stronger, more complete evidence about key outcomes of interest.
|
|
Using Band-Aids When Major Surgery is Indicated: Evaluating the Adverse Childhood Experiences (ACE) Survey's Impact on Trauma Informed Practices
|
| Presenter(s):
|
| Jennifer Williams, Out of the Crossfire Inc, jenniferwilliams.722@gmail.com
|
| Nancy Rogers, University of Cincinnati, nancy.rogers@uc.edu
|
| Brian Powell, University of Cincinnati, powellbb@mail.uc.edu
|
| Abstract:
During these constrained economic times, small not-for-profit social service agencies are finding it challenging not only to raise money, but also to target their efforts effectively in order to maximize what little funding is available. Traditionally, small non-profits tend to use a broad, less focused approach to providing services to their clientele, but it can be to their advantage to focus on one or two critical aspects of client services. By adopting the approach of the Adverse Childhood Experiences (ACE) Study and using it retrospectively to re-interpret intake data, recommendations for more effective client services can be made and implemented. This understanding can contribute to streamlined services that are more effective and useful to program clientele.
|
| | |
|
Session Title: Evaluating Curricular Implementation and Student Achievement in Mathematics
|
|
Multipaper Session 847 to be held in Ventura on Saturday, Nov 5, 8:00 AM to 9:30 AM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Sarah Brasiel,
Edvance Research Inc, sbrasiel@edvanceresearch.com
|
| Discussant(s): |
| Rebecca Eddy,
Cobblestone Applied Research & Evaluation Inc, rebecca.eddy@cobblestoneeval.com
|
|
Comparing Tools for Evaluating Fidelity of Implementation of School Mathematics Curriculum: Assumptions, Approaches, and Techniques
|
| Presenter(s):
|
| Steven Ziebarth, Western Michigan University, steven.ziebarth@wmich.edu
|
| Abstract:
Current school accountability systems stress the importance of raising student test scores to evaluate teacher effectiveness, yet often fail to investigate the implementation of the curriculum materials (i.e., textbooks) that form the core of teaching and learning in mathematics classrooms. Many new curricula published in the last two decades claim adherence to various national standards documents, but evaluation results are mixed, and the National Research Council's (NRC) 2004 evaluation of those evaluations is unclear regarding their effectiveness. This session compares various research approaches to the study of fidelity of implementation of K-12 mathematics curricula undertaken since the NRC report was issued, and highlights some of the tools that have been used singly or in combination to study this important topic.
|
|
Using Teacher Value Added to Promote Education Values: Preliminary Results of the Rapid Assessment of Teacher Effectiveness (RATE) Instrument in Mathematics
|
| Presenter(s):
|
| John Gargani, Gargani + Company Inc, john@gcoinc.com
|
| Michael Strong, University of California, Santa Cruz, mastrong@ucsc.edu
|
| Abstract:
We present evidence for the validity of the Rapid Assessment of Teacher Effectiveness (RATE), an observational instrument used to predict the effectiveness of elementary math teachers as measured by value-added measures (VAM). Because RATE is used to make predictions, it can be implemented early in the school year to identify which teachers are more or less likely to achieve the instructional success valued by administrators and parents. Educators can use RATE scores to direct support and resources to teachers in strategic ways that reflect their own beliefs and values regarding instructional success. We demonstrate that RATE can predict VAM more accurately than the judgment of educators based on their intuition or other systematic approaches. Some of this evidence comes from a recent article by the presenters published in the Journal of Teacher Education, and some comes from newly completed experiments.
|
|
Large-Scale Comparative Effectiveness Study of Four Elementary School Math Curricula
|
| Presenter(s):
|
| Roberto Agodini, Mathematica Policy Research, ragodini@mathematica-mpr.com
|
| Barbara Harris, Mathematica Policy Research, bharris@mathematica-mpr.com
|
| Abstract:
This large-scale evaluation examines the relative effectiveness of four elementary school math curricula that use varying approaches to math instruction: (1) Investigations in Number, Data, and Space, (2) Math Expressions, (3) Saxon Math, and (4) Scott Foresman-Addison Wesley Mathematics. The evaluation uses an experimental design based on 110 schools from 12 districts, where all four curricula were randomly assigned to schools within each participating district. The study compares average student math achievement gains to determine the relative effects of the curricula. This session presents causal evidence of the relative curriculum effects on first- and second-grade math achievement during the first year of curriculum implementation. At the first-grade level, the results favored Math Expressions; at the second-grade level, they favored Math Expressions and Saxon. Correlational (mediational) analyses also were conducted to examine whether instructional practices explain the differences in curriculum effects.
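For readers curious about the mechanics, the sketch below shows one simple way to implement the within-district random assignment the design describes: each district's schools are shuffled and the four curricula dealt out in turn. The district and school names, and the helper function itself, are hypothetical illustrations, not the study's actual procedure.

```python
# A minimal sketch of blocked (within-district) random assignment of
# four curricula to schools; names and the helper are hypothetical.
import random

CURRICULA = ["Investigations", "Math Expressions", "Saxon", "SFAW"]

def assign_within_district(schools_by_district, seed=42):
    rng = random.Random(seed)
    assignment = {}
    for district, schools in schools_by_district.items():
        # Shuffle this district's schools, then deal curricula in turn
        shuffled = rng.sample(schools, len(schools))
        for i, school in enumerate(shuffled):
            assignment[school] = CURRICULA[i % len(CURRICULA)]
    return assignment

example = {"District A": [f"A-{i}" for i in range(8)],
           "District B": [f"B-{i}" for i in range(12)]}
print(assign_within_district(example))
```

Blocking by district in this way guarantees each curriculum appears in every district, so district-level differences do not confound the between-curriculum comparisons.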
|
|
Assessing the Role of Career and Technical Education Coursework in Math Achievement Growth
|
| Presenter(s):
|
| Glenn D Israel, University of Florida, gdisrael@ufl.edu
|
| Alexa Lamm, University of Florida, alamm@ufl.edu
|
| Abstract:
Since most human capital is generated through formal education, the quality of public K-12 education is of paramount importance in preparing youth for college and employment. Career and Technical Education (CTE) programs involve over 9.2 million of the U.S.'s 14.9 million secondary students. CTE students are often perceived, however, to be less prepared for math-oriented college and careers. This evaluation used a multi-level design to examine CTE student math achievement over time. Preliminary findings show that students concentrating on a specific CTE occupational area start with higher math scores and show greater growth in math achievement than students who take only one or two CTE courses. By using rigorous multi-level modeling, the evaluator was able to identify the specific variables influencing math achievement and recommend school-level changes to increase it. The usefulness of examining multiple levels when studying K-12 education issues is also discussed.
|
| | | |