|
Session Title: The Holy Grail Of Advocacy Evaluation: Connecting Advocacy to Long-Term Impact
|
|
Panel Session 331 to be held in Panzacola Section F1 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Beth Rosen, Humane Society of the United States, brosen@hsus.org
|
| Susan Hoechstetter, Alliance for Justice, shoech@afj.org
|
| Discussant(s):
|
| Philip Setel, Bill & Melinda Gates Foundation, philip.setel@gatesfoundation.org
|
| Abstract:
The Holy Grail of advocacy evaluation may be learning how the lives of people intended to benefit from advocacy are affected over time. In this session, presenters will discuss how they faced time constraints, high costs, and other challenges to connecting advocacy/community organizing to long-term improvements in social welfare.
The first presenter will discuss the methodology used to assess financial savings accrued in New Mexico through nonprofit work, citing findings from her recently completed study "Strengthening Democracy, Increasing Opportunities: Impacts of Advocacy, Organizing, and Civic Engagement in New Mexico." Different approaches to assessing long-term impact will be discussed by a Florida funder who worked in collaboration with national and other local funders to define and measure the long-term impact of advocacy/community organizing accomplished by grantees of the Fund. Before opening the session to a discussion with all participants, evaluation staff from the nation's largest foundation will react to the methodologies described.
|
|
Methodology for Determining Long-Term Advocacy Impact
|
| Lisa Ranghelli, National Committee for Responsive Philanthropy, lranghelli@ncrp.org
|
|
The National Committee for Responsive Philanthropy completed the first in its series of studies on the long-term impact of advocacy and community organizing in 2008. Its documented findings included that every dollar invested in advocacy garnered $157 in benefits for New Mexico communities, and that the work has produced numerous economic benefits for the state. The author of "Strengthening Democracy, Increasing Opportunities: Impacts of Advocacy, Organizing, and Civic Engagement in New Mexico" will discuss how these findings and more were obtained.
When the methodology was presented in 2009, it was both acclaimed as groundbreaking and noted for omitting factors external to the advocates' work that might have affected the impacts. The evaluator/report author will also share the strengths and challenges of the New Mexico study and the methodological changes being planned for the next state study.
|
|
|
Funder Perspectives on Measuring Long Term Impact
|
| Charisse Grant, Dade Community Foundation, charisse.grant@dadecommunityfoundation.org
|
|
In 2004, the Dade Community Foundation partnered with the Ford Foundation and other Florida funders to establish the Fund for Community Organizing to build the capacity of community organizing organizations. For this session, the speaker will share how she, the foundation, and the organizing funding collaborative defined and measured the long-term impact of advocacy and organizing in Dade County. For example, two of the Fund's grantee organizations obtained a policy agreement from Miami-Dade County's housing agency to provide 1-for-1 replacement of all 850 low-income housing units the county had recently demolished. The county, however, did not know where to locate displaced residents to make sure they had received the housing. How the Fund determined the impact of these and other policy gains will be discussed, with a focus on the challenges of assessing impact in a way that is useful but not overly burdensome for nonprofit partners (grantees).
| |
|
Session Title: Longitudinal Analysis of Juvenile Justice Outcomes in the Evaluation of Systems of Care for Youth With Serious Emotional Disturbance
|
|
Multipaper Session 332 to be held in Panzacola Section F2 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Human Services Evaluation TIG, the Crime and Justice TIG, and the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Adam Darnell, EMSTAR Research Inc, adam_darnell@yahoo.com
|
| Abstract:
Systems of Care (SOC) are coordinated networks of community-based services and supports organized to meet the challenges of youth with Serious Emotional Disturbance (SED) and their families. The SOC model is currently operating in five sites in Georgia under the name KidsNet. Through a partnership with the Department of Juvenile Justice (DJJ), KidsNet has augmented its evaluation by accessing DJJ data for youth enrolled in the SOC. DJJ data have provided much-needed information on the 'non-KidsNet' condition, in this case DJJ contacts for youth prior to their enrollment in KidsNet. Access to these data has allowed KidsNet to avoid the considerable costs associated with collecting data from comparison youth or collecting data prior to or following KidsNet enrollment. Longitudinal change in juvenile justice outcomes was examined using growth modeling and survival analysis. Results and the comparative strengths and weaknesses of each analytic approach will be discussed.
|
|
Application of a Growth Model to Estimate Change in Offending Rates Over Time Among Youth in a System of Care
|
| Jeremy Lingle, EMSTAR Research Inc, jeremy.lingle@yahoo.com
|
|
A primary purpose of Georgia's System of Care (SOC) is to decrease youth involvement with the justice system. Historical data on contacts with the juvenile justice system for KidsNet youth were obtained via a data sharing agreement between Georgia's SOC and the Department of Juvenile Justice. Growth modeling was applied to these data, with repeated observations clustered within individuals. The analysis produced estimates of growth in the number of offenses over time and of the change in the growth rate attributable to SOC services. Findings from this analysis suggest that the rate of offense accumulation could be effectively eliminated in less than one year of participation in SOC. The potential mediating relationships of rapid referrals and youth demographic characteristics upon growth rates will be discussed. Discussion will also address presenting longitudinal data analyses to a wider audience of evaluation stakeholders.
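A growth model of this kind can be sketched as a mixed-effects regression on a person-period data set. The sketch below is illustrative only: it uses simulated data rather than the DJJ records, and the variable names (`offenses`, `enrolled`), wave counts, and effect sizes are assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_youth, n_waves = 100, 6

# Person-period file: repeated observations clustered within youth
df = pd.DataFrame({
    "youth": np.repeat(np.arange(n_youth), n_waves),
    "time": np.tile(np.arange(n_waves), n_youth),
})
df["enrolled"] = (df["time"] >= 3).astype(int)  # hypothetical SOC enrollment at wave 3

# Simulate cumulative offenses: positive growth before enrollment,
# flattened growth after enrollment
intercepts = rng.normal(2.0, 0.5, n_youth)
df["offenses"] = (
    intercepts[df["youth"].to_numpy()]
    + 0.4 * df["time"]
    - 0.35 * df["enrolled"] * df["time"]
    + rng.normal(0, 0.3, len(df))
)

# Random-intercept growth model; the time:enrolled term estimates the
# change in the growth rate attributable to SOC services
model = smf.mixedlm("offenses ~ time * enrolled", df, groups=df["youth"])
result = model.fit()
print(result.params[["time", "time:enrolled"]])
```

In a sketch like this, a positive `time` coefficient recovers the pre-enrollment growth in offenses and a negative `time:enrolled` coefficient the post-enrollment flattening; the full analysis would also allow random slopes per youth.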
|
|
A Discrete-time Survival Analysis of Juvenile Offending in the Evaluation of Systems of Care
|
| Adam Darnell, EMSTAR Research Inc, adam_darnell@yahoo.com
|
|
Historical data on contacts with the juvenile justice system for youth in Georgia's System of Care were obtained via a data sharing agreement with the Department of Juvenile Justice. These data were analyzed using discrete-time survival analysis. Outcomes were time from first to second offense and time from first to second detention placement. Lagged effects of a time-varying predictor representing SOC enrollment were examined, controlling for a range of participant characteristics. Results indicated a significant decrease in the likelihood of a second offense following SOC enrollment. Discussion will focus on the strengths and weaknesses of the survival analysis approach as opposed to the hierarchical linear modeling approach used in the other paper in this session. Discussion will also address lessons learned from the process of interagency data sharing.
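Mechanically, a discrete-time survival analysis is a logistic regression on a person-period file in which each row records whether the event occurred in that period. The sketch below is a minimal illustration with simulated data; the hazard rates, period counts, and variable names are assumptions for illustration, not values from the Georgia evaluation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for youth in range(300):
    enroll_at = int(rng.integers(1, 8))      # hypothetical SOC enrollment period
    for period in range(1, 13):              # discrete periods after the first offense
        enrolled = int(period > enroll_at)   # time-varying predictor
        hazard = 0.08 if enrolled else 0.20  # assumed drop in hazard after enrollment
        event = int(rng.random() < hazard)   # did a second offense occur this period?
        rows.append((youth, period, enrolled, event))
        if event:
            break  # person-period records stop at the event (or at censoring)

df = pd.DataFrame(rows, columns=["youth", "period", "enrolled", "event"])

# Discrete-time hazard model: logit of the event indicator on period and enrollment
fit = smf.logit("event ~ period + enrolled", df).fit(disp=False)
print(fit.params["enrolled"])  # negative coefficient => lower hazard after enrollment
```

A real specification would typically replace the linear `period` term with period dummies (a nonparametric baseline hazard) and add the participant-characteristic controls the paper describes.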
|
|
Session Title: External Evaluations of Coalitions: A Model for Supporting Participatory Assessment
|
|
Demonstration Session 333 to be held in Panzacola Section F3 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Presenter(s): |
| Kristin Hobson, Indiana University, khobson@indiana.edu
|
| Mindy Hightower King, Indiana University, minking@indiana.edu
|
| Abstract:
When funding agencies require grantees to form coalitions and evaluate them, understanding and utilizing a model for externally facilitated participatory coalition evaluations is critical for evaluators. This workshop will present a model for assessing coalitions using a participatory approach that will equip participants with a framework to reference when evaluating coalitions. The model draws upon existing frameworks and practical strategies such as the Model for Collaborative Evaluations developed by Liliana Rodriguez-Campos, Steps in the Community Initiative Evaluation Process presented by Nina Wallerstein, Michele Polascek, and Kristine Maltrud, as well as practical strategies used in the evaluations of the Indiana Cancer Consortium and Indiana Joint Asthma Coalition. Strengths of the model include the incorporation of existing frameworks and practical strategies, as well as adaptations for application at any point in evaluating coalitions. One limitation is that the model has yet to be tested on program structures outside of public health coalitions.
|
|
Session Title: Contextual Challenges for Evaluation in Methamphetamine Drug Abuse Treatment
|
|
Multipaper Session 334 to be held in Panzacola Section F4 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
|
| Chair(s): |
| Katrina Bledsoe, Walter R McDonald and Associates Inc, kbledsoe@wrma.com
|
|
Formative Program Evaluation: Asking Mothers Addicted to Methamphetamine About Their Experiences
|
| Presenter(s):
|
| Debra Harris, California State University Fresno, dharris@csufresno.edu
|
| Abstract:
This presentation will explain the use of a formative program evaluation to assess a drug abuse inpatient treatment program. Contextualizing the women's experiences was an important aspect of this evaluation, which was intended to improve the services provided. Lessons learned by the program evaluator included the importance of keeping both information and the mothers' identities confidential, the use of triangulation, the service provider's use of the evaluation results, and the importance of feedback to the mothers. Lessons also included the length of time required for in-person interviews, data analysis, and follow-up telephone contact. For those who evaluate programs related to drug abuse or social service providers, the presentation will explore the mothers' journey through treatment for methamphetamine addiction.
|
|
Evaluating the Methamphetamine Evidence-based Treatment and Healing Program in a Rural Community Mental Health Center Context: Challenges, Rewards and Sustainability
|
| Presenter(s):
|
| Kathryn Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
|
| Freida Outlaw, Tennessee Department of Mental Health & Development, freida.outlaw@state.tn.us
|
| Jules Marquart, Centerstone Research Institute, jules.marquart@centerstone.org
|
| Gisoo Barnes, Centerstone Research Institute, gisdoo.barnes@centerstone.org
|
| Ajanta Roy, Centerstone Research Institute, ajanta.roy@centerstone.org
|
| Ellen Pogue, Centerstone Research Institute, ellen.pogue@centerstone.org
|
| Abstract:
While both rural and urban areas experience drug abuse problems, the consequences are not the same due to the limited ability of rural areas to offer effective substance abuse treatment that is accessible and sensitive to rural culture. This paper describes the evaluation of a program that uses the Matrix evidence-based model for treating adults 18 years and older who were abusing methamphetamine and/or other emerging drugs in six rural Middle Tennessee counties. In addition to discussing evaluation findings, the presenter will discuss rural contextual barriers that created challenges to enrollment and retention, and ways in which these challenges were overcome. The impact of context on implementation fidelity will be woven throughout the discussion. Finally, suggestions for sustaining substance abuse treatment programs in a rural context will be presented.
|
| |
|
Session Title: Exploring the Terrain: Where Extension Evaluation Meets Foundation Investments in Youth Development
|
|
Multipaper Session 335 to be held in Panzacola Section G1 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Chair(s): |
| Kate Walker, University of Minnesota, kcwalker@umn.edu
|
| Discussant(s): |
| Suzanne Le Menestrel, United States Department of Agriculture, slemenestrel@csrees.usda.gov
|
| Abstract:
Corporate and community foundation partnerships significantly advance Extension's mission to extend knowledge to address real community issues. Foundations benefit when Extension evaluators help them document and understand the impact of the innovative youth development strategies that they fund. This session highlights common themes, challenges, and benefits of the Extension-foundation partnership in evaluation. Two different cases of foundation-funded Extension evaluations illustrate the dynamics of this partnership. The first, funded by a corporate foundation, is a process evaluation of adult volunteer training in inquiry-based learning methods for use with a curriculum designed for middle school age youth. The second, funded by a community foundation, targets the effectiveness of the foundation's grant-making and advocacy efforts in the area of youth violence prevention. Both cases demonstrate how partnering with foundations can enhance Extension's traditional role and call on its faculty to be responsive and nimble to larger systems beyond the University.
|
|
Evaluating a Youth Development Inquiry-Based Curriculum Pilot
|
| Pam Larson Nippolt, University of Minnesota, nippolt@umn.edu
|
|
This paper outlines an approach to building process evaluation into a grant with a corporate foundation. In this example, grantmaker and grantee joined forces both to get a new curriculum into the field and to determine what it takes for volunteers to deliver it effectively with inquiry-based learning methods. The foundation invited 4-H Youth Development to form a six-state partnership to launch the curriculum and to design and evaluate a method for training volunteers to deliver it. The University Extension-based faculty and staff approached the project through distinct roles, providing expertise both to design the delivery and to evaluate the effectiveness of the design. The goals and purposes of the evaluation for the grantmaking organization, the grantee organization, and the funded project are considered key factors in the choices made in the evaluation. Process evaluation methods and results for the multi-state pilot are presented.
|
|
Evaluating a Community Foundation's Grantmaking and Advocacy Efforts To Prevent Youth Violence
|
| Kate Walker, University of Minnesota, kcwalker@umn.edu
|
|
This paper focuses on a community foundation's efforts to reduce growing youth violence by funding direct service youth program efforts despite its history of focused funding on systems change projects. In an effort to curb urban youth violence, the foundation awarded over one million dollars to community-based youth-serving organizations involved with intervention strategies focused on empowering young people to break the cycle of violence. In addition, the foundation supported a number of advocacy efforts to inform and influence policy and systems change, including forging strategic alliances, devising innovative strategies, and providing critical leadership. As the foundation sought meaningful ways to reflect upon and learn from its grantmaking and advocacy strategies related to youth violence prevention, it initiated a partnership with Extension youth development evaluators. The resulting retrospective evaluation gauges the progress and impact of these efforts and identifies critical lessons to inform and strengthen future funding decisions and evaluation strategies.
|
|
Session Title: An Evaluation Framework for Building Program Theory on the Basis of Empirical Evidence: An Illustration of Changing Health Behavior in Nicaragua
|
|
Expert Lecture Session 336 to be held in Panzacola Section G2 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Program Theory and Theory-driven Evaluation TIG
|
| Chair(s): |
| Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
|
| Presenter(s): |
| Jerald Hage, University of Maryland, jerryhage@yahoo.com
|
| Abstract:
The objective is to present a framework for building knowledge about the success of alternative programs for changing human behavior that can be applied in many different national contexts. The specific focus is on health behavior, and the programs in the framework are communication, physical capital investments, human capital investments, and social capital investments. The framework examines how much the quantity of each of these programs changes the behavior of women who are pregnant or have young children. In addition, there is a framework for evaluating the relative effectiveness of single organizational interventions versus a network of organizations on the basis of the amount of learning, the quantity of shared resources, and the extent of social mobilization. Collecting the data requires a standardized methodology that includes concrete measures of health behavior and knowledge and a simple sampling system, which is discussed and illustrated with data analysis.
|
|
Session Title: Evaluating Technical Assistance for Early Childhood Systems Initiatives: Testing a Theory of Change
|
|
Panel Session 337 to be held in Panzacola Section H1 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Systems in Evaluation TIG
|
| Chair(s): |
| Joy Sotolongo, North Carolina Partnership for Children, jsotolongo@ncsmartstart.org
|
| Discussant(s):
|
| Julia Coffman, Harvard Family Research Project, jcoffman@evaluationexchange.org
|
| Abstract:
Smart Start's National Technical Assistance Center has provided intensive technical assistance to more than 10 states interested in creating comprehensive, community-based early childhood systems. The long-term goal of building a comprehensive early childhood system is to benefit young children and those who care for them, such as parents and early childhood professionals. Early efforts to evaluate Smart Start's National Technical Assistance Center focused on process measures, such as progress toward creating a shared vision, creating organizational infrastructure, and adopting a unified approach to advocacy. A follow-up evaluation in six states explored progress with systems building efforts and impacts on young children, families, and early childhood professionals. This session will examine the findings from the follow-up evaluation through the lens of A Framework for Evaluating Systems Initiatives, a paper authored by Julia Coffman for the BUILD Initiative, which proposes a theory of change menu for systems initiatives.
|
|
Testing the Theory Using Experiences in Six States
|
| Joy Sotolongo, North Carolina Partnership for Children, jsotolongo@ncsmartstart.org
|
|
Joy Sotolongo will present an overview of Smart Start's National Technical Assistance approach to assisting states' efforts to build comprehensive, community-based early childhood systems, along with related early evaluation findings. The proposed theory of change outlined in A Framework for Evaluating Systems Initiatives will then be presented. Key findings from Smart Start's six-state follow-up evaluation of the impact on young children, families, and early childhood professionals will be applied to the proposed theory of change. Finally, the utility of the proposed theory of change as an evaluation framework for examining results realized by states involved with Smart Start's National Technical Assistance Center will be discussed.
|
|
|
Testing the Theory in South Carolina
|
| Susan DeVenny, South Carolina Office of First Steps, sdevenny@scfirststeps.org
|
|
Susan DeVenny, Executive Director of South Carolina's First Steps to School Readiness, will share a brief summary of South Carolina's approach to creating and implementing a statewide comprehensive, community-based early childhood system. Ms. DeVenny will then respond to the utility of the proposed logic model from the perspective of a state leader of an early childhood initiative.
| |
|
Session Title: Evaluation Managers and Supervisors TIG Business Meeting and Think Tank: How to Manage Evaluations in Lean Economic Times
|
|
Business Meeting Session 338 to be held in Panzacola Section H2 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Evaluation Managers and Supervisors TIG
|
| TIG Leader(s): |
|
Ann Maxwell, United States Department of Health and Human Services, ann.maxwell@oig.hhs.gov
|
|
Sue Hewitt, Health District of Northern Larimer County, shewitt@healthdistrict.org
|
|
Laura Feldman, University of Wyoming, lfeldman@uwyo.edu
|
| Presenter(s): |
| Ann Maxwell, United States Department of Health and Human Services, ann.maxwell@oig.hhs.gov
|
| Laura Feldman, University of Wyoming, lfeldman@uwyo.edu
|
| Abstract:
This session will provide a forum for evaluation managers to discuss their tactics for managing evaluations with fewer resources. In addition, the session will address opportunities that may arise in lean economic times, when concern about fiscal and program accountability is heightened. In other words, how do we do more with less? Break-out topics will include training, cost-effective data collection options, and opportunities for marketing the benefits of evaluation.
|
|
Session Title: Improving Evaluation Questions and Answers: Getting Actionable Answers for Real-World Decision Makers
|
|
Demonstration Session 339 to be held in Panzacola Section H3 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Evaluation Use TIG
|
| Presenter(s): |
| E Jane Davidson, Davidson Consulting Ltd, jane@davidsonconsulting.co.nz
|
| Abstract:
The utility and value of evaluation are largely determined by the questions asked and answered. Explicitly evaluative questions are the best source of actionable answers because they ask directly about quality, value, and/or importance. How can evaluators work with stakeholders to generate a concise set of 'big picture' questions that will guide a truly useful evaluation? What evaluation-specific logic and methodology is needed to answer those questions? And, how can all this help evaluators minimize the incidence of - and respond constructively to - criticisms or accusations of subjectivity or bias? To answer explicitly evaluative questions, it is vital that definitions of quality/value (i.e., "how good is good?") are clear, transparent, and based on sound needs assessment and other relevant sources. This session presents a number of simple but powerful tools and strategies that can make participatory and other evaluations more credible, useful, valid, and a powerful springboard for improvement and change.
|
|
Session Title: Valuing by Whose Values: How a Self-Determination Theory-Based Evaluation Incorporates Program Participant Context to Help Scotts Miracle-Gro Foundation Balance Institutional Values With the Values of the People it Serves
|
|
Demonstration Session 340 to be held in Panzacola Section H4 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Presenter(s): |
| Deborah Wasserman, The Ohio State University, wasserman.12@osu.edu
|
| Sue Hagedorn, Scotts Miracle-Gro Foundation, sue.hagedorn@gmail.com
|
| Tifani Kendrick, Center of Science and Industry, tkendrick@mail.cosi.org
|
| Gerlinde Higgenbotham, Center of Science and Industry, ghigginbotham@mail.cosi.org
|
| Abstract:
Often foundations fund grantees to institute programs that further the foundation's objectives and targeted outcomes. Then they require change-model evaluations that determine success based on the achievement of those outcomes. While beneficial for accountability purposes, this practice can risk creating a self-serving, closed system wherein, tethered by the demands of the funder, grantees perpetuate targeted outcomes whether or not they are in the best interests of the people they serve. In this presentation, representatives from Scotts Miracle-Gro Foundation's Cap Scholars program introduce Self-Determination Theory-Based Logic Models and how use of this kind of model measures the Foundation's targeted outcomes in the context of participants' values and personal environment. The results have created insight and change not only at the program level but also at the heart of the Foundation.
|
|
Session Title: Evaluation in State Government Context: Performance Based Contracts and Master Settlements
|
|
Multipaper Session 341 to be held in Sebastian Section I1 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Government Evaluation TIG
|
| Chair(s): |
| Cheri Levenson, Arizona Department of Commerce, cheril@azcommerce.com
|
|
Challenges in Developing and Implementing an Accountability Framework for Master Tobacco Settlement Monies
|
| Presenter(s):
|
| Karin Chang-Rios, University of Kansas, kcr@ku.edu
|
| Elenor Buffington, University of Kansas, elliebuf@ku.edu
|
| Heather Rasmussen, University of Kansas, hrasmussen@ku.edu
|
| Jacqueline Counts, University of Kansas, jcounts@ku.edu
|
| Abstract:
The Institute has created an accountability framework and is using it to conduct an evaluation of programs receiving Tobacco Master Settlement monies in Kansas. This paper presents information regarding the framework's development. The design process had to meet several key requirements. Evaluators had to attend to the Kansas statute, which requires that programs use best practices in the field, have data to benchmark outcomes, and contain an evaluation component capable of determining program performance. They also had to consider the political priorities of the Governor and the Children's Cabinet. To enhance the quality of the evaluation, evaluators had to ensure the framework was guided by the Program Evaluation Standards. Finally, they had to develop a framework flexible enough to judge programs with disparate goals, evaluation foci, or funding levels. Implementation of the framework and challenges associated with its development will be discussed in the paper.
|
|
Paying for Results: Administering Performance Based Contracts
|
| Presenter(s):
|
| Prashant Rajvaidya, Mosaic Network Inc, prash@mosaic-network.com
|
| Michael Bates, Mosaic Network Inc, mbates@mosaic-network.com
|
| Abstract:
As pressure has increased at the federal, state, and local levels for funding to be tied to achievement, government programs have in some cases moved toward performance-based contracts in which funding is tied directly to the attainment of desirable outcomes. The State of Hawaii's Office of Community Services (part of the Department of Labor and Industrial Relations) has implemented performance-based contracts for its federally funded Employment Core Services Program. This has moved the program from a model of 'payment for activities' to one of 'payment for results,' in which programs are reimbursed at set rates based on clients achieving certain employment-related milestones. We present early results from this program and the methodologies used to achieve them, including the evaluation plan, the technology used to tie it all together, and feedback from service providers across the state.
|
| |
|
Session Title: Methodology and Results of a Comprehensive Evaluation of Communities in Schools of Texas
|
|
Multipaper Session 342 to be held in Sebastian Section I2 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Heather Clawson, ICF International, hclawson@icfi.com
|
| Abstract:
In this presentation, study authors present the methodology and results of a multi-level, multi-method evaluation of Communities In Schools of Texas. Communities In Schools of Texas is a dropout prevention program and is part of the nation's largest stay-in-school network. CIS of Texas, established in 1979, operates 27 local CIS programs and provides services in more than 600 schools.
|
|
Examining School-Based Intervention Programs in a Multilevel Context: Communities in Schools in Texas
|
| Heather Clawson, ICF International, hclawson@icfi.com
|
| Jing Sun, ICF International, jsun@icfi.com
|
| Aikaterini Passa, ICF International, apassa@icfi.com
|
|
In this presentation, study authors provide an overview of student-level outcomes from the recently completed Communities In Schools (CIS) of Texas evaluation. The presentation will focus on results from hierarchical linear modeling (HLM), which helps evaluators overcome issues of dependence and allows them to examine subjects while controlling for group-level membership (i.e., schools). Using the CIS of Texas evaluation, this presentation focuses on several HLM models, including propensity-score-matched (PSM) student comparisons and CIS-only student models over time. Findings from this study suggested that while CIS students did not perform well on behavioral and academic outcomes compared to their non-CIS peers, there were several significant individual-level predictors of success among CIS students. Additionally, certain school-level and CIS affiliate-level characteristics were significantly associated with several positive student-level outcomes.
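As a rough illustration of the matched-comparison step, the sketch below builds a propensity-score-matched comparison group with scikit-learn on simulated data. The covariates, confounding structure, and one-to-one nearest-neighbor matching rule are assumptions for illustration; they are not the specification used in the CIS of Texas evaluation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000
# Hypothetical covariates that drive both CIS participation and outcomes
risk = rng.normal(size=n)
attendance = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.8 * risk - 0.5 * attendance)))
cis = rng.random(n) < p          # CIS participation indicator
X = np.column_stack([risk, attendance])

# Step 1: estimate propensity scores from the covariates
ps = LogisticRegression().fit(X, cis).predict_proba(X)[:, 1]

# Step 2: match each CIS student to the non-CIS student with the
# closest propensity score (one-to-one, with replacement)
nn = NearestNeighbors(n_neighbors=1).fit(ps[~cis].reshape(-1, 1))
_, idx = nn.kneighbors(ps[cis].reshape(-1, 1))
matched = np.flatnonzero(~cis)[idx.ravel()]

# Matching should shrink the covariate gap between the groups
raw_gap = abs(risk[cis].mean() - risk[~cis].mean())
matched_gap = abs(risk[cis].mean() - risk[matched].mean())
print(raw_gap, matched_gap)
```

The matched comparison youth would then feed into the HLM models as the non-CIS condition, with covariate balance checks (as above) confirming the match quality.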
|
|
Communities in Schools of Texas: An Examination of the Effects of Service Dosage on Students Over Time
|
| Jing Sun, ICF International, jsun@icfi.com
|
| Yvette Lamb, ICF International, ylamb@icfi.com
|
|
For over thirty years, Communities In Schools (CIS) of Texas has worked to address the needs of students who are at risk of dropping out of school. During this presentation, which covers the results of the CIS of Texas evaluation, the authors will discuss relationships that emerged between CIS dosage and student outcomes. Examining students who received CIS services in the 2005-06 school year, the authors present the results of descriptive analyses, trend plots, and hierarchical linear models. Findings suggest that the number of hours of CIS programming a student receives was significantly related to a number of academic and behavioral outcomes over time. Insights and implications from the Texas CIS evaluation will be discussed, and future directions for research will be examined.
|
|
Session Title: An Evaluation of Georgia's Performance Learning Centers: Conducting Multi-Method Evaluations of Alternative School Settings
|
|
Expert Lecture Session 343 to be held in Sebastian Section I3 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Presenter(s): |
| Felix Fernandez, ICF International, ffernandez@icfi.com
|
| Julie Gdula, ICF International, jgdula@icfi.com
|
| Linda Kelley, Communities In Schools of Georgia Inc, lkelley@cisga.org
|
| Allan Porowski, ICF International, aporowski@icfi.com
|
| Kelle Basta, ICF International, kbasta@icfi.com
|
| Abstract:
A Performance Learning Center (PLC) is an alternative public school for students who are not thriving in traditional schools and are at risk of dropping out. The first two PLCs opened in Georgia in the 2002-2003 school year. ICF International has conducted an evaluation of Georgia's PLCs using a multi-method approach consisting of both quantitative and qualitative measures. The team developed and conducted three components of the evaluation: a quasi-experimental study using propensity score matching, an online survey, and a case study of two PLC sites. This presentation will include a description of the Georgia PLC model, the evaluation methodology, challenges encountered, results of the evaluation, and lessons learned. It will also discuss how a multi-method approach helps to determine (1) whether an alternative school model is working and (2) how and why it influences student outcomes.
|
|
Session Title: Enhancing the Strategic Management Process Through the Use of Evaluation Methods
|
|
Multipaper Session 344 to be held in Sebastian Section I4 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Business and Industry TIG
|
| Chair(s): |
| Michael Scriven, Claremont Graduate University, mjscriv@gmail.com
|
| Abstract:
Research has shown that companies (both for-profit and non-profit) that engage in formal strategic planning tend to be more successful than companies that do not. The strategic management process should therefore be an integral part of every company or organization. Strategy evaluation is identified as one of the three phases of this process; however, the evaluative nature of this phase has limitations that could be addressed using principles from evaluation methodology. This multi-paper session will focus on the evaluative nature of the strategic management process, with special emphasis on the widely-used Fred David strategic model, and will also highlight the similarities and differences between this model and the Key Evaluation Checklist (KEC), a practical tool that can be used to conduct evaluations. The session will end with suggestions on how evaluation methods could potentially improve the strategic management process.
|
|
The Evaluative Nature of the Strategic Management Process
|
| Michelle Woodhouse Jackson, Western Michigan University, mwoodhousej@gmail.com
|
|
The strategic management plan is a management tool that helps companies perform better by focusing on organizational goals and by assessing and adjusting established goals to adapt to an ever-changing, dynamic environment. The strategic management process is vital in any type of organization, and various models exist for developing, implementing, and evaluating strategies. This presentation will begin by describing the strategic management process (what it is, why it is used, and the benefits of using it). The evaluative nature of the process will then be illustrated using a widely used strategic management model developed by Fred David, followed by a discussion of the importance of bridging the gap between the fields of evaluation and business in order to improve the strategic management process.
|
|
Using Evaluation Methods to Improve the Strategic Management Process
|
| Nadini Persaud, University of West Indies, npersaud@uwichill.edu.bb
|
|
This presentation will begin with a brief overview of the components of the Key Evaluation Checklist (KEC) developed by Dr. Michael Scriven. The KEC is a user-friendly, practical tool that can be adapted for use in conducting evaluations. The KEC will be used as a barometer to compare the evaluation process outlined in Fred David's strategic management model, a widely-used instrument in corporate America. A comparative analysis between Scriven's KEC and Fred David's model will be presented, followed by a discussion of the relevant KEC elements missing from the strategic management model. Suggestions will also be given on how to incorporate evaluation theory and methods in the strategy evaluation phase of the strategic management process in order to improve the evaluation approach currently used by major companies.
|
|
Session Title: A Conversation on the Sociology of Evaluation
|
|
Expert Lecture Session 346 to be held in Sebastian Section K on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Presidential Strand
|
| Chair(s): |
| Jody Fitzpatrick, University of Colorado Denver, jody.fitzpatrick@ucdenver.edu
|
| Presenter(s): |
| Peter Dahler-Larsen, University of Southern Denmark, pdl@sam.sdu.dk
|
| Discussant(s): |
| Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
|
| Abstract:
Although often portrayed as primarily a logic for investigating and demonstrating the value of programs, policies, and practices, evaluation is better grasped as a situated argument about value. Argument here refers to the idea of building a case in ordinary language for considering the particular merit, worth, or significance of something using both evidence and sound reasoning. Saying that an argument is always situated signifies that the argument is always contextualized in two senses: First, the context determines, in part, what constitutes reasonable data, criteria, and evidence for an evaluative claim. The value of a program, policy, or practice is always studied and framed within a particular context of debate, conflict of opinion, value preferences, climate of criticism, and the relative merits of those opinions, values, preferences, and so on. Second, evaluation arguments are always indexed to some particular context of contentious ideas held by clients and stakeholders, and it is in this context that an evaluator aims to make a persuasive case for his or her claims. In this expert lecture, Peter Dahler-Larsen from Denmark and Thomas Schwandt from the U.S. engage in a conversation about this idea of evaluation as situated argument. They explore how the role of the modern welfare state, socio-political cultures of accountability, civil society traditions of democratic deliberation and citizen responsibilities, and the like help situate the kinds of evaluative arguments that are considered legitimate in Denmark and the U.S.
|
|
Session Title: Evaluation Efforts in Global AIDS Public Health
|
|
Multipaper Session 348 to be held in Sebastian Section L2 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Thomas Chapel, Centers for Disease Control and Prevention, tkc4@cdc.gov
|
| Discussant(s): |
| Thomas Chapel, Centers for Disease Control and Prevention, tkc4@cdc.gov
|
| Abstract:
Strengthening basic program evaluation capacity in international settings requires coordination of multiple agencies, sectors, and parties. This may make evaluation appear complex and challenging to country teams, who are already juggling multiple competing priorities, and can deter them from making program evaluation a routine and systematic activity. When evaluation is undertaken in this context, factors such as stakeholder buy-in, organizational and national culture, and human and financial resources need to be considered. This session discusses evaluation initiatives in which staff from the CDC's Global AIDS Program, along with colleagues from other agencies, provided technical assistance to partners or grantees in international settings. The presentations will describe how each evaluation effort evolved, the strategies used to engage stakeholders, barriers to and facilitators of the process, partners' perceptions of how it has changed their evaluation practice, and lessons from the CDC experience.
|
|
Program Evaluation in Action
|
| Yamir Salabarria-Pena, Centers for Disease Control and Prevention, ycs8@cdc.gov
|
| Roger Myrick, University of California San Francisco, rogermyrick@yahoo.com
|
| Laura Porter, Centers for Disease Control and Prevention, lgp4@cdc.gov
|
| Lela Baughman, ICF Macro, lela.n.baughman@macrointernational.com
|
| David Cotton, ICF Macro, david.a.cotton@macrointernational.com
|
| Warren Passin, ICF Macro, warren.f.passin@macrointernational.com
|
| Paulyne Ngalame, ICF Macro, pdn8@cdc.gov
|
|
A program evaluation (PE) capacity building initiative was developed by CDC/GAP in response to requests from field staff for more PE activities, stronger in-country PE capacity, and more emphasis on the "E" component of Monitoring and Evaluation. This is Phase 1 of the initiative, which aims to demystify PE and present it as a practical, need-driven program tool whose implementation can be completed in a short period of time (i.e., six months). Countries committed to going through a step-by-step PE process will be selected to participate. The initiative will cover activities ranging from evaluability assessment through monitoring the use of results for program improvement. This session will present examples from the field where the initiative is underway and will identify: (1) steps used to conduct PE in international settings, (2) facilitators and challenges, and (3) lessons learned. The ultimate goal is to increase the number of countries that routinely plan, implement, and use evaluation results to inform program planning and make decisions.
|
|
How to Design a Process Evaluation in an International Setting
|
| Helen Coehlo, ICF Macro, helen.m.coelho@macrointernational.com
|
| Yamir Salabarria-Pena, Centers for Disease Control and Prevention, ycs8@cdc.gov
|
| Roberto Leon, Centers for Disease Control and Prevention, rleon@gt.cdc.gov
|
| Cecilia Arango, MINSA-Panama, dra.arangocecilia@gmail.com
|
| Gladys Guerrero, MINSA-Panama, gguerrero@minsa.gob.pa
|
| Cristina Gómez, MINSA-Panama, crislin25@hotmail.com
|
| Maria Mastelari, MINSA-Panama, mariacmastelari@hotmail.com
|
| Yira Ibarra, MINSA-Panama, drayira@yahoo.com
|
|
As a result of a basic program evaluation (BPE) capacity building initiative in the Central America region, the Ministry of Health (MOH) of the Republic of Panamá requested technical assistance (TA) to determine why not all patients with tuberculosis (TB) are tested for HIV according to national TB guidelines. This presentation will address the process of designing a process evaluation in an international setting, ways to overcome obstacles, lessons learned, and recommendations. Country partners are committed to active engagement in all phases of this project, which responds to the country's needs. To date, meetings on the process evaluation design have been held with multiple stakeholders from the MOH and CDC, and an evaluation protocol has been designed that includes a multi-site process evaluation. As the project progresses, lessons learned from the actual evaluation study will be documented and presented. It is expected that this experience will increase the likelihood that other countries in the region will value evaluation as a practical and rigorous tool that can provide useful information for program planning and for improving program performance.
|
|
Session Title: Assessing Contextual Success Factors for Building Evaluation Capacity in the Child and Youth Mental Health Sector: The Ontario Experience
|
|
Demonstration Session 349 to be held in Sebastian Section L3 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Presenter(s): |
| Evangeline Danseco, Children's Hospital of Eastern Ontario, edanseco@cheo.on.ca
|
| Tanya Witteveen, Lesley University, twitteveen@cheo.on.ca
|
| Susan Kasprzak, Children's Hospital of Eastern Ontario, skasprzak@cheo.on.ca
|
| Abstract:
The Provincial Centre of Excellence for Child and Youth Mental Health provides funding to organizations based in Ontario, Canada to build capacity for evaluation and research. For 2008-09, 22 service organizations were funded to develop evaluation plans and increase internal capacity for evaluation; 13 organizations were funded to implement their evaluation. To facilitate the review of funding for the second round of evaluation grants, we have developed a readiness and needs assessment tool. The assessment tool utilizes Preskill and Boyle's (2008) multidisciplinary model of evaluation capacity building by exploring leadership, organizational culture, structures and communication. The tool is also informed by the knowledge translation literature and our experiences over the past 5 years of funding program evaluation grants. In this workshop, we present the tool and the accompanying interview guide. Participants will be able to apply the tool to identify strengths and areas for improvement for building capacity in evaluation.
|
|
Session Title: Theories of Evaluation TIG Business Meeting and Presentation: The Meaning and Purpose of "Theories of Evaluation"
|
|
Business Meeting Session 350 to be held in Sebastian Section L4 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Theories of Evaluation TIG
|
| TIG Leader(s): |
|
James Griffith, Claremont Graduate University, james.griffith@cgu.edu
|
|
Bianca Montrosse, University of North Carolina at Greensboro, bmontros@serve.org
|
| Discussant(s): |
|
Christina Christie, Claremont Graduate University, tina.christie@cgu.edu
|
|
Nick L Smith, Syracuse University, nlsmith@syr.edu
|
|
Howard R Mzumara, Indiana University Purdue University Indianapolis, hmzumara@iupui.edu
|
| Abstract:
Currently, there is no description of, or statement of purpose or mission for, the Theories of Evaluation TIG on file with AEA. In light of this, and in light of recent discussions about potential overlap with the Research on Evaluation TIG, the Theories of Evaluation TIG leadership is circulating a survey to determine how the purpose of the TIG is perceived. The survey also asks about member preferences for future directions of the TIG.
James Griffith and Bianca Montrosse will present the survey results. This presentation will be followed by commentary by three discussants, after which discussion will be opened up to all attendees.
|
|
Session Title: Informing School Boards and Education Policy Makers in Ways That Improve Educational Decision-making
|
|
Multipaper Session 351 to be held in Suwannee 11 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the AEA Conference Committee
|
| Chair(s): |
| Paul Rendulic,
PerforMetrics of Florida, rendulic@earthlink.net
|
|
A Template for Preparing a Meta-analysis of Evaluation Results: Informing School Board Members of the Impact of Grant Programs
|
| Presenter(s):
|
| Paul Rendulic, PerforMetrics of Florida, rendulic@earthlink.net
|
| Abstract:
Every year school districts compete for and receive federal and state funded grants. These grants usually require an external evaluator to conduct a third-party evaluation of the funded program. Customarily the evaluator submits formative and/or summative evaluation reports covering the status of program implementation and the extent to which objectives have been attained. Unfortunately, this is where the dissemination of information usually ends: a filing cabinet full of evaluation reports. What about the cumulative impact of these grant programs across school district employees and students? To what extent are stakeholders informed of the overall impact that federal and state funded grants have on the school district? This will not happen without a process to effectively summarize evaluation reports across multiple grants. This session presents a reporting template that, when completed, produces a user-friendly meta-analysis of annual evaluations.
|
|
Shift the Attention From Research Design to Research Use: What School Policy Makers Need to Know When Presented With Research?
|
| Presenter(s):
|
| Bo Yan, Blue Valley School District, byan@bluevalleyk12.org
|
| Mike Slagle, Blue Valley School District, mslagle@bluevalleyk12.org
|
| Abstract:
As schools turn to data-driven decision making to improve student achievement, research is playing an increasingly important role in local policy making. Efforts have been made to help policy makers better understand research, with a focus on research design and method; however, how to better use research has received little attention. This paper addresses this matter by discussing five issues concerning interpreting and applying research: 1) there is a difference between sample-based and population-based research; 2) statistical results are best guesses rather than decisive answers; 3) statistically significant results are not everything; 4) not all research results are applicable to a local setting; and 5) research results are just one source of evidence. It is hoped that school administrators will gain a new perspective when interpreting and applying research and feel empowered to ask more from researchers.
|
| |
|
Session Title: Empowerment Evaluation in Context: Tools for Determining Impact of State Capacity Building for Increasing Outcomes of Students With Disabilities
|
|
Demonstration Session 352 to be held in Suwannee 12 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Special Needs Populations TIG
|
| Presenter(s): |
| Paula Kohler, Western Michigan University, paula.kohler@wmich.edu
|
| June Gothberg, Western Michigan University, june.gothberg@wmich.edu
|
| Jennifer Hill, Western Michigan University, jennifer.l.hill@wmich.edu
|
| Abstract:
This session will present the National Secondary Transition Technical Assistance Center's (NSTTAC) Evaluation Toolkit. We begin with an overview of NSTTAC's capacity-building model and how data-based decision-making is used to improve outcomes of students with disabilities. We will follow with an illustration of the Evaluation Toolkit and our strategies to empower SEAs and LEAs to self-evaluate the activities, outputs, and outcomes of their capacity building plans. Examples from the toolkit include checklists, surveys, focus group and interview protocols, database techniques, and examples of capacity building tools in five areas: student-focused planning, student development, family involvement, interagency collaboration, and program structures. The session concludes with a discussion of the challenges to building a data-driven approach for improving outcomes for students with disabilities across a variety of state and local contexts. Participant questions and feedback will be welcomed at the conclusion of the demonstration.
|
|
Session Title: Exploring Institutional Ethnography and Women's Oral Histories for Context in Evaluation
|
|
Multipaper Session 353 to be held in Suwannee 13 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Feminist Issues in Evaluation TIG
|
| Chair(s): |
| Ginger Hintz,
Bill & Melinda Gates Foundation, ginger.hintz@gatesfoundation.org
|
|
Exploring the Value of Institutional Ethnography for Context in Evaluation
|
| Presenter(s):
|
| Jean Eells, E Resources Group, jceells@wmtel.net
|
| Abstract:
Institutional ethnography methodology orients investigation from a standpoint location and requires systematic and frequent attention to context as an investigator proceeds. An institutional ethnography approach was used in evaluative research of local, state, and federal governmental agencies and non-profit organizations in Midwest states. Document analysis, field observation, and interviews were used to examine a gap in participation in services by women. This approach follows pathways of service delivery or policy implementation within and between institutions - an evaluation of participation in an institutional process. Context drawn from lived experience is an essential evaluative criterion that guides the investigator using this approach. Institutional ethnography methodology in this case greatly minimized defensive reactions by the agencies and their personnel, and also fostered action on recommendations or evaluation use. Discussion includes implications for participatory evaluation and for evaluation practice.
|
|
Women's Oral Histories: Using Biographical Method in Constructing a New Historical Perspective
|
| Presenter(s):
|
| Hasmik Gevorgyan, Yerevan State University, vstarm@arminco.com
|
| Yeva Avakyan, World Vision US, yavakyan@worldvision.org
|
| Abstract:
This presentation is based on oral history research conducted with over one hundred Armenian women of different age groups. The resulting book by Dr. Hasmik Gevorgyan, The Art of Being: History of the 20th Century, is a historical retrospective of the events that shook the country of Armenia in the past century. It includes personal reflections on the lives of women, their families, and the social settings in which these events took place. By recording individual experiences of social change and merging social and personal problems, it creates a new meaning, a new perspective on history by Armenian women, whose take on history is rarely documented.
|
| |
|
Session Title: Using Dosage as a Measure of Evaluating Character Education Impacts on Student Achievement and Student Discipline Outcomes
|
|
Demonstration Session 354 to be held in Suwannee 14 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Presenter(s): |
| Raymond Hart, Georgia State University, rhart@gsu.edu
|
| Carmine Stewart Burkette, Cleveland State University, csb@rshartp.com
|
| Abstract:
The purpose of this study was to evaluate the practical use of dosage as a measure of program implementation in evaluating the extent to which professional development (PD) centered on character education translated into changes in classroom practice, and the extent to which it improved student discipline, student perceptions of school climate, and student learning. The researchers evaluated differences in student discipline and student learning based on levels of exposure to the character education curriculum. There were statistically significant reductions in the average number of days lost to disciplinary actions over the school year at each participating school except one (p = 0.51). The total minutes of PD (dosage) accounted for 4% of the variation in days lost to disciplinary action across the schools. At every grade level except 7th Grade Mathematics, students in the high character education exposure group outperformed students in the low exposure group on state standardized assessments.
|
|
Session Title: Evaluations of Programs Benefiting Young Children and Their Parents
|
|
Multipaper Session 355 to be held in Suwannee 15 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Human Services Evaluation TIG
|
| Chair(s): |
| Lynn Elinson,
Westat, lynnelinson@westat.com
|
| Discussant(s): |
| Lynn Usher,
University of North Carolina at Chapel Hill, lynnu@email.unc.edu
|
|
Georgia's Second Chance Homes: Evaluation Findings and Usefulness
|
| Presenter(s):
|
| Rebekah Hudgins, Independent Consultant, broughudgins@bellsouth.net
|
| Steve Erickson, EMSTAR Research Inc, ericksoneval@att.net
|
| Abstract:
Georgia's Second Chance Home (SCH) network provides a safe alternative living situation for teen mothers who want to parent their children but have few housing options. The overarching goal of SCH is to build strong families and break the cycle of persistent poverty and dependency associated with teenage childbearing. An extensive evaluation system has been in place since 2002, and findings have consistently shown that providing a safe and supportive living environment for teen mothers and their children can help mothers stay free of repeat teen pregnancies, stay in school, rebuild relationships with their families and the fathers of their children, and make better life choices. This presentation will review these findings related to program management and practice as well as outcomes for the young families. Finally, the presentation will highlight issues related to long-term follow-up and evaluation work with a fragile and highly mobile population.
|
|
Guiding the Fleet: Applying an Overarching Evaluation Framework to Unify a Set of Loosely Connected Studies
|
| Presenter(s):
|
| Yvonne Godber, University of Minnesota, ygodber@umn.edu
|
| Abstract:
One foundation, four stand-alone intervention projects, four separate evaluators, and multiple grantees joined forces to inform a State's emerging early childhood system. Participants will learn how several key decisions and tools paved a cohesive path for the otherwise unconnected studies. Examples of tools and lessons learned from the process will be shared to help evaluators consider how a common framework can be applied to multiple, diverse studies to a) improve stakeholders' understanding of the larger purpose, b) prompt greater involvement in the evaluations, and, ultimately, c) encourage greater use of the evaluation results. The evolution of a research consortium, and development of an overarching conceptual model, evaluation questions, shared assessment battery, and taxonomy will be described. These tools guided the work and relationships of the evaluators and funders to create a structure where both project-specific and foundation-wide findings can be used to better inform the emerging system.
|
| |
|
Session Title: Evaluating Interdisciplinary Team Science: Theory and Methods for an Emerging Field of Inquiry
|
|
Expert Lecture Session 356 to be held in Suwannee 16 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Research on Evaluation TIG
|
| Chair(s): |
| Jacob Tebes, Yale University, jacob.tebes@yale.edu
|
| Presenter(s): |
| Jacob Tebes, Yale University, jacob.tebes@yale.edu
|
| Abstract:
Science has become increasingly more interdisciplinary, with innovations now more commonly produced in teams of researchers drawn from multiple disciplines. This trend toward interdisciplinary team science has enabled researchers to address complex biomedical, psychosocial, and public health challenges more rapidly and effectively. To date, however, systematic and rigorous evaluation of the processes, outcomes, and impacts of team science has lagged behind these developments in scientific practice. Currently, there are no standard approaches to the evaluation of team science, and no generally accepted theories and methods to guide inquiry. This lecture provides an overview of this emerging field of inquiry, describes a conceptual framework for interdisciplinary team science, and shares measures and findings from a comprehensive, mixed methods NIH-funded evaluation of a team science consortium that involves more than 40 scientists working in ten interdisciplinary teams. Included are practical suggestions for conducting evaluations in this emerging field of inquiry.
|
|
Session Title: Developing and Integrating an Assessment Model in Student Affairs: Yes You Can!
|
|
Panel Session 357 to be held in Suwannee 17 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Erin Ebersole, Immaculata University, eebersole@immaculata.edu
|
| Abstract:
In March 2007, Learning Outcomes were adopted by the Student Affairs Department. In Fall 2007, an Assessment Committee was convened to begin looking not only at how to assess the learning outcomes, but also at how to assess Student Affairs programs and services division-wide. The committee has a representative from each of the Student Affairs divisions, as well as from Mission and Ministry and Institutional Research. By collaborating, the committee developed an assessment process that can be applied globally to Student Affairs, as well as utilized by each division, to examine how well the Learning Outcomes are being met as they relate to the students of the College of Undergraduate Studies. The presentation will describe the process and the challenges encountered to date.
|
|
The Wonderful World of Assessment
|
| Erin Ebersole, Immaculata University, eebersole@immaculata.edu
|
|
In order to strengthen the quality of education delivered, Immaculata University has adopted a rigorous standard of assessment that can only be accomplished through a model of continuous quality monitoring and improvement. This monitoring occurs across the broad spectrum of academic departments and non-academic offices. Immaculata University believes that critical to all decision-making is the use of empirical evidence from multiple sources, and that a culture of assessment should permeate how the institution documents its effectiveness.
The Office of Institutional Research, Planning, and Assessment has worked closely with the Division of Student Affairs to develop a standard of assessment that is applicable to all departments within the division.
This part of the panel discussion will cover the motivation behind our Student Affairs assessment processes (in relation to overall campus assessment) as well as tips to developing a successful assessment process (academic or non-academic).
|
|
|
Assessment: The Student Affairs Perspective
|
| Diane Massey, Immaculata University, dmassey@immaculata.edu
|
|
For many people assessment or evaluation is a bad and scary word. A satisfaction survey is typically the instrument utilized in Student Affairs to evaluate events and programs. Assessment is just not something we're used to doing. To some it seems like busy work and another thing added to the "To Do" list. Some equate assessing the effectiveness of their programs with a personal evaluation of their performance as an employee. At Immaculata University we were required to start looking critically at our departments, the programs, and services offered to ensure students' needs are being met and that we are affecting them in a positive way. This part of the panel discussion will provide a look at assessment from the perspective of a Student Affairs professional, and the fear factor that came along with the development and introduction of this new assessment process.
| |
|
Session Title: Participatory Evaluation and Effective Evaluation Project Management: Incorporating the Participatory Approach throughout the Evaluation Lifecycle
|
|
Demonstration Session 358 to be held in Suwannee 18 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s): |
| Kathy Brennan, Innovation Network Inc, kbrennan@innonet.org
|
| Myia Welsh, Innovation Network Inc, mwelsh@innonet.org
|
| Deloris Vaughn, Innovation Network Inc, dvaughn@innonet.org
|
| Abstract:
Evaluation suffers from many stigmas, including that it is something done by an external group to a program, initiative, or organization. In contrast, the participatory approach encourages people involved in programs to own the evaluation process and reflect on their work, making evaluation something that is done not to them but with them. In this demonstration, evaluators from Innovation Network, a firm that practices participatory evaluation, will define what is meant by participatory evaluation, contrast it with empowerment and collaborative evaluation, and give examples from the field that illustrate how participatory evaluation may be used throughout the evaluation lifecycle. The session will highlight success stories as well as places where the participatory process, if not carefully managed, can go astray. The session will also draw on participants' experiences in conducting participatory evaluations, including the strengths and pitfalls they have encountered.
|
| Roundtable:
Toward a Model of Sustainable Evaluation: A Focus on Developing Contexts |
|
Roundtable Presentation 359 to be held in Suwannee 19 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Research on Evaluation TIG
|
| Presenter(s):
|
| Mark Constas, Cornell University, mac223@cornell.edu
|
| Lesli Hoe, Cornell University, lmh46@cornell.edu
|
| Abstract:
Sustainability is typically related to environmental concerns and to the stresses wrought by increasing world populations. The term has also been applied to the degree to which programs themselves are sustainable. While increased interest in sustainability has been paralleled by interest in how to evaluate for sustainability, less attention has been given to the idea of what makes a set of evaluation practices sustainable. The present paper offers a model of sustainable evaluation by exploring two questions: 1) What are the key components that would sustain the continued use of evaluation practices across a range of conditions, over extended periods of time? and 2) How might the integration of such components be modeled to provide a framework of guiding principles for sustainable evaluation? The guidelines proposed in the paper demonstrate how economic imperatives, organizational structures, socio-cultural factors, and political conditions may be integrated with the design features of program evaluation.
|
| Roundtable:
Addressing the Challenges of Evaluating a Professional Development Intervention for Family, Friend, and Neighbor Child Care Providers |
|
Roundtable Presentation 360 to be held in Suwannee 20 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Presenter(s):
|
| Marijata Daniel-Echols, HighScope Educational Research Foundation, mdaniel-echols@highscope.org
|
| Katrina Herbert, HighScope Educational Research Foundation, kherbert@highscope.org
|
| Abstract:
Parents choose family, friend, and neighbor (FFN) child care for many reasons - for example, cost, religion, culture, and kinship. Providers of FFN care often see themselves as babysitters, not early childhood education professionals. Nevertheless, the public support through child care development funds and the private dollars through charitable foundations invested in these providers necessarily lead to a need to evaluate whether and how this type of care positively influences children's social and academic development. How does an evaluator define and measure quality in FFN settings? Is it possible to have a rigorous evaluation design given the highly selective nature of parents' choice in the type of FFN care they use? When multifaceted provider support programs are implemented, how are the many aspects of the intervention parsed and examined? This session will look at an ongoing project and will solicit insight on measurement and design from other evaluators.
|
| Roundtable:
Advantages and Obstacles of External Evaluators in Small Town America |
|
Roundtable Presentation 361 to be held in Suwannee 21 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Presenter(s):
|
| Nakia James, Momentum Consulting and Evaluation LLC, d_njames05@yahoo.com
|
| Michelle Bakerson, Indiana University South Bend, mmbakerson@yahoo.com
|
| Abstract:
External evaluators are often contracted by organizations receiving grants to develop and facilitate programs to benefit the community. The Urban League of Battle Creek is one such organization, established to assist and engage in activities that lead to the improvement of opportunities for disadvantaged persons and families. The evaluation was conducted to determine the extent to which the Urban League of Battle Creek has effectively and efficiently followed through with the objectives identified in its grant proposal to the W.K. Kellogg Foundation. The evaluation was designed to be a learning tool for facilitating the improvement of the Urban League of Battle Creek. Accordingly, a collaborative evaluation approach was utilized to actively engage the Urban League of Battle Creek's stakeholders throughout the whole process. A cross-sectional survey design was also selected for this evaluation. The steps, advantages, and obstacles of this evaluation will be shared.
|
|
Session Title: Complexities, Challenges and Lessons Learned From Two Unique Perspectives on Police Involvement in Evaluation
|
|
Multipaper Session 362 to be held in Wekiwa 3 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Crime and Justice TIG
|
| Chair(s): |
| Roger Przybylski, RKC Group, rogerkp@comcast.net
|
|
Assessing the Needs of Commercial Sex Workers: Developing Recruitment and Interview Methods to Ensure Subject Protection
|
| Presenter(s):
|
| Leslie Aldrich, Massachusetts General Hospital, laldrich@partners.org
|
| Abstract:
To assess the needs of commercial sex workers (CSWs) in Chelsea, Massachusetts, a city just outside Boston, the Massachusetts General Hospital Center for Community Health Improvement worked with the Chelsea Police Department and the hospital's institutional review board to develop appropriate research methods to protect the safety and confidentiality of this vulnerable population. To access CSWs, police involvement in the recruitment of subjects was critical. However, CSW interaction with law enforcement agents for research purposes poses inherent risks to subjects, however well-intentioned all parties may be. Ultimately, given the entrenched dangers in the lives of CSWs, recruitment strategies involving police were employed, and appropriate survey and interview tools, recruitment letters, consent forms, and databases were developed and approved to best protect the women and ensure their confidentiality. This approval process was lengthy but necessary, and helped shape the unique recruitment and interviewing methods chosen for this assessment.
|
|
Evaluating Curriculum Change in the Context of the Los Angeles Police Academy
|
| Presenter(s):
|
| Katharine Meese Putman, Fuller School of Psychology, kathyputman@gmail.com
|
| Luann Pannell, Los Angeles Police Department, luann.pannell@lapd.lacity.org
|
| Abstract:
The training philosophy and curriculum at the Los Angeles Police Academy were revised in 2008 in response to several reports indicating a need to train officers with better problem-solving, critical thinking, community policing, and communication skills. The Academy's curriculum became centered entirely on 15 scenario events that presented police training on law, tactics, policy, etc. within realistic scenarios. An evaluation of officers' performance in the field before and after the curriculum change was conducted with 12 graduating classes of 50 recruits each from before the curriculum change and an equal number of graduating classes from after it. Surveys of officers' perceptions of their own preparation for police work were compared with surveys and interviews with their supervisors evaluating their performance in the field four weeks after graduation. The politics, complexities, and practical lessons learned from designing and conducting an evaluation in this context will be discussed.
|
| |
|
Session Title: Pricing Evaluation: A Case of the Tail Wagging the Dog?
|
|
Demonstration Session 363 to be held in Wekiwa 4 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
|
| Presenter(s): |
| Zita Unger, Evaluation Solutions, zitau@evaluationsolutions.com
|
| Abstract:
Evaluation quality and business viability are critical considerations for any small consulting organization. The mixed-method evaluation activity described here centers on a web-based survey platform that is an integral part of the evaluation delivery. Technical development costs and consulting overhead expenses require an approach that is as efficient, streamlined, and sustainable as possible - if possible. The process of costing projects by breaking down task-and-time requirements certainly reaches the heart of evaluation by interrogating methodological priorities, evaluation outcomes, and even strategic business objectives.
Two case study examples are presented highlighting how evaluation and online survey activity for various projects were priced, together with lessons learned. One example is a scalable pricing model for evaluation of organizational development and 360-degree feedback. Another is an impact evaluation for a national, multisite, collaborative project undertaken by Australia's eight leading research-intensive universities.
|
|
Session Title: Community-based Evaluation Within a Canadian Cross-Cultural Context
|
|
Multipaper Session 364 to be held in Wekiwa 5 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Cheryl Poth, University of Alberta, cpoth@ualberta.ca
|
| Abstract:
Using the Preschool Developmental Screening (PDS) Project as an evaluation case study, we put forward a framework for conducting community-based evaluations (CBE) within a Canadian cross-cultural context. The PDS project represents collaboration among partners in the health, children's services, education, and non-profit sectors aimed at building parent and community capacity to support early childhood development. The project is complicated by a shift in local demographics in a large urban centre where over 20% of the population is represented by immigrant and refugee families. With limited resources available for this population, it is imperative to evaluate the impact on families when collaborative programming efforts are employed through health and education systems within a dynamic context. Our findings indicate that CBE is mutually beneficial for all partners and informs the development of cross-cultural insights to enhance current health and educational services and programs for all families.
|
|
Identifying Key Characteristics of a Community-Based Evaluation Framework
|
| Rebecca Georgis, University of Alberta, georgis@ualberta.ca
|
| Cheryl Poth, University of Alberta, cpoth@ualberta.ca
|
|
Community-based evaluation (CBE) is a collaborative evaluation framework that focuses on both process and outcome, with specific emphasis on the intended use of evaluation in community-based projects. Stemming from the principles of community-based research (e.g., Israel, Schulz, Parker, & Becker, 1998), CBE is guided by an emphasis on the active and equitable involvement of partners and community members in the evaluation, as well as on capacity building. Among the key characteristics that will be discussed is the focus on maintaining open access to the emerging evaluation findings. The knowledge generated from participation in the evaluation is shared among partners and the community and informs their continued development. Further, engaging all partners in an iterative process of reflection leads to a refinement of project goals. This paper extends the literature by identifying key characteristics of a CBE framework and reports its development using a case study.
|
|
Challenges and Opportunities in a Cross-Cultural Evaluation Context
|
| Winnie Chow, University of Alberta, wwchow@ualberta.ca
|
| Rebecca Gokiert, University of Alberta, rgokiert@ualberta.ca
|
|
Using the PDS project as a community-based evaluation case study, this paper explores the challenges and opportunities that emerge within an immigrant and refugee evaluation context. Given the diverse pre-migration and settlement obstacles many immigrant and refugee families face, participation in a program evaluation may be perceived as secondary to securing basic needs for a family. Participation is further constrained by language and by processes of engagement used in evaluation that tend to exclude immigrant and refugee involvement. Yet there is growing acknowledgement in the broader community of the need to understand immigrant and refugee families' perspectives in order to develop relevant and meaningful programs that support positive outcomes for all children and families. This paper maps onto the CBE framework and discusses the strategies used to address a variety of challenges (e.g., recruitment, data collection) and opportunities (e.g., engagement with new communities, consent procedures) that emerged within a cross-cultural evaluation context.
|
|
Session Title: Knowledge Translation for Technology Adoption
|
|
Multipaper Session 365 to be held in Wekiwa 6 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Robin Wagner, National Institutes of Health, wagnerr2@mail.nih.gov
|
|
The Impact of Farmers' Characteristics on Technology Adoption: A Meta Evaluation
|
| Presenter(s):
|
| Guy Blaise Nkamleu, African Development Bank, b.nkamleu@afdb.org
|
| Abstract:
An abundance of studies in recent years has been devoted to farmers' adoption of agricultural innovations. Most of these studies investigate the farmer characteristics affecting adoption decisions. However, results from different studies are often contradictory regarding the impact of any given variable on adoption decisions.
This paper examines these often conflicting results. A meta-analysis is conducted on 186 peer-reviewed adoption analyses from the recent literature on agricultural technology adoption. A meta-regression method is used to evaluate the variation in the outcomes of farmers' specific characteristics as significant determinants of their adoption decisions. The results generally show that differences across studies - in study design processes pertaining to methodological issues, spatiotemporal context, and technology characteristics - are important drivers of adoption study results. We therefore conclude that the conflicting research results may, in many cases, simply be the result of differing study-specific design and technology characteristics rather than empirical facts.
|
|
Achieving Knowledge Translation for Technology Transfer: Implications for Evaluation
|
| Presenter(s):
|
| Vathsala Stone, University at Buffalo - State University of New York, vstone@buffalo.edu
|
| Abstract:
Research programs implementing public policies through information or product innovations are held accountable for evidence of societal impact. In response, evaluation tools such as PART and the logic model have been used for program reviews and program planning. Program managers in areas such as healthcare are seeking to demonstrate research impact through knowledge translation. This paper presents an evaluation framework for the special case of knowledge translation for technology transfer (KT4TT), to guide research project managers to plan for successful innovations. While logic models help research programs to plan for needed results, individual projects funded under them must provide credible and relevant data by framing research questions based on explicit connections between planned impacts and user needs. We propose integrating the CIPP model into the logic model so relevance is proactively ensured. By ensuring quality through formative and summative evaluations, the CIPP model is a fitting complement to the logic model.
|
| |
|
Session Title: Culture is Context: The Role of Intersectional Theories in Framing HIV and Substance Abuse Interventions Targeting African American Women
|
|
Panel Session 366 to be held in Wekiwa 7 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Social Work TIG
|
| Chair(s): |
| Jenny Jones, Virginia Commonwealth University, jljones2@vcu.edu
|
| Abstract:
This multipaper presentation includes two studies that highlight the need for culturally relevant theories and methods in evaluating HIV and substance abuse interventions. Both papers focus on African American women at the margins who simultaneously struggle with high risk of HIV transmission and the use of mood-altering substances.
|
|
Shaping the Evaluand of HIV Education and Substance Abuse Treatment Using Black Feminist Theory
|
| Yarneccia Hamilton, Clark Atlanta University, yhamilton97@aol.com
|
|
Heterosexual African American women who smoke crack cocaine and trade sex for more drugs are the fastest growing population of HIV-infected individuals in the United States (CDC, 2007). This study uses the lens of Black Feminist Theory to explore the efficacy of substance abuse and HIV education for incarcerated African American women. The research suggests that incarcerated women are at increased risk of HIV due to risky sexual behaviors for the purposes of acquiring and using illegal substances (AmFAR, 2002; Sterk, 2002). These women are also at greater risk of relapse upon release from the program (Williams & Larkin, 2007). Historically, evaluations of these interventions have placed little if any emphasis on the role of context in framing the intervention and its outcomes, especially as they relate to the unique contextual issues faced by African American women. This project explores the role of Black Feminist Theory in informing the conceptualization, design, and evaluation of such programs.
|
|
|
The Function of the Black Superwoman Myth in Framing HIV and Substance Abuse Evaluation
|
| Sarita Davis, Georgia State University, saritadavis@gsu.edu
|
|
The Sojourner Project applies an interpretive framework to explore the degree to which gender, race, and class affect the HIV risk and use of mood-altering substances among 50 African American women living in both low- and high-burden areas of metropolitan Atlanta. Research suggests that a strictly biomedical framework for HIV and substance abuse program planning and intervention typically serves to homogenize difference or complexity by, for example, treating race, socioeconomic status, and gender as discrete, rather than mutually constitutive, concepts (Gentry, 2007; Mullings, 2005). The contribution of the Sojourner Project to evaluation is that it invites us to understand the relational nature of sexual decision making and substance abuse among dispossessed women in a way that conveys a message about the interaction of race, class, and gender, as well as the dialectic of oppression, resilience, and resistance (Crenshaw, 1995).
| |
|
Session Title: Student Resiliency and Post-Secondary Education Success: Lessons From the Field
|
|
Panel Session 367 to be held in Wekiwa 8 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the College Access Programs TIG
|
| Chair(s): |
| Ruanda Garth-McCullough, SUCCEED Consulting, asmamali@yahoo.com
|
| Abstract:
Post-secondary academic success for at-risk students depends on a variety of factors - academic achievement, student support, personal resiliency, and community/family characteristics. In this panel, we propose to explore the factors that influence at-risk students' success in college, based on the support and resources they received during high school, through a case study of two programs. The first presentation will focus on the results of the first year of a four-year evaluation of a college preparatory charter high school in Chicago. Located in one of Chicago's lowest-income neighborhoods, the high school has developed a four-year curriculum and system of support to help at-risk students gain admission to and succeed in college. The second presentation will describe an evaluation of an innovative program funded by the Illinois Education Fund to support first-generation, at-risk African American males in three Illinois area high schools. Focusing on high school seniors, the program mentors and provides financial aid to first-generation African American males. Early results indicate that both programs have been successful in securing college admissions for their students. However, college success and graduation may be more elusive.
|
|
Student Resiliency and College Success: An Evaluation Story of a Four-Year College Preparatory High School
|
| Ruanda Garth-McCullough, SUCCEED Consulting, rmccul1@luc.edu
|
| Asma Ali, University of Illinois at Chicago, asmamali@yahoo.com
|
| Raquel Farmer-Hinton, University of Wisconsin Milwaukee,
|
| Teresa Sosa, SUCCEED Consulting,
|
| Rolanda West, SUCCEED Consulting, rwest@luc.edu
|
|
North Lawndale College Preparatory High School (NLCP) was founded in 1996 to provide educational and social supports for students from academically high-risk communities. In the North Lawndale community, where the school is located and from which it draws a majority of its students, only 17% of students graduate from high school (Census, 2000). The systematic evaluation project will investigate how the college preparatory charter school's structures and support programs contribute to the academic persistence of its graduates. Specifically, the evaluation will focus on assessing factors contributing to student success and resiliency, including academic achievement and social support. The primary purpose of the evaluation is to document and analyze NLCP's practices, as well as the successes and challenges of its students once they matriculate to college. More specifically, this evaluation seeks to explore the students' resiliency and its influence on the students' aspirations and achievements (Freeman, 1997; Kozol, 1991; Ladson-Billings, 2006; McDonough, 2004).
|
|
|
Supporting College Admission and Success: The Black Men on Campus Program Evaluation
|
| Shaunti Knauth, National-Louis University, shaunti.knauth@nl.edu
|
|
In 2008, the Illinois Education Fund implemented a program designed to increase the rates at which young African American men applied to and entered college. Black Men on Campus (BMOC) was implemented in a pilot phase in three Chicago high schools. BMOC's approach was to proactively support African American male high school seniors in applying for college and financial aid, with the program components delivered by young African American men who had been successful in college.
The initial evaluation found BMOC to be highly successful. Students and counselors considered BMOC's approach valuable and distinctive for the level of support and the role models it provided, even in relation to intensifying CPS efforts to increase college attendance by its graduates. BMOC participation showed a positive effect on the number of completed college applications at two sites. The continuing evaluation is using an innovative approach that incorporates ongoing journal reflections by students.
| |
|
Session Title: Reflections on Indigenous Evaluation: Australia and Hawaii
|
|
Multipaper Session 368 to be held in Wekiwa 9 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the Indigenous Peoples in Evaluation TIG
|
| Chair(s): |
| Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
|
|
Issues in the Evaluation of Programmes for Indigenous Communities in Australia
|
| Presenter(s):
|
| Anne Markiewicz, Anne Markiewicz and Associates, anne@anneconsulting.com.au
|
| Abstract:
This paper will consider some of the issues involved in the evaluation of programs established for Indigenous communities in Australia. The presenter received the Australasian Evaluation Society's 2008 award for Excellence in Indigenous Evaluation. This award recognised the complexities involved in undertaking evaluation in this context, where the evaluator has to balance ethical approaches to evaluation and cultural sensitivity while providing an evaluation that is rigorous and credible to the program funder. The presentation will consider some of the challenges for the evaluator where the funder has unrealistic expectations of time frames for data collection, the kinds of data that can be collected, and the ways in which the evaluation can be designed and implemented. The presentation will focus on examples from evaluations completed in the fields of family violence prevention, education and employment, and crime and justice.
|
|
Empowerment, Collaborative, and Participatory Evaluation: Too Apologetic
|
| Presenter(s):
|
| Morris Lai, University of Hawaii at Manoa, lai@hawaii.edu
|
| Susan York, University of Hawaii at Manoa, yorks@hawaii.edu
|
| Abstract:
The debate on the various forms of evaluation that involve the evaluator in programming (empowerment, collaborative, participatory, etc.) seems to challenge the rigor and objectivity of such processes. Such debate does not arise in Native Hawaiian (and other indigenous) communities: how could the community/program have faith in the evaluation if they did not have a relationship with the evaluator? Respect, trust, honor, responsibility - all are tied to the strength of the relationship. We do not apologize for being active partners in the project:
Terms like empowerment, participatory, and collaborative evaluation do not encompass the evaluator-program relationship concept as it is used when considering culturally appropriate evaluations in Native Hawaiian and/or indigenous communities.
|
| |
|
Session Title: Process and Findings From an Analysis of the American Evaluation Association Database
|
|
Expert Lecture Session 369 to be held in Wekiwa 10 on Thursday, Nov 12, 3:35 PM to 4:20 PM
|
|
Sponsored by the AEA Conference Committee
|
| Presenter(s): |
| Ralph Renger, University of Arizona, renger@email.arizona.edu
|
| Hannah Carlson, University of Arizona, hcarlson@email.arizona.edu
|
| Valerie Van Brocklin, University of Arizona, vanbrock@email.arizona.edu
|
| Aimee Diehl, University of Arizona, adiehl@email.arizona.edu
|
| Htay Hla, University of Arizona, hhla@email.arizona.edu
|
| Susan Kistler, American Evaluation Association, info@eval.org
|
| Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov
|