|
Session Title: Understanding How and Why Child Welfare Systems Change: The Five Year Evaluation of Improving Child Welfare Outcomes Through Systems of Care
|
|
Multipaper Session 833 to be held in Panzacola Section F2 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Human Services Evaluation TIG
and the Systems in Evaluation TIG
|
| Chair(s): |
| Janet Griffith, ICF International, jgriffith@icfi.com
|
| Abstract:
In 2003, the Children's Bureau launched a demonstration project to examine the utility of systems of care principles within the context of a child welfare agency. Systems of care principles have previously been applied in the mental health arena as a means of engaging youth and families, together with service providers, in changing approaches to care. In the current project, the Children's Bureau has focused on the impact of the principles on agency culture, family and community engagement, and overall systems change. Nine grantees serving 18 communities participated in the five-year project.
Using a multi-method approach, the evaluation undertook five studies to examine the impact and outcomes of the demonstration project. The evaluation provides insights into the complexity of addressing systems change and implementing principles focused on infrastructure change, accountability, cultural competence, and family involvement. Three of the studies will be presented in this multipaper session.
|
|
Small Wins Big Steps: Analysis of Critical Events Contributing to the Implementation of Systems of Care in Child Welfare
|
| Yvette Lamb, ICF International, ylamb@icfi.com
|
| Gary DeCarolis, National Training and Technical Assistance Center, gary@centerforcommunityleadership.com
|
| Caitlin Murphy, ICF International, cmurphy@icfi.com
|
| Raymond Crowel, ICF International, rcrowel@icfi.com
|
|
This paper identifies how systems change evolved in each of the 18 participating child welfare agencies. An in-depth analysis shows the importance of local contextual conditions and events as they affect the direction, pace, and success of systems change in child welfare. For instance, while the systems of care approach has been implemented in the field of mental health for over two decades, the application of this principle-guided systems change framework is new to child welfare. Its implementation in this field encountered unique challenges, such as appropriately addressing the principle of family involvement and dealing with critical events like a child death or a change in agency leadership and priorities. The process of capturing and analyzing critical incidents will be presented. The paper will highlight both promising practices and barriers to successful systems change in child welfare, as well as the small but often meaningful wins and events needed to create and sustain change.
|
|
The Importance of Agency Context in Organizational Improvement Initiatives
|
| Dan Cantillon, ICF International, dcantillon@icfi.com
|
| Jing Sun, ICF International, jsun@icfi.com
|
| Raymond Crowel, ICF International, rcrowel@icfi.com
|
|
Findings regarding the influence of organizational context on the implementation of a principle-based systems change framework (i.e., systems of care) will be presented. Using survey data collected from direct-service child welfare workers, this evaluation employed structural equation modeling to examine how critical agency contextual variables, such as organizational culture and organizational climate, affected the level of agency support for and caseworker implementation of systems of care principles (interagency collaboration, family involvement, individualized and strengths-based care, community-based services, cultural competence, and accountability). Additionally, this paper will discuss how organizational readiness affected organizational and systems change activities and efforts.
|
|
Context, Capacity and Culture: An In-depth Examination of Systems of Care Principles Through Site-Based Case Studies
|
| Aracelis Gray, ICF International, agray@icfi.com
|
| Nicole Bossard, National Training and Technical Assistance Center, nbossard@icfi.com
|
| Yvette Lamb, ICF International, ylamb@icfi.com
|
|
Site-based case studies were conducted in two grantee sites participating in the Children's Bureau Systems of Care in Child Welfare demonstration project. These two grantees represent four demographically diverse communities. Analysis included an examination of contextual variables that affect implementation, as well as the internal and external organizational capacity to create the infrastructure needed to support the dynamics of systems change across child-serving agencies, including Child Welfare, Juvenile Justice, Education, Mental Health, and other private community providers. Findings point to the complexity of context in implementing systems change, for example, the effects of economic and leadership changes.
|
|
Session Title: Capacity Building Techniques in Prevention and Health Care Settings
|
|
Multipaper Session 834 to be held in Panzacola Section F3 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Janet Clinton, University of Auckland, j.clinton@auckland.ac.nz
|
|
What Do Programs Want? Evaluation TA Needs in California Tobacco Control Programs
|
| Presenter(s):
|
| Jeanette Treiber, University of California Davis, jtreiber@ucdavis.edu
|
| Abstract:
When designing program evaluation capacity building curricula, we often assume that all areas of evaluation are equally important to address. However, program staff may have targeted needs. This paper presents an analysis of data collected through the technical assistance log kept by UC Davis' Tobacco Control Evaluation Center between November 2004 and March 2009. A large portion of the 100 tobacco control programs (run by county health departments and competitive grantees) have contacted the evaluation center for help, many repeatedly. A quantitative analysis of TA log entries was performed using multiple variables, such as evaluation stage (evaluation planning, instrument development, data collection, analysis, reporting), type of document (method, sample surveys, observation forms, etc.), type of organization (government or independent agency), and position of the person requesting the information (project director, evaluator, health officer, etc.). The data shed light on the evaluation capacity building needs of local health promotion programs.
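As a rough illustration of the tabulation this kind of analysis implies, the following Python sketch counts and cross-tabulates TA log entries. The file and column names (stage, org_type, program_id) are hypothetical placeholders, not the Center's actual log format.

```python
# Minimal sketch of tabulating TA log entries by category; the file and
# column names are hypothetical placeholders.
import pandas as pd

log = pd.read_csv("ta_log.csv")  # hypothetical export of the TA log

# Frequency of requests at each evaluation stage
print(log["stage"].value_counts())

# Cross-tabulation: evaluation stage by type of organization
print(pd.crosstab(log["stage"], log["org_type"], margins=True))

# Repeat contacts per program, most frequent first
print(log.groupby("program_id").size().sort_values(ascending=False).head(10))
```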
|
|
Evaluating Research Capacity Building in Health Service Organizations
|
| Presenter(s):
|
| Erika Goldt, Michael Smith Foundation for Health Research, egoldt@msfhr.org
|
| Marla Steinberg, Michael Smith Foundation for Health Research, msteinberg@msfhr.org
|
| Abstract:
In British Columbia, Canada, six Health Authorities are responsible for delivering provincial health services: five for their respective geographic areas, and the sixth for province-wide programs and specialized services. In 2005, the Michael Smith Foundation for Health Research provided each Health Authority with a multi-year grant to develop a basic platform enabling them to increase capacity for engaging in and using research, leading to a system that is more strategic in addressing health service and policy research issues. Each Health Authority differs greatly in terms of funding, population, and pre-existing research and evaluation capacity. To evaluate this program, the evaluation framework must address the unique capacity contexts of each Health Authority while still enabling program-wide findings. This paper presents the findings of evaluating the Health Authority Capacity Building Program and addresses how it is possible to build context into evaluation at the local level while still addressing a program-wide scope.
|
|
Program Evaluation: Are You Willing, Are You Ready?
|
| Presenter(s):
|
| Janet Clinton, University of Auckland, j.clinton@auckland.ac.nz
|
| Sarah Appleton-Dyer, University of Auckland, sk.appleton@auckland.ac.nz
|
| Katheryn Cairns, University of Auckland, k.cairns@auckland.ac.nz
|
| Rebecca Broadbent, University of Auckland, r.broadbent@auckland.ac.nz
|
| Abstract:
Stakeholders' evaluation readiness, their capacity and willingness to engage in program evaluation, has often been shown to correlate positively with successful program outcomes and to contribute to program adaptation. While engagement in program evaluation activities is worthwhile, the specific mechanisms of this relationship are not clear. The aim of this paper is to explore this relationship, its measurement, and its validity. The current study uses data from a long-term community-wide health promotion program in which program adaptation and evaluation readiness were monitored at multiple program sites over three consecutive years. A measure of evaluation readiness and program adaptation was developed using survey, documentary, and interview data. Descriptive and inferential statistics were used to explore the relationships between the variables over time. The paper demonstrates a positive relationship between program adaptation and evaluation readiness over time and provides a useful technique for measuring these variables.
|
|
Training the Next Generation of Health Care Providers to Decrease Health Disparities
|
| Presenter(s):
|
| Andrea Fuhrel-Forbis, University of Michigan, andreafuhrel@yahoo.com
|
| Ann A O'Connell, The Ohio State University, aoconnell@ehe.osu.edu
|
| Petra Clark-Dufner, University of Connecticut Health Center, clarkdufner@uchc.edu
|
| K. Devra Dang, University of Connecticut, devra.dang@uconn.edu
|
| Philip Hritcko, University of Connecticut, philip.hritcko@uconn.edu
|
| E Carol Polifroni, University of Connecticut, carol.polifroni@uconn.edu
|
| Terry O'Donnell, Quinnipiac University, terry.odonnell@quinnipiac.edu
|
| Catherine Russell, Eastern Connecticut Area Health Education Center, russell@easternctahec.org
|
| Bruce Gould, University of Connecticut, gould@adp.uchc.edu
|
| Abstract:
Using interprofessional training and service learning experiences, students in medicine, dental medicine, nursing, pharmacy, and physician assistant programs were given the option of volunteering in community clinics. Students' experiences in these clinics may help prepare them for working with underserved populations and could increase their intentions to do so. Students completed three surveys over the course of an academic year, reporting their service learning experiences as well as their attitudes, knowledge, self-efficacy, and intentions toward working with underserved populations and in interprofessional teams. Propensity score adjustment was used to help control for self-selection bias in comparisons between students who chose to participate in service learning experiences and those who did not. Analyses include hierarchical linear modeling (HLM) to explore individual change over time and differences between professional groups. Results of HLM analyses with and without propensity score adjustment are presented.
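A minimal sketch of the analytic sequence described, propensity score estimation followed by a growth model, is given below. It assumes long-format data with hypothetical variable names; it is an illustration of the general technique, not the authors' actual code.

```python
# Hedged sketch: propensity-score adjustment followed by a mixed (HLM)
# model of change over time. Variable and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("student_surveys.csv")  # hypothetical long-format data

# 1. Estimate each student's propensity to self-select into service learning
covars = ["year_in_program", "prior_volunteering", "age"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["service_learning"])
df["pscore"] = ps_model.predict_proba(df[covars])[:, 1]

# 2. Growth models of intentions over time, students as grouping units,
#    fit with and without the propensity score as a covariate
unadjusted = smf.mixedlm("intentions ~ time * service_learning",
                         df, groups=df["student_id"]).fit()
adjusted = smf.mixedlm("intentions ~ time * service_learning + pscore",
                       df, groups=df["student_id"]).fit()
print(unadjusted.summary())
print(adjusted.summary())
```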
|
|
Session Title: The Internal Evaluator in the Context of the 21st Century
|
|
Panel Session 836 to be held in Panzacola Section G1 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the AEA Conference Committee
|
| Chair(s): |
| Stanley Capela, HeartShare Human Services, stan.capela@heartshare.org
|
| Discussant(s):
|
| Stanley Capela, HeartShare Human Services, stan.capela@heartshare.org
|
| Abstract:
The purpose of this session is to examine the role of internal evaluation in the context of the changing times of the 21st century. It will provide an opportunity to foster dialogue on the significant issues that affect the internal evaluator and, at the same time, provide a toolkit to assist those who work as internal evaluators.
|
|
Relationship Between the Internal and External Evaluator
|
| Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org
|
|
The panelist will present two case studies from her work as an internal evaluator at a major metropolitan museum. These examples will illustrate both the positives and negatives that can result from collaborations between internal and external evaluators. Discussion will focus on how both internal and external evaluators provide insight and value, especially when teamed. While an internal evaluator can work positively and proactively with external evaluators to bring insider knowledge, context, and history to an evaluation, external evaluators can provide necessary objectivity and clarity. When the two are paired effectively, in an open and collaborative way, the consistency, validity, and reliability of evaluations can be bolstered.
|
The Art of Being an Internal Evaluator and Dealing With Ethical Dilemmas
|
| Tony Wu, Independent Consultant, tonywu@drwuonline.com
|
|
The panelist has a unique role within a nonprofit community mental health center: he is both the supervising psychologist and the chief evaluator. Due to the complex nature of the services the agency provides, it has a mixed funding structure that includes government contracts, private philanthropies, and foundation grants. With a variety of reporting and performance standards, the panelist must be very knowledgeable about all program details. Additionally, given the complex outcome measures, having an internal evaluator on staff makes intuitive sense. However, there are times when it is difficult to follow best-practice guidelines because the internal evaluator is constrained by upper management who are not receptive to evidence-based practices. In these situations the panelist, like many other evaluators, faces ethical dilemmas. This presentation will focus on the art of negotiating these ethical dilemmas as an internal evaluator.
|
Designing an Internal Evaluation Unit for a Small Organization With Limited Resources
|
| Michelle Baron, The Evaluation Baron LLC, michelle@evaluationbaron.com
|
|
Using the principles of Evaluation Capacity Building (ECB), the discussant will outline ways small organizations at the early, midterm, and seasoned levels of capacity can develop and maintain internal evaluation systems. The discussant will emphasize that successful internal evaluation is both a mindset and an organizational movement, and that evaluation can thrive in organizations regardless of size or resource limitations.
|
Session Title: Evaluation Self-Efficacy and Utilization: Moving From Social Work Education to Social Work Practice
|
|
Multipaper Session 837 to be held in Panzacola Section G2 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Social Work TIG
|
| Chair(s): |
| Jenny Jones, Virginia Commonwealth University, jljones2@vcu.edu
|
|
Measurement of Master's of Social Work (MSW) Students' Research Self-Efficacy, Attitude, and Knowledge Across the Foundation Year
|
| Presenter(s):
|
| Helen Holmquist-Johnson, Colorado State University, hjohnson@cahs.colostate.edu
|
| Abstract:
This study examined foundation-year Master of Social Work student outcomes with regard to the research curriculum. The researcher sought to understand students' attitudes toward research, research knowledge acquisition, and research self-efficacy. Students enrolled at five universities were recruited to complete a survey. A pre-post design allowed students' responses to be matched before and after the foundation year. Findings suggest that students' attitudes toward research are favorable, and knowledge of research increased over the foundation year. Students who completed one semester of research coursework were compared with those completing two semesters; the two-semester group showed a statistically significant knowledge gain over the one-semester group. Research self-efficacy increased 24 points over the foundation year, a statistically significant change, and responses suggest a wide range of student preparedness. Recommendations for both the social work practice and education communities are made based on the findings of this study.
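The matched pre-post and between-group comparisons described above follow a standard pattern; a minimal sketch with hypothetical column names, not the study's actual analysis, might look like this:

```python
# Sketch of a matched pre-post comparison and a two-group comparison;
# file and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("msw_survey.csv")  # one row per student, pre and post scores

# Paired comparison of research self-efficacy across the foundation year
t, p = stats.ttest_rel(df["efficacy_post"], df["efficacy_pre"])
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Knowledge gain: one vs. two semesters of research coursework
one = df.loc[df["semesters"] == 1, "knowledge_gain"]
two = df.loc[df["semesters"] == 2, "knowledge_gain"]
t, p = stats.ttest_ind(two, one, equal_var=False)  # Welch's t-test
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```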
|
|
Determining The Impact of Social Work Practice Culture on Evaluation Practice
|
| Presenter(s):
|
| Derrick Gervin, Clark Atlanta University, dgervin@yahoo.com
|
| Abstract:
As the 'sine qua non' of social work practice, evaluation has widespread implications for practitioners. This paper examines how social work practice culture affects evaluation practice. The study is based on the premise that social work practitioners have a unique approach to evaluation practice, guided by practice wisdom, education and training, and a professional code of ethics. A mixed methods design is used to identify and prioritize specific activities that promote evaluation knowledge and skill in social work practice. This study is unique in that it uses a participatory process to increase knowledge related to evaluation practice. The research benefits social work practitioners, educators, and administrators by emphasizing the importance of evaluation and how social work practitioners can inform decision-making related to evaluation. Results suggest social work education has a critical role in promoting evaluation practice, establishing evaluation practice competencies, and using evaluation results to inform policy and practice.
|
|
Presenting a Framework for Evaluating Interventions for Co-occurring Disorders
|
| Presenter(s):
|
| Aisha Williams, APS Healthcare Inc, aishadw2006@yahoo.com
|
| Cindy Ward, APS Healthcare Inc, cward@apshealthcare.com
|
| Beth Spinning, APS Healthcare Inc, espinning@apshealthcare.com
|
| Abstract:
Co-occurring disorders are not a new phenomenon within the mental health arena; however, consumers who deal with comorbid issues usually receive services from two distinctly different providers. Although the field is conceptually moving in the direction of integration and collaboration of care, this usually refers to the combination of an addiction and a mental health disorder. Yet co-occurring medical and mental disorders can be just as debilitating. This paper explores the concept of fully integrated healthcare as it relates to mental and physical health within a Medicaid population and presents a framework used by a healthcare company in the state of Georgia to accomplish the goal of integrated, holistic care. The paper contributes to the field of evaluation by presenting a framework for the integration of healthcare with specific clinical outcomes and evaluative measures.
|
|
Session Title: Understanding How Gender Affects the Context of National and International Program Evaluation
|
|
Think Tank Session 838 to be held in Panzacola Section H1 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Presidential Strand, the Feminist Issues in Evaluation TIG, and the International and Cross-cultural Evaluation TIG
|
| Presenter(s):
|
| Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
|
| Discussant(s):
|
| Patty Hill, EnCompass LLC, phill@encompassworld.com
|
| Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
|
| Michael Bamberger, Independent Consultant, michaelbamberger@gmail.com
|
| Abstract:
Almost all development programs influence the status of women, and the cultural, economic and political relationships between men and women and among household members. Similarly, the success of most programs is affected by how well they understand and take into account the gendered relationships that determine how families, community groups, economic and political organizations respond to programs. Yet one of the weakest areas of program evaluation practice is the lack of adequate evaluation methodologies for understanding how gender affects the context within which programs are designed and operate. The purpose of the think tank is to share experiences among evaluators working internationally and in North America about how gender affects the context of the programs they are evaluating, the methods they have used for assessing gender and some of the challenges in convincing clients that gender issues matter.
|
|
Session Title: Promoting Context-Appropriate Valuation of Public Programs and Policies
|
|
Panel Session 839 to be held in Panzacola Section H2 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| George Julnes, University of Baltimore, gjulnes@ubalt.edu
|
| Discussant(s):
|
| Leslie J Cooksy, University of Delaware, ljcooksy@udel.edu
|
| Ernest House, University of Colorado at Boulder, ernie.house@colorado.edu
|
| Michael Scriven, Claremont Graduate University, mjscriv@gmail.com
|
| Abstract:
The recent controversy over a federal agency giving priority to random assignment experiments for establishing the causal impacts of interventions (see Julnes & Rog, 2007; Donaldson, Christie, & Mark, 2008) has raised questions about the role that the evaluation community should play in guiding federal policies on evaluation methods. A core element in the eventual position advocated by AEA was that method-choice should be driven by the needs of specific contexts. A similar controversy may be developing around the methods favored by federal agencies for reaching judgments of the value of programs and policies. This session will not seek to resolve controversies over preferred approaches to assigning values in public decision-making, but it will develop the dialogue on context-appropriate ways to understand stakeholder values and to represent those values in aggregate judgments based on the multiple criteria used in federal policymaking.
|
|
Distinguishing Approaches to Public-sector Valuation: Developing Contextual Frameworks That Promote Productive Dialogue
|
| George Julnes, University of Baltimore, gjulnes@ubalt.edu
|
|
There are three major families of conclusions relevant to most evaluations: representation of needs, activities, and outcomes; causal attribution regarding program mechanisms and impacts; and valuation as it relates to claims about the value of one or more alternatives. Although there has been considerable controversy over preferred approaches to supporting causal attribution, one could argue that valuation is the area with the least consensus over the preferred approaches to use in different contexts. This presentation will address the many approaches that are used to establish the relative or absolute value of public programs and policies. Organizing these approaches in terms of some of their major features can help us identify ways that methods of valuing can be understood in terms of their usefulness in specific contexts. This, in turn, can help promote a more productive dialogue on matching methodological approaches to valuation with their most effective contexts.
|
|
|
Program Value, Program Ethics, and Program Evaluators: How Do We Do the "Right" Thing?
|
| Michael Morris, University of New Haven, mmorris@newhaven.edu
|
|
Most people would agree that ethical considerations should be taken into account when rendering judgments concerning the value of a program or policy. What role, if any, should evaluators play in the formulation and presentation of those considerations? My comments will focus on the opportunities and challenges associated with evaluators providing input into stakeholder discussions of program/policy ethics.
|
Different Criteria Federal Stakeholders Use for Valuing Federal Programs
|
| Stephanie L Shipman, United States Government Accountability Office, shipmans@gao.gov
|
|
How we "value" a program reflects our preferences and priorities, i.e., the importance we place on competing criteria. Recent federal performance-based management initiatives spotlighted program assessment-and made the choice of assessment criteria more transparent. Federal agencies, Congress, and the Office of Management and Budget often disagreed on "comprehensive" assessments of programs over multiple dimensions, because they differed on which dimensions or criteria they considered most important. Evaluators tend to focus on the extent to which a program achieves its objectives; others may focus on its overlap with other programs. The criteria that federal stakeholders use to value a program's merit or worth reflect where they stand vis-a-vis the program, the decisions they face, as well as their "politics," i.e., their policy values and priorities. I will discuss the different criteria these parties used to "value" programs from my perspective at Congress' primary oversight agency.
|
Session Title: Local Evaluator Involvement and Impact in Michigan 21st Century Community Learning Centers After-school Programs
|
|
Multipaper Session 840 to be held in Panzacola Section H3 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Evaluation Use TIG
|
| Chair(s): |
| Laurie A Van Egeren, Michigan State University, vanegere@msu.edu
|
| Abstract:
Evaluation in government-funded programs may target performance monitoring without addressing continuous improvement planning in local programs. Most programs benefit considerably from the involvement of an evaluator who can facilitate program improvement, but relationships and agreements between programs and evaluators vary substantially. In the Michigan evaluation of the federally funded 21st Century Community Learning Centers after-school initiative, grantees are required to hire local evaluators in addition to participating in the state evaluation. This session examines the uses and benefits of local evaluators in 35 organizations operating 229 after-school sites. The session will address questions such as: What types of administrators and programs are most likely to use evaluators? To what extent does local evaluator involvement result in better data reporting and interpretation? How does program quality differ in sites with high vs. low evaluation use? Finally, a local evaluator will discuss the integration of state and local data in program improvement.
|
|
Who Uses Local Evaluators? Links to Site and Staff Characteristics
|
| Heng-Chieh Wu, Michigan State University, wuhengch@msu.edu
|
| Laurie A Van Egeren, Michigan State University, vanegere@msu.edu
|
| Nai-Kuan Yan, Michigan State University, yangnaik@msu.edu
|
| Chun-Lung Lee, Michigan State University, leechun1@msu.edu
|
| Celeste Sturdevant Reed, Michigan State University, csreed@msu.edu
|
|
In after-school programs, "evaluator use" can take many forms, including assisting in program improvement efforts, collecting data to assess outcomes, and developing reports for funders and stakeholders. After-school programs often do not have evaluators, and when they do, they show extensive variability in their evaluator use. In the Michigan 21st Century Community Learning Centers programs, contracts with local evaluators are a condition of funding. However, not all programs use evaluators equally. This paper examines site and staff predictors of several forms of evaluator use, including program improvement, providing school outcomes data, developing reports, grant writing, data collection to meet state requirements, and local data collection. Results from 229 sites suggest that the type of operating organization (schools rather than community-based organizations), years of operation, and perceptions of the importance of using data for program improvement were linked to greater evaluator use.
|
|
Improving Reporting Quality Through Local Evaluator Involvement
|
| Celeste Sturdevant Reed, Michigan State University, csreed@msu.edu
|
| Megan Platte, Michigan State University, plattmeg@msu.edu
|
| Beth Prince, Michigan State University, princeem@msu.edu
|
| Laura Bates, Michigan State University, bateslau@msu.edu
|
| Laurie A Van Egeren, Michigan State University, vanegere@msu.edu
|
|
Over the last several years, the MSU State-Wide Evaluation Team has consistently invested in training and technical assistance for grantees, their staff, and local evaluators in order to improve the understanding and use of data for program improvement. Data were drawn from the 2007-2008 Annual Report Forms, Web-based documents that all grantees and sites must complete as part of their contractual requirements to receive funds. We hypothesized that local evaluators' involvement would improve the availability of data (i.e., all relevant data were collected) and users' clarity (i.e., their understanding of the data presented in the charts). Independent variables included the role the local evaluator played in the reporting process, how often the local evaluator met with various stakeholders, and grantee cohort. The results were mixed; we will suggest other factors that may have influenced them.
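A regression along the lines implied by these independent variables could be sketched as follows; the variable names are hypothetical and the model is illustrative, not the team's actual specification:

```python
# Sketch: regressing a report-quality outcome on local evaluator
# involvement. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

arf = pd.read_csv("annual_report_forms.csv")

# Outcome: data availability score; predictors: evaluator role in the
# reporting process, meeting frequency with stakeholders, grantee cohort
model = smf.ols("data_availability ~ evaluator_role + meeting_freq + C(cohort)",
                data=arf).fit()
print(model.summary())
```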
|
|
Sites With High vs. Low Evaluation Use: Differences in Program Quality
|
| Laurie A Van Egeren, Michigan State University, vanegere@msu.edu
|
| Heng-Chieh Wu, Michigan State University, wuhengch@msu.edu
|
| Nai-Kuan Yan, Michigan State University, yangnaik@msu.edu
|
| Chun-Lung Lee, Michigan State University, leechunl@msu.edu
|
| Celeste Sturdevant Reed, Michigan State University, csreed@msu.edu
|
|
In 2007, the Michigan 21st Century Community Learning Centers program created clear guidelines for the use of local evaluators, emphasizing that their primary role was to assist in improving program quality. However, these guidelines have been implemented inconsistently. This paper examines differences in program quality between sites with high levels of evaluation use and those with low levels. A constellation of variables was used to characterize sites as "high" or "low," including evaluator involvement in program quality assessments and annual reporting to the state, regularity of evaluator meetings with program staff, supervisor ratings of evaluator value across several areas of use, and completeness of data submitted to the state. The top and bottom 10% of programs were compared on program quality, including student, parent, and staff perceptions of the program and student outcomes. Results suggest that greater evaluation use helps put processes into place that promote program quality.
|
|
The Local Evaluator Speaks: Using State and Local Data for Continuous Improvement
|
| Wendy Tackett, iEval, wendolyn@mac.com
|
|
It is generally assumed that a local evaluator has the potential to greatly impact changes leading to program improvement. iEval, serving as the local evaluator for six school districts in Michigan that are implementing 21st Century Community Learning Centers programs, has seen evaluation findings drive programmatic improvements, including increases in student academic achievement, improvements in student behaviors, and greater buy-in from the school and community. Dr. Wendy Tackett will discuss the types of local data collected, how the data are analyzed, general evaluation findings using state and local data, and how changes resulting from evaluation findings affected subsequent program years.
|
|
Session Title: Where You Stand Depends on Where You Sit: How Evaluators' Roles Influence Evaluation Capacity Building
|
|
Panel Session 841 to be held in Panzacola Section H4 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Johanna Gladfelter Morariu, Innovation Network Inc, jmorariu@innonet.org
|
| Abstract:
Evaluation capacity building is a well-researched topic; a recent Google search turned up more than 2 million page hits. The field's prevailing familiarity with evaluation capacity building ("ECB") has led us to overlook the effect evaluator role has on ECB approaches and goals. This panel will explore how three different evaluator roles (an evaluator within a nonprofit that both operates and funds programs, an evaluator within a multiservice nonprofit organization, and an evaluation consultant) affect the panelists' ECB approaches and goals. Each panelist will present their organization's evaluation philosophy, reflect on their experiences with building evaluation capacity, and summarize what they have found to be effective. The panelists will highlight those aspects of their organizational context that have either supported or hindered ECB efforts. After the presentations, the panelists look forward to a lively exchange with the audience.
|
|
Different Approaches for Building Internal Evaluation Capacity
|
| Lester Baxter, Pew Charitable Trusts, lbaxter@pewtrusts.org
|
|
Les Baxter is director of the Planning and Evaluation (P&E) program at The Pew Charitable Trusts, a national nonprofit with primary offices in Philadelphia and Washington, D.C. P&E views evaluation as a fundamental part of a larger organizational strategy designed to advance Pew's mission. Les will discuss the objective of evaluation at his organization and the primary roles that evaluation plays. He will identify the many effective approaches that P&E has used to strengthen evaluation capacity at Pew, from developing formal policies to focusing on the importance of informal human interactions. Les will also summarize the challenges inherent in building evaluation capacity in an organizational environment with multiple clients, diverse programs, and different levels of support for the evaluation enterprise.
|
|
|
Improving Nonprofit Service Delivery Through In-house Evaluation
|
| Isaac Castillo, Latin American Youth Center, isaac@layc-dc.org
|
|
The Latin American Youth Center (LAYC), a multi-service nonprofit organization located in Washington, DC, has received national recognition for its ability to document the effectiveness of their services and modify programming based on findings. LAYC maintains an in-house team of three full-time employees devoted to data collection and evaluation for the organization.
LAYC's Director of Learning and Evaluation, Isaac Castillo, will share his expertise on the culture change that took place at LAYC, which led to data-based decision making using evaluation results. He will also share thoughts on the importance of staff-led evaluation efforts intended to improve services for clients.
|
An Evaluation Consultant's Approach, Experiences, and Lessons With Evaluation Capacity Building
|
| Johanna Gladfelter Morariu, Innovation Network Inc, jmorariu@innonet.org
|
|
Johanna Gladfelter Morariu is an evaluator with Innovation Network (www.innonet.org), a nonprofit evaluation consulting firm that works to increase the evaluation capacity of the nonprofit sector. Evaluation capacity building ("ECB") is at the heart of Innovation Network's mission, free tools and resources, and consulting projects. Johanna works with many organizations to increase evaluation capacity: increased evaluation capacity is the end goal in some projects, and in other projects increased evaluation capacity is but one component. She will discuss how she has moved beyond being seen as an "outsider" with consulting clients to being a trusted, valued team member. In this session Johanna will draw on a variety of projects to illustrate approaches, experiences, and lessons from the perspective of an evaluation consultant.
|
Session Title: Evaluating Education in Mexico, New Zealand, South Africa and the United States
|
|
Multipaper Session 842 to be held in Sebastian Section I1 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Government Evaluation TIG
and the Pre-K-12 Educational Evaluation TIG
|
| Chair(s): |
| Maria Whitsett, Moak, Casey and Associates, mwhitsett@moakcasey.com
|
|
Implementation of an Evaluation Model to Assess the Synergistic Impacts of State Initiatives to Improve Teaching and Learning
|
| Presenter(s):
|
| Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
|
| Bill Thornton, University of Nevada, Reno, thorbill@unr.edu
|
| George Hill, University of Nevada, Reno, gchill@unr.edu
|
| Abstract:
To improve teaching and learning, states have undertaken two interrelated activities: (1) implementation of performance-based accountability systems and (2) allocation of funding for targeted school improvement. Annually, state departments of education provide analyses of performance data, and special funding activities are routinely evaluated. Yet these activities are seldom assessed to understand their synergistic impact on teaching and learning. Weinbaum (2005) developed an evaluation framework to investigate how state policies and programs have affected the key components of schools that influence student achievement. This model has been adapted and implemented in Nevada using a multiple case study methodology. Five cases, each consisting of a district office, a high school, up to two feeder middle schools, and up to three feeder elementary schools, have been undertaken. The model uses both qualitative and quantitative data. This presentation will describe the challenges and opportunities of implementation in urban, bedroom community, and rural settings.
|
|
Transform the Context: How a Negative Evaluation Context Has Been Transformed in South Africa
|
| Presenter(s):
|
| Jennifer Bisgard, Khulisa Management Services, jbisgard@khulisa.com
|
| Gregg Ravenscroft, Khulisa Management Services, gravenscroft@khulisa.com
|
| Abstract:
Post-apartheid, democratic South Africa has blended new and old bureaucrats, many of whom are resistant to evaluation and to implementing recommendations. Since 2003, we have been evaluating data quality in the public education sector for the South African Government. The process of conducting the evaluation cycles, together with the sponsorship of provincial and national government officials, has positively changed the political and operational context. As resistance decreased, the evaluation scope expanded. Senior officials have now emerged as champions of the data quality evaluation process and are using the results to improve performance, acknowledge perverse incentives, and inform policy making. Elements that have contributed to this outcome include the evaluation champion's consistent commitment over the last six years, building evaluator-government trust, using easily understandable and brief evaluation report formats, and implementing well-documented, rigorous methodologies for collecting and analyzing data. Improving data quality and trustworthiness are key precursors to data-driven decision making.
|
|
Evaluation in Mexican Educational System: From Political to Technical Evaluation?
|
| Presenter(s):
|
| Teresa Bracho, Facultad Latinoamericana de Ciencias Sociales, teresa.bracho@flacso.edu.mx
|
| Jimena Hernandez, Facultad Latinoamericana de Ciencias Sociales, jimena.hernandezf@gmail.com
|
| Abstract:
In Mexico, evaluation of educational policy is a relatively new practice. Twenty years ago, government programs were designed without any systematic analysis of the problems they were intended to solve, and without any explicit statement of their expected results. The purpose of this paper is to analyze how the governmental context of policy program evaluation affects the way educational programs are designed and evaluated. Although the Mexican government has implemented changes in budgeting mechanisms, resulting in greater emphasis on evaluation as an input for policy design and implementation, the new evaluation process still faces many challenges. We emphasize the problems involved in using the recent census of student academic performance (ENLACE) as the main input for program evaluation, including its design and technical difficulties, and we point out the risks of its use mainly for political purposes.
|
|
Session Title: Advances and Applications of Structural Equation Modeling for Evaluation
|
|
Multipaper Session 843 to be held in Sebastian Section I2 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Karen Larwin, University of Akron, drklarwin@yahoo.com
|
|
Latent Variable Scores From SEM Analysis
|
| Presenter(s):
|
| Karen Larwin, University of Akron, drklarwin@yahoo.com
|
| Abstract:
Latent variable scores are traditionally calculated by averaging the scores of the items associated with an individual construct. These values are then used in further analyses, such as multiple regression. Unfortunately, this traditional approach is problematic in that the resulting composite scores do not take into consideration the strength of the relationship between each item and the associated construct. Joreskog (2000) proposed a new approach in which latent variable scores generated from the SEM model provide more accurate values with which to examine the effects of exogenous variables, such as participants' gender, age, or major area, on factors. The present application of latent variable scores provides a new level of flexibility to accurately address research questions that could not previously be answered with the SEM model.
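To make the contrast concrete, the sketch below compares an unweighted item average with loading-weighted factor scores. It uses the factor_analyzer package as a stand-in for a full SEM fit, and the file and item names are hypothetical; it illustrates the general idea rather than Joreskog's exact procedure.

```python
# Contrast between simple item-average composites and model-based scores
# that weight items by their estimated loadings. Names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("survey_items.csv")  # hypothetical item-level data
construct_items = ["q1", "q2", "q3", "q4"]

# Traditional composite: unweighted mean of the construct's items
composite = items[construct_items].mean(axis=1)

# Model-based scores: items weighted by their estimated loadings
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items[construct_items])
print(fa.loadings_)  # strength of each item-construct relationship
factor_scores = fa.transform(items[construct_items])
```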
|
|
Internal Mental Distress Among Adolescents Entering Substance Abuse Treatment: Examining Measurement Equivalence Across Racial/Ethnic and Gender Groups
|
| Presenter(s):
|
| Mesfin S Mulatu, Centers for Disease Control and Prevention, MMulatu@cdc.gov
|
| Dionne Godette, University of Georgia, dgodette@uga.edu
|
| Kimberly Leonard, The MayaTech Corporation, kjleonard@mayatech.com
|
| Suzanne M Randolph, The MayaTech Corporation, srandolph@mayatech.com
|
| Abstract:
Objective: It is uncertain whether racial/ethnic or gender disparities in co-occurring mental health outcomes reflect true differences in prevalence or simply measurement bias. This study examines cross-racial/ethnic and cross-gender measurement equivalence of the Internal Mental Distress Scale (IMDS) of the Global Appraisal of Individual Needs (GAIN), a widely used assessment tool in treatment settings.
Sample and Methods: We randomly selected equal numbers of male and female White (n=600), Black (n=600), and Hispanic (n=600) adolescents from a large pool of admissions to federally-funded substance abuse treatment programs throughout the U.S. Using the Mplus software, we tested increasingly strict models of measurement equivalence of the IMDS's five-factor structure (43 dichotomous items) with a series of multi-group confirmatory factor analyses (CFA).
Results: Multi-group CFA showed that the most restrictive model proposing equal factor loadings, thresholds, and residuals fit the data very well in both racial/ethnic (CFI=.988; TLI=.992; RMSEA=.037) and gender (CFI=.987; TLI=.992; RMSEA=.033) group comparisons. Inspection of item-level estimates indicated potential for further improvement of cross-group equivalence.
Conclusions: GAIN's IMDS appears to measure internal mental distress fairly equally across race/ethnic and gender groups. Group differences on IMDS factor scores are less likely due to measurement bias; thus, its use among diverse populations is supported.
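The nested-model comparisons at the heart of this approach reduce to chi-square difference tests between increasingly restrictive models. A minimal sketch of that logic follows; the fit values shown are placeholders for illustration, not the study's results.

```python
# Likelihood-ratio (chi-square difference) test for nested
# measurement-invariance models; input values are placeholders.
from scipy import stats

def chi_sq_diff(chi2_restricted, df_restricted, chi2_configural, df_configural):
    """Test whether added equality constraints significantly worsen fit."""
    delta_chi2 = chi2_restricted - chi2_configural
    delta_df = df_restricted - df_configural
    p = stats.chi2.sf(delta_chi2, delta_df)
    return delta_chi2, delta_df, p

# Hypothetical fit statistics for a configural vs. a scalar model
print(chi_sq_diff(chi2_restricted=512.4, df_restricted=470,
                  chi2_configural=498.1, df_configural=430))
```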
|
|
Evaluation of Translated Instruments: Cross Cultural Factorial Invariance of Multiple Group Confirmatory Factor Analysis and Multiple Indicators, Multiple Causes (MIMIC) Models
|
| Presenter(s):
|
| Fatma Ayyad, Western Michigan University, fattmah@hotmail.com
|
| Brooks Applegate, Western Michigan University, brooks.applegate@wmich.edu
|
| Abstract:
If factorial invariance is established across translated forms of research instruments, then it is clear that the meaning of the construct crosses cultures. However, if invariance is not established, it is not clear whether the construct fails to replicate in the translated instrument or the actual translation is faulty.
This study disentangles this dilemma by determining whether cultural/language variance can be decomposed from a more general form of construct variance. Specifically, in translated instruments, is there both construct-pure variance and variance related to language and culture? If so, can these two sources of variance be estimated separately?
This paper presents a model of selected instruments translated from English into Arabic. Forward and blind back-translation strategies were conducted by bilingual English-Arabic speaking professionals to achieve conceptual equivalence between the original and translated instruments. Empirical evaluation of the scales was conducted using data collected from four samples of two different language populations.
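One common way to estimate group-related variance alongside the construct is a MIMIC model. The sketch below shows a minimal specification in semopy's lavaan-style syntax; the item and grouping-variable names are hypothetical, and this illustrates the general technique rather than the authors' model.

```python
# Minimal MIMIC sketch: a language-group indicator as a cause of the
# latent factor, plus one direct item effect to probe item-level bias.
import pandas as pd
from semopy import Model

desc = """
# measurement model for the translated construct (hypothetical items)
F =~ item1 + item2 + item3 + item4
# MIMIC part: language/culture group as a cause of the latent factor
F ~ arabic_form
# a direct effect on one item would signal item-level (DIF-like) bias
item2 ~ arabic_form
"""

data = pd.read_csv("translated_scale.csv")  # hypothetical pooled samples
model = Model(desc)
model.fit(data)
print(model.inspect())  # parameter estimates and standard errors
```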
|
|
Recapturing Time in Evaluation of Causal Relations: Illustration of Latent Longitudinal and Nonrecursive SEM Models for Simultaneous Data
|
| Presenter(s):
|
| Emil Coman, Institute for Community Research, comanus@netscape.net
|
| Abstract:
We compare two ways of estimating cross-time relations between causes and effects from simultaneously sampled data: nonrecursive SEM models and Gollob and Reichardt's latent longitudinal SEM (LLM) model. Both attempt to distinguish the direction of causation from cross-sectional measurements. We evaluated them against cross-lagged SEM models of panel data from the three-wave Add Health survey. The cross-lagged analyses of stress and alcohol use in men showed a consistent pattern of stress[1] to alcohol[2] and then to stress[3]. We tested nonrecursive SEM loop models for both the W2 and W3 stress-alcohol relations. We tested the LLM for W2 and then W3 (pretending no prior-time data were available), with prior latent (unobserved) measures of stress and alcohol. The LLMs indicated that the latent alcohol[1] variable causes the measured stress[2], while the latent stress[1]->alcohol[2] path was not significant, and no paths led to observed stress[3] or alcohol[3]. The LLM seems to fare better here.
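For readers unfamiliar with the cross-lagged specification used as the benchmark here, a minimal sketch in semopy's lavaan-style syntax follows; the variable names are hypothetical stand-ins for the Add Health measures, not the authors' model code.

```python
# Sketch of a three-wave cross-lagged panel model for stress and alcohol
# use, with observed variables only. Names are hypothetical placeholders.
import pandas as pd
from semopy import Model

desc = """
stress2  ~ stress1 + alcohol1
alcohol2 ~ alcohol1 + stress1
stress3  ~ stress2 + alcohol2
alcohol3 ~ alcohol2 + stress2
"""

panel = pd.read_csv("panel_waves.csv")  # one row per respondent
model = Model(desc)
model.fit(panel)
print(model.inspect())  # cross-lagged paths, e.g., alcohol2 on stress1
```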
|
|
Session Title: Evaluating Impact and Outcomes of Transition Programs in the Context of Federally-funded Projects
|
|
Multipaper Session 844 to be held in Sebastian Section I3 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Special Needs Populations TIG
|
| Chair(s): |
| June Gothberg, Western Michigan University, june.gothberg@wmich.edu
|
|
Evaluation and Technical Assistance for Youth Transition Demonstration Projects: Managing Role Conflict
|
| Presenter(s):
|
| Anne Ciemnecki, Mathematica Policy Research Inc, aciemnecki@mathematica-mpr.com
|
| Bonnie O'Day, Mathematica Policy Research Inc, boday@mathematica-mpr.com
|
| Abstract:
Mathematica Policy Research is conducting a nine-year random assignment evaluation of Youth Transition Demonstration (YTD) projects for the Social Security Administration. The Mathematica team, which includes staff of two other entities, designed the intervention, selected the sites, is randomly assigning youth to treatment and control groups, is providing technical assistance to ensure that the sites implement the intervention correctly, and is conducting process and impact evaluations. This paper focuses on managing these roles and avoiding role conflict, particularly in providing technical assistance while conducting the evaluation. The topic is timely, as more government agencies are requesting that evaluation teams perform these multiple roles.
|
|
Contextual Issues in Evaluating a Transition to Teaching for Special Education Program
|
| Presenter(s):
|
| Imelda Castañeda-Emenaker, University of Cincinnati, castania@ucmail.uc.edu
|
| Janet Matulis, University of Cincinnati, janet.matulis@uc.edu
|
| Norma Wheat, Campbellsville University, nrwheat@campbellsville.edu
|
| Abstract:
Contextual issues have been important considerations in planning and implementing the evaluation of a federally-funded Transition to Teaching (TTT) for Special Education program at one private university in a midwestern state. This state has a lower percentage of highly qualified teachers in special education than the national average. The state department of education is a formal and active partner of this program, which is embedded within layers of internal and external contexts. Knowledge, understanding, and incorporation of contextual issues affecting all stakeholders are driving the implementation of a responsive evaluation for this program. Heightened sensitivity and care are taken to ensure understanding of how the non-traditional student participants see themselves within their training contexts, the special needs and diversity issues around them, their experiences and activities within the program, and their visions of teaching within a high-need Local Education Agency (LEA) environment.
|
|
Improving Outcomes of Students With Disabilities: Bridges and Barriers Identified From an Evaluation of Non-parametric Evidence Rating Scales and Qualitative Data Analysis of State Capacity Building Plans
|
| Presenter(s):
|
| June Gothberg, Western Michigan University, june.gothberg@wmich.edu
|
| Rashell Bowerman, Western Michigan University, rashell.l.bowerman@wmich.edu
|
| Abstract:
This session will present findings from an evaluation study of local education agency (LEA) capacity building team plans (n=120). Teams reviewed specific practices and planned their strategies for implementing transition-focused education for the following year. In its capacity building plan, each team addressed its perceived local strengths regarding transition practices and areas of needed improvement, and rated itself on implementation practices and data-based evidence for federal Indicator 13. We will discuss the methodology used for non-parametric analysis of two four-point Likert-type rating scales, as well as our systematic approach to qualitative document analysis. The session will conclude with the importance of using qualitative data alongside Likert-type rating scales to produce more reliable and valid evaluation results.
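Non-parametric tests suited to paired four-point ratings include the Wilcoxon signed-rank test and Spearman rank correlation; a minimal sketch with hypothetical file and column names, illustrative rather than the authors' actual analysis:

```python
# Sketch of non-parametric comparisons for paired four-point ratings;
# file and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

plans = pd.read_csv("capacity_plans.csv")  # n = 120 team plans

# Compare implementation ratings with evidence ratings (paired ordinal data)
w, p = stats.wilcoxon(plans["implementation_rating"], plans["evidence_rating"])
print(f"Wilcoxon W = {w:.1f}, p = {p:.4f}")

# Rank correlation between the two scales
rho, p = stats.spearmanr(plans["implementation_rating"], plans["evidence_rating"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```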
|
| | |
|
Session Title: Practice-based Assessment of Evaluation Practice, Method, and Theory
|
|
Panel Session 845 to be held in Sebastian Section I4 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Research on Evaluation TIG
|
| Chair(s): |
| Nick L Smith, Syracuse University, nlsmith@syr.edu
|
| Discussant(s):
|
| Beverly Parsons, InSites, bparsons@insites.org
|
| Melanie Hwalek, SPEC Associates, mhwalek@specassociates.org
|
| Abstract:
Although recent writers have generally advocated external, formal studies in researching evaluation, much can also be learned from practice-based assessments conducted by evaluators themselves. Developing effective evaluation practice requires iterative tests of well designed variations that are sensitive to differences in context, evaluand, personal attributes, and changing local conditions. Understanding what influences the development and effectiveness of evaluation requires such situation rich information. The papers in this panel provide such information: a summary of lessons learned from 25 years of research developing stakeholder participation approaches, reflections on how practitioner feedback from training and use of logic models has improved the utility of this method, and an examination of how critical self-assessment by practitioners can provide formative research information for improving evaluation theory. Evaluation practitioners have training and opportunity to conduct developmental research on evaluation, providing the profession with critical insights not attainable from more external investigations.
|
|
Conducting Research on Program Evaluation in Conjunction With Evaluation Studies
|
| Paul Brandon, University of Hawaii at Manoa, brandon@hawaii.edu
|
|
Calls for conducting research on evaluation by and large have not recognized the utility of existing small bodies of published research on program evaluation. In this paper, I describe a body of research on stakeholder participation in evaluation that my colleagues and I have conducted in conjunction with evaluation projects over a 25-year period. Despite having mostly been published in refereed journals, the research has received less attention than might be expected. I give a brief overview of the research methods and results, discuss their utility for the refinement of evaluation practice and methods, and identify the limitations of the research. I speculate why there has been little attention given to the research and suggest how to enhance the effects of similar bodies of research on evaluation practice and on the discussion about empirical research on program evaluation.
|
|
|
Developing Evaluation Theory From Practice: The Case of Program Theory
|
| Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
|
|
Program theory (also known as logic models and theory-based evaluation) has now been widely used for over 20 years in some organizations and regions. This presentation reports on a current project to distil and impart practice wisdom from this experience, drawing on several sources: the researchers' own experiences using program theory (based on their observations, feelings, and conversations with other participants before, during, and after projects); the experience of others working on these projects (gathered in conversations, anonymous interviews, and written feedback); others' experiences using program theory (reported in formal papers, conference presentations, and informal discussions); and research students' dissertations (which are more formally systematic in data sources and critical analysis of interpretations). One of the enduring challenges is how we, as professionals and academics, can feel safe enough to share uncertainties and errors, rather than triumphant narratives that show us, and our preferred methods, in our best light.
|
Reflective Practice as Formative Research on Evaluation Theory
|
| Nick L Smith, Syracuse University, nlsmith@syr.edu
|
|
Although recent calls for improved research on evaluation theory have emphasized formal external studies, the profession has historically relied heavily on the reflective practice of theorists and practitioners in evaluating theory. This paper reviews:
(a) the benefits of relying on reflective studies for evaluating evaluation theory, especially their preservation of the complexity of evaluation practice, and their provision of direct, first-hand knowledge of a theory's value, (b) the problems that arise from reflective studies in terms of bias, lack of controls, and limited generality of findings, and (c) improvements that are needed in the way reflective studies are conducted if they are to provide convincing evidence of a theory's quality and contribution, for example, full disclosure and independent confirmation.
Conducted properly, reflective studies of evaluation practice constitute an important form of formative research on evaluation theory, providing a type of primary, contextually rich evidence not available from formal external studies.
|
Session Title: Evaluation in the Context of Peace Building Programs
|
|
Panel Session 846 to be held in Sebastian Section L1 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| K Lynn McCoy, Pact Inc, lmccoy@pactworld.org
|
| Discussant(s):
|
| Ana Coghlan, BRAC University, ana.coghlan@gmail.com
|
| Abstract:
Evaluation professionals are increasingly aware of the need to shape methods and their overall approach to the context in which they work. A team of monitoring and evaluation professionals working in low-intensity conflict and post-conflict settings in Sudan, Ethiopia, and Kenya have recently spent the better part of a year reviewing their peace building program monitoring and evaluation methodologies and tools. Context evaluation was found to be critical and led to the development of an improved monitoring and evaluation approach. Panelists will describe their findings, review seven critical questions for evaluating peace building programs, and present simple methods for analyzing the underlying programmatic theories of change, illustrating how the dimensions of context influence method choice and evaluation practice. The panel will also discuss how to more closely link initial conflict assessments to ongoing program monitoring and evaluation activities.
|
|
Examples of Peace Building Evaluation Practice: Opportunities and Challenges in the Field Setting
|
| Aisha Ali, Pact Sudan, aaisha@pactsudan.org
|
|
Ms. Aisha Ali is a Monitoring and Evaluation Officer in Sudan. She will present the context in which peace building monitoring and evaluation must take place and the dynamic factors that influence method choice. Ms. Ali will present the challenges facing evaluation practice in conflict settings and illustrate how these challenges led to an effort to rethink the team's approach to building monitoring and evaluation practice.
|
|
|
Building Monitoring and Evaluation Systems for Peace Building Programs
|
| Hannah Kamau, Pact Inc, hannah.kamau@pactke.org
|
|
Ms. Hannah Kamau is the Africa Regional Monitoring, Evaluation and Learning Advisor for Pact. She oversees evaluation practice across 14 countries, has worked extensively with multiple peace building programs in Sudan, and has supported democracy and advocacy programming in DR Congo, Nigeria, Tanzania, Kenya, and Ethiopia. Ms. Kamau will present the organization's new program module for building monitoring and evaluation systems in peace building, explaining its inherent relevance to context evaluation.
| |
|
Contributing to Peace Building Evaluation Practice and Theory
|
| Dina Esposito, Pact Inc, dina.esposito@pactke.org
|
|
Ms. Dina Esposito is a Regional Senior Peace Building Advisor in Africa. She currently resides in Nairobi, Kenya, and has supported the design and management of peace building programs within the region. She will briefly review current theory in peace building evaluation practice and discuss how context evaluation promotes quality programming.
| |
|
Results From Best Practice Research on Peace Building Programming
|
| Tim Hayden-Smith, Pact Inc, tim@pactsa.org.za
|
|
Mr. Tim Hayden-Smith is a Peace Building Specialist who has spent the past year researching innovations and best practices in peace building programming in Africa. He will present key lessons drawn from his research about the importance of context evaluation in peace building projects in multi-site scenarios.
| |
|
Session Title: The Evolution of Evaluation in International Development Organizations
|
|
Multipaper Session 847 to be held in Sebastian Section L2 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Doug Horton, Independent Consultant, d.horton@mac.com
|
| Discussant(s): |
| Zenda Ofir, Evalnet South Africa, zenda@evalnet.co.za
|
| Abstract:
There is a vast professional literature on theory and methods for evaluating international development programs. In contrast, there is little systematic knowledge of how evaluation is actually practiced in international organizations and the factors that drive changes in evaluation practices over time. Evaluations are carried out in strikingly different ways in different international organizations, and views of what constitutes "good evaluation" vary among organizations and change over time. This multi-paper session examines evaluation practices in international organizations and the main factors that drive changes in evaluation practice over time. Presenters will describe and analyze the history of evaluation in four international development organizations: a multilateral development bank (the World Bank), an international non-governmental organization (CARE), a Canadian Crown corporation that supports research for development (IDRC), and a global consortium for agricultural research (the CGIAR). A discussant will highlight patterns and trends across the organizational settings.
|
|
History of Evaluation in the World Bank
|
| Patrick Grasso, Independent Consultant, pgrasso45@comcast.net
|
|
The World Bank's Independent Evaluation Group (IEG) is one of the leading evaluation organizations in the development community. What began in 1970 as a small office reporting to Bank President Robert S. McNamara became the Operations Evaluation Department in 1973 and was made independent of Bank management beginning with the appointment of a Director-General in 1975. Modeled on the US General Accounting Office, IEG reports directly to the World Bank Group's Board of Executive Directors and conducts evaluations of projects, country programs, sector and thematic policies, global programs, and Bank processes. The focus of IEG's evaluations and the methods used have evolved over time to take account of changes in the development environment, Bank Group policies and practices, IEG leadership, and trends in evaluation practice. This presentation traces these changes over more than 35 years and analyzes the forces behind them.
|
|
Approaches to Evaluation in CARE and Other International Non-governmental Organizations
|
| Jim Rugh, Independent Consultant, jimrugh@mindspring.com
|
|
Under Jim's leadership, CARE International developed an evaluation policy, principles and standards, as well as M&E guidelines, instruments to measure M&E capacity, training programs, etc. It also conducted a series of meta-evaluations that assessed the quality of evaluations, as well as a synthesis of results reported by project evaluations conducted over the previous two years. In this presentation Jim will summarize the main elements of these policies and practices, and the trends revealed by four of those biennial CARE meta-evaluations as well as a series of strategic impact inquiries. He will also briefly mention trends in evaluation by other relief and development agencies that are members of InterAction, a consortium of 170 US-based international NGOs.
|
|
History of Evaluation in the International Development Research Centre
|
| Fred Carden, International Development Research Centre, fcarden@idrc.ca
|
|
IDRC first established an evaluation unit in 1992. This paper explores the motivations and values that guided the establishment of the unit. IDRC was a leader in establishing the learning function in evaluation systems. The paper explores the changes in practice over the past seventeen years in the development of an evaluation system that embraces both the learning and accountability functions of evaluation. The relationship of the unit to the Centre's overall mission and culture was an important driving force, not only in the establishment of the unit but also in its development over time. The evaluation function has been affected over time by the changing development funding context. It has also been affected by, and played a role in, the debates on how to evaluate development research through its active engagement in the development of tools and methods for research evaluation.
|
|
What Drives Evaluation Practice: The Consultative Group on International Agricultural Research (CGIAR) Experience
|
| Doug Horton, Independent Consultant, d.horton@mac.com
|
| Ronald Mackay, Independent Consultant, mackay.ronald@gmail.com
|
|
This paper seeks to contribute to our understanding of the forces driving evaluation practices over time by examining the history of evaluation in the Consultative Group on International Agricultural Research (CGIAR). It explores the early motives for, and disciplinary roots of, evaluation in this field, and it charts the evolution of evaluation practice up to the present time. Recent developments are highlighted, including the introduction of the logical framework, strengthened emphasis on economic impact assessment, and development of a performance measurement system. These developments are related to the evolving socio-economic and political context in which international agricultural research takes place, the nature of the programs being conducted and evaluated, the key actors involved, and the institutions and organizational culture that guide behavior in the CGIAR.
|
|
Session Title: Evaluation Capacity Building Strategies to Promote Organizational Success
|
|
Multipaper Session 848 to be held in Sebastian Section L3 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| Duane House, Centers for Disease Control and Prevention, lhouse1@cdc.gov
|
| Discussant(s): |
| Abraham Wandersman, University of South Carolina, wanderah@gwm.sc.edu
|
| David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
|
| Abstract:
The increase in public accountability for agencies has increased the need for organizations to be able to monitor and evaluate their own activities. Evaluation capacity building (ECB) has been recognized as a process for enabling organizations and agencies to develop the mechanisms and structure to facilitate evaluation to meet organizational goals and accountability requirements. ECB has been conceptually defined as "a context-dependent, intentional action system of guided processes and practices for bringing about and sustaining a state of affairs in which quality program evaluation and its appropriate uses are ordinary and ongoing practices within and/or between one or more organizations/programs/sites" (Stockdill, Baizerman, & Compton, 2002, p. 8). This panel will demonstrate three strategies for building evaluation capacity in different contexts. Although each strategy draws from a different framework, each shares common ground in empowerment evaluation principles.
|
|
Building General and Innovation-Specific Capacities for Evidence-based Prevention Programs in Schools
|
| Paul Flaspohler, Miami University, flaspopd@muohio.edu
|
| Kate Keller, Health Foundation of Greater Cincinnati, kkeller@healthfoundation.org
|
| Dawna-Cricket-Martita Meehan, Miami University, meehandc@muohio.edu
|
|
Schools are in a unique position to significantly impact the health and well-being of youth through evidence-based prevention programs and services. Given the well-documented problems in introducing new ideas to schools and sustaining innovative practices, it is critical that attention be given to understanding barriers and facilitators of the adoption and implementation of evidence-based practices (Flaspohler et al., 2006). Recently, increased attention has been focused on understanding and assessing readiness and capacity to adopt and implement research-based innovations (i.e., EBPs). Research on implementation and readiness for change suggests that inattention to the forces and factors that impact adoption seriously jeopardizes any project seeking to introduce a new idea into an organization. Assessing readiness and capacity therefore becomes a crucial planning and surveillance activity. This presentation provides an overview of systematic efforts to assess and build readiness and evaluation capacity for evidence-based prevention programs and services in schools.
|
|
Connecting the Dots: Building Evaluation Capacity in Schools
|
| Melissa Maras, University of Missouri, marasme@missouri.edu
|
|
Schools have become the context of choice for delivering a variety of interventions, programs, and services. These activities are diverse, ranging from school-wide behavior support systems to classroom academic curriculum to individual mental health interventions. Schools demonstrate varying levels of competency in planning, implementing and evaluating these activities within buildings and across districts, but there is a general disconnect between various activities and evaluation processes, as well as rich data resources that could support these efforts. Using a case study example, this presentation will focus on the unique challenges and opportunities of building evaluation capacity within the school context. It will highlight the benefits of building evaluation capacity by layering efforts around existing activities and resources in schools to help schools connect the dots between their various initiatives and build general evaluation capacity. Results supporting the positive impact of the unique approach used in this case example will be presented.
|
|
Training of Technical Assistance Providers on Evaluation (TOTAP-E) Capacity Building
|
| Catherine A Lesesne, Centers for Disease Control and Prevention, clesesne@cdc.gov
|
| Christine Zahniser, GEARS Inc, czahniser@cdc.gov
|
| Jennifer L Duffy, University of South Carolina, jenduffy@sc.edu
|
| Abraham Wandersman, University of South Carolina, wandersman@sc.edu
|
| Mary Martha Wilson, Healthy Teen Network, marymartha@healthyteennetwork.org
|
| Gina Desiderio, Healthy Teen Network, gina@healthyteennetwork.org
|
|
There is often a need to train para-professionals to provide basic training and technical assistance (T/TA) on evaluation. This presentation will describe a two-day training curriculum developed for and used with T/TA providers and evaluators to further develop the core and technical skills needed for evaluation capacity building (ECB) with community partners. The training was informed by published and unpublished evaluation capacity approaches and learning theory, but required substantial new development. For example, new tools were created for conducting individual and team self-assessments of ECB core and technical skills; assessing the evaluation capacity of a local partner in order to create a T/TA plan; writing evaluation summary narratives that motivate partners because they are self-derived; and running hands-on learning labs on selected topics. The pre-post evaluation of the TOTAP-E demonstrated significant gains in confidence providing ECB (Mpre=3.52, SE=0.61; Mpost=4.09, SE=0.49; t(36)=-7.83, p<.01). The training and training evaluation will be fully described.
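For readers who want to see the mechanics behind a pre-post comparison like the one reported above, the following is a minimal sketch of a paired-samples t-test in Python; the data, sample size, and variable names are hypothetical placeholders, not the TOTAP-E results.

    # Minimal sketch of a pre-post (paired) t-test on confidence ratings.
    # All data below are hypothetical placeholders.
    from scipy import stats

    pre = [3.2, 3.6, 3.4, 3.9, 3.1, 3.7, 3.5, 3.8]   # ratings before training
    post = [3.9, 4.2, 4.0, 4.4, 3.8, 4.3, 4.1, 4.5]  # ratings after training

    t_stat, p_value = stats.ttest_rel(pre, post)   # paired-samples t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # negative t indicates post > pre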
|
|
Session Title: Youth Participatory Evaluation: The Perspectives of Youth Researchers and Evaluators
|
|
Panel Session 850 to be held in Suwannee 11 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Chair(s): |
| David White, Oregon State University, david.white@oregonstate.edu
|
| Discussant(s):
|
| Kim Sabo Flores, Kim Sabo Consulting, kimsabo@aol.com
|
| Abstract:
Youth-led research is receiving increased interest as an engaging pathway to adolescent development. Youth participation in an evaluative process is believed to positively impact cognitive abilities, organizational capacity, intergenerational relationships, socially just and democratic processes, diversity, mattering, and self-reflection. However, youth involvement in research and evaluation remains a relatively underdeveloped and underexplored field of practice and subject of study. This panel, led by 4-H youth researchers, will provide youth development practitioners with an example of youth-led research focused on a community action project. Youth presenters will define their roles as researchers and evaluators, provide their perspectives on the successes and challenges related to a research and evaluation process, and make recommendations for best practice. This youth-led panel presentation and discussion is intended for evaluators and practitioners interested in developing and exploring youth-led research as a way of lifting youth voices within their programs and communities.
|
|
Getting a Youth-led Research Team Organized
|
| Netti Knowles, Deschutes County 4-H, family@knowlescreek.com
|
|
As a 4-H member, I was asked to participate in a youth-led research study. Through projects and activities, my 4-H group developed competence, confidence, connection, character, caring and contribution in a fun and meaningful way. I used my writing and desktop publishing skills to help create an informational flyer inviting people to participate in an event hosted by our youth-led research group. Our group presented and distributed the flyers to local high schools in the Bend/La Pine district. I enjoyed the marketing aspects of our project. During the event planning, I helped organize a mock forum and participated in the decision-making process. My presentation will focus on the organization of a participatory research team.
| |
|
Organizing, Analyzing, and Interpreting Data
|
| Anna Shoffner, Deschutes County 4-H, lynshof@ykwc.net
|
|
I was originally commissioned by the Proposer to be a part of a research project that would study the development of several characteristics (namely competence, confidence, connection, character, caring, and contribution) in youth over a period of time as we were involved in youth-led research and evaluation. As a member of this Youth-Led Research Team, I organized and arranged facilities for our research project and helped recruit youth to be a part of our community forum. I will be responsible for discussing institutional review boards and the organization, analysis, and interpretation of data.
| |
Sharing Research Results With Others
| Madison Mills, Deschutes County 4-H, madi.mills@yahoo.com
|
|
As part of a Youth-Led Research project I participated in surveys, took part in a two-day intensive training, and finally, put together and held a forum with my colleagues. In the forum planning process, a main role of mine was to write the grant proposal to fund the forum. Writing the proposal involved a great deal of communication with my peers, layers of paperwork, and articulate writing. I will be presenting information on the process of writing the grant proposal and sharing research results with others.
| |
|
Session Title: Four Multi-Site Randomized Studies in K-12 Settings
|
|
Panel Session 852 to be held in Suwannee 13 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Art Burke, Northwest Regional Educational Laboratory, burkea@nwrel.org
|
| Abstract:
Randomized studies receive high priority under existing federal guidelines for conducting scientifically based research on educational programs. This panel discusses the design and implementation of four ongoing multi-site randomized studies supported by the US Department of Education. The programs evaluated include elementary school writing, middle school mathematics, high school language arts, and English / English language arts for middle school English language learners. Principal investigators for each study will address the following as they relate to their study:
- Research design
- Recruiting participants / Attrition
- Assessing fidelity of implementation across sites
- Statistical analysis
- Use of qualitative methods as adjuncts to the statistical analysis
|
|
An Investigation of the Impact of a Traits-Based Writing Model on Student Achievement
|
| Michael Coe, Northwest Regional Educational Laboratory, coem@nwrel.org
|
|
The goal of this study is to provide high quality evidence on the effectiveness of the analytical trait-based model for teaching and assessing student writing. The study will answer two experimental questions to determine the impact of the intervention on student achievement in writing:
1. What is the impact of 6+1 Trait™ Writing on student achievement in writing?
2. How do student impacts vary by pre-existing characteristics of schools, teachers and students?
The study is a cluster-randomized design with random assignment at the school level. Study participants will be 5th grade students and teachers in Oregon. The study will be conducted in approximately 64 elementary schools (32 experimental schools and 32 control schools), randomly assigned from among schools that are not already implementing the six-trait writing approach and are willing to participate. Descriptive studies of treatment and control classrooms will be conducted to help interpret and understand the results.
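As a minimal sketch of school-level random assignment of the kind described above (the school labels, seed, and procedure are illustrative, not the study's actual protocol):

    # Cluster randomization sketch: assign 64 eligible schools to
    # treatment (32) or control (32) at the school level.
    import random

    schools = [f"school_{i:02d}" for i in range(64)]  # hypothetical school IDs
    random.seed(2009)  # fixed seed so the assignment is reproducible

    treatment = set(random.sample(schools, k=32))  # draw 32 without replacement
    control = [s for s in schools if s not in treatment]

    print(f"{len(treatment)} treatment schools, {len(control)} control schools")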
|
|
|
The Effect of Connected Mathematics Program 2 (CMP2) On the Math Achievement of Middle School Students in Selected Schools in the Mid-Atlantic Region
|
| Ed Coughlin, Metiri Group, ecoughlin@relmid-atlantic.org
|
| Kelli Millwood, Metiri Group, kelli.millwood@pearson.com
|
| Taylor Martin, University of Texas at Austin, taylormartin@mail.utexas.edu
|
|
This study will examine the efficacy of the Connected Mathematics Program 2 in 25 6th or 7th grade classrooms throughout the Mid-Atlantic region. Key research questions are:
1. Does 6th grade students' use of Connected Mathematics 2 as a comprehensive math curriculum cause higher student math achievement compared to students who use traditional curricula?
2. Does 6th grade students' use of Connected Mathematics 2 cause higher levels of engagement in doing mathematics compared to students who use a traditional curriculum?
The design for this study will be a randomized controlled trial (RCT) with a two-group comparison of students in classrooms using CMP2 to students in classrooms using traditional math curricula. The investigators for this study will enroll 50 math teachers across the Mid-Atlantic region at either the 6th or 7th grade level, depending upon the level of interest in schools in the region.
| |
|
Quality Teaching for English Learners (QTEL)
|
| Hans Bos, Berkeley Policy Associates, hans@bpacal.com
|
|
This study will evaluate professional development intended to equip secondary teachers to advance development of academic English fluency for English language learners (ELs). The intervention includes three years of professional development for teachers.
The key research questions include: (1) Does participation in QTEL result in changes in middle school teachers' pedagogical content knowledge, teaching expertise, attitudes about capacity to learn, and instructional practices? (2) Do the teachers change their practice to be aligned with the theoretical orientation and strategies demonstrated in the professional development sessions? and (3) Does QTEL improve language proficiency and achievement in English language arts?
This study involves 12,000 6th, 7th, and 8th grade EL students and 240 ELA and ESL teachers at 40 middle schools in one or more large Western urban districts. Within each selected district, 40 schools will be randomly assigned to treatment or control conditions, with approximately six teachers per school.
| |
|
Improving Adolescent Literacy Across the Curriculum in High Schools (Content Literacy Continuum, CLC)
|
| Matt Dawson, Learning Point Associates, matt.dawson@learningpt.org
|
| William Corrin, MDRC, william.corrin@mdrc.org
|
|
The CLC intervention is presented in the form of guidebooks that contain instructional protocols and support materials for teachers. A team of three to four professional developers works with all administrators and teachers in a high school on a sustained basis (three to five years) to implement comprehensive change in literacy instruction across the curriculum.
Research questions are:
To what extent does a literacy-across-the-curriculum intervention improve students' reading skills and other outcomes such as attendance; persistence in school; course-taking patterns; and performance on high-stakes, standards-based assessments?
What is the effect of a literacy-across-the-curriculum approach on literacy instruction (among both language arts teachers and teachers of other subjects)?
What factors promote or impede successful implementation of a literacy-across-the-curriculum approach?
The design will include a total of 40 high schools (20 treatment group and 20 in a control group) across 12 districts or consortia of districts and at least three states.
| |
|
Session Title: Evaluation Strategies Designed to Measure Program Context: Implications for Promoting Retention in After School Programs
|
|
Multipaper Session 853 to be held in Suwannee 14 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu
|
| Discussant(s): |
| Sae Lee, Harder + Company, slee@harderco.com
|
| Abstract:
Retaining youth in after school programs is a challenge, especially for programs serving middle school students. Recent national evaluations of afterschool programs suggest that middle school students participate less than one day per week (Kane, 2004). For students to achieve social and academic benefits from after school programs, they must demonstrate sustained participation. The purpose of this session is to discuss how evaluation could be used to promote and sustain youth participation in after school programs. Specifically, we will discuss how to (1) measure the way the program context interacts with students' characteristics, (2) identify and target students at risk for dropping out of programs before they leave, and (3) measure after school attendance more sensitively by tracking activity-level attendance as well as global attendance levels. Our discussion will include evaluation data from an ongoing external evaluation of After-School All-Stars, Los Angeles.
|
|
Comparing Measures of Dosage as a Method for Understanding Program Context
|
| Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu
|
| Kelly Murphy, Claremont Graduate University, kelly.murphy@cgu.edu
|
|
This paper will focus on measuring student dosage afterschool, often considered one of the most important measures of program context. We will compare global indicators of attendance (number of days attended overall) to activity-level attendance (the number of days attended by type of activity, such as enrichment or academic) to determine whether different measures of students' attendance differentially predict students' social and academic outcomes. It is possible that participation in similarly themed enrichment programs (e.g., sports, arts, academics, dance, etc.) might relate more closely to students' social and academic outcomes than global indicators of program attendance. If so, then evaluators should consider incorporating activity-level attendance, rather than just daily counts of program participation, into evaluation practices. We will also couch our analyses within traditional measures of dosage, such as duration, intensity, and breadth, similar to the classifications discussed by Chaput, Little, and Weiss (2004).
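To illustrate the kind of comparison described here (a sketch under stated assumptions, not the authors' actual analysis), the code below regresses a synthetic outcome first on a global attendance count and then on activity-level counts, and compares model fit; all data and variable names are hypothetical.

    # Sketch: compare global vs. activity-level dosage as predictors of an
    # outcome. Synthetic data; illustrative only.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    academic_days = rng.poisson(20, n)    # days attending academic activities
    enrichment_days = rng.poisson(15, n)  # days attending enrichment activities
    global_days = academic_days + enrichment_days
    outcome = 0.4 * academic_days + 0.1 * enrichment_days + rng.normal(0, 3, n)

    m1 = sm.OLS(outcome, sm.add_constant(global_days)).fit()  # global dosage only
    X2 = sm.add_constant(np.column_stack([academic_days, enrichment_days]))
    m2 = sm.OLS(outcome, X2).fit()  # activity-level dosage

    print(f"global R^2 = {m1.rsquared:.3f}, activity-level R^2 = {m2.rsquared:.3f}")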
|
|
Promoting Retention by Matching Activity Type to Students' Individual Characteristics
|
| Krista Collins, Claremont Graduate University, krista.collins@cgu.edu
|
|
Research has suggested that high attrition in afterschool programs may stem from the programs' inability to captivate students enough for them to return (Lauver, Little, & Weiss, 2004). Thus, it is important to examine whether increased retention results when program activities match students' personality characteristics (such as introversion or extroversion). Using the concept of "flow" (Csikszentmihályi, 1990) as a measure of engagement, we will examine the relationship between personality trait and activity type, as well as how these factors influence attendance rates within the After-School All-Stars program. Although this type of evaluation technique is not feasible or desired in all evaluation contexts, it provides a good example of how to measure youth participation more intensively than just through retrospective self-reports.
|
|
Self-Joined Versus Other-Joined Students: Identifying Students At-Risk for Dropping Out of After School Programs
|
| Katherine Byrd, Claremont Graduate University, katherine.byrd@cgu.edu
|
| Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu
|
|
As practitioners struggle to recruit and retain youth, evaluators can assist by identifying students who may be particularly at risk for dropping out of programs; this paper will describe an evaluation technique useful for that purpose. In a recent evaluation of After-School All-Stars, Los Angeles, we measured whether students joined because they wanted to (self-joined) or because of their parents, teachers or friends (other-joined). This paper will describe social outcomes (e.g., self-efficacy, feelings of autonomy, prosocial behavior) as well as the attendance patterns of these two groups of students. Based on evaluation results from 2007/08, self-joiners reported higher levels of program satisfaction, self-efficacy, and prosocial behavior than other-joiners (Berry et al., 2009). Incorporating students' motivations for initially participating may help programs identify students who may drop out prior to realizing the benefits of their participation.
|
|
Session Title: Impact Evaluation: Randomized Controlled Trial and Alternative Strategies in Schools
|
|
Multipaper Session 854 to be held in Suwannee 15 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org
|
| Discussant(s): |
| Susan Henderson, WestEd, shender@wested.org
|
|
Evaluating Impact: Use of Yin's Partial Comparisons Case Study Approach
|
| Presenter(s):
|
| Michael Trevisan, Washington State University, trevisan@wsu.edu
|
| Jennifer LeBeau, Washington State University, jlebeau@wsu.edu
|
| Abstract:
Since the passing of the No Child Left Behind Act in 2002, randomized controlled trials (RCTs) have maintained a prominent place in the debate over valid assessment of program impact. While proponents of RCTs argue that this methodology is the most valid means of assessing impact of educational programs, the idea that RCTs can be applied to all projects and programs seems unsatisfying and inadequate. One strategy proposed to assess impact of programs in which RCTs may not be useful is a case study approach that employs multiple partial comparisons rather than one overall design (Yin, 1995). The purpose of this paper is to describe the technique, to illustrate its use in the assessment of actual projects, and to discuss its strengths and limitations in light of the national push for documentation of impact and the determination of what works in social policy and programs.
|
|
The Evaluation of A Multilevel Analysis of Teacher-Student Racial and Ethnic Congruence on Student Mathematics Learning in the Context of a Randomized, Controlled Experiment
|
| Presenter(s):
|
| Antionette Stroter, University of Iowa, a-stroter@uiowa.edu
|
| Abstract:
There has been increased attention given to the underlying concerns of educational inequalities related to matters of race (Jost, Whitefield, & Jost, 2005). Using Hierarchical Linear Modeling (HLM), we evaluated a corpus of student learning gains gathered in the context of a successful, focused, randomized controlled experiment in middle school mathematics that contrasted a 3-week curricular unit using the SimCalc MathWorlds™ curriculum with a TexTeams control. Performance data from 92 7th grade mathematics teachers in several regions of Texas and 1,342 of their students were examined to investigate the effects of (1) teacher-student racial/ethnic congruence for this Hispanic and White sample, (2) Hispanic-Hispanic versus White-White racial/ethnic congruence, and (3) teacher race above and beyond student ethnicity (matched or unmatched). The study revealed evidence suggesting that embedded, complex differences between racial and ethnic groups have serious implications for comparisons and generalizability between and within these minority groups.
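For readers less familiar with HLM, a two-level model of the general form used in analyses like this can be written as follows; the notation and predictors shown are illustrative, not the author's exact specification.

    % Level 1 (student i within teacher/classroom j): learning gain
    y_{ij} = \beta_{0j} + \beta_{1j}\,\mathrm{congruence}_{ij} + e_{ij},
             \quad e_{ij} \sim N(0, \sigma^2)
    % Level 2 (teacher/classroom): treatment condition as a predictor
    \beta_{0j} = \gamma_{00} + \gamma_{01}\,\mathrm{treatment}_j + u_{0j}
    \beta_{1j} = \gamma_{10} + u_{1j}

Here \gamma_{10} captures the average congruence effect, while u_{0j} and u_{1j} allow intercepts and congruence effects to vary across classrooms.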
|
|
Evaluating Intervention Effect of a Reading Program for Low-Achieving Incarcerated Youth With Multi-level Growth Modeling
|
| Presenter(s):
|
| Jing Zhu, The Ohio State University, zhujingosu@gmail.com
|
| William Loadman, The Ohio State University, loadman.1@osu.edu
|
| Richard Lomax, The Ohio State University, lomax.24@osu.edu
|
| Raeal Moore, The Ohio State University, moore.1219@osu.edu
|
| Abstract:
Although there has been a substantial increase in rigorous evaluations of curricula, literacy instruction for adolescent struggling readers is one neglected area of investigation. This longitudinal study evaluated whether the Scholastic READ 180 program had a meaningful impact on the reading proficiency of low-achieving incarcerated youth in a large mid-western state, controlling for salient covariates. The study was based on an experimental design in which eligible youth were randomly assigned to either the READ 180 program or a comparison group receiving traditional English instruction. The investigation lasted two school years (2006-2008), during which subjects were measured with the Scholastic Reading Inventory before treatment and at the end of each of the eight terms. Multilevel growth modeling was applied to the longitudinal assessment data for program evaluation. Results indicated that subjects receiving READ 180 demonstrated more rapid reading growth over time.
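A multilevel growth model of the general kind described here treats repeated Scholastic Reading Inventory scores as nested within youth; the notation below is illustrative rather than the authors' exact model.

    % Level 1 (occasion t within youth i): linear growth in reading score
    y_{ti} = \pi_{0i} + \pi_{1i}\,\mathrm{time}_{ti} + e_{ti}
    % Level 2 (youth i): treatment assignment shifts intercept and slope
    \pi_{0i} = \beta_{00} + \beta_{01}\,\mathrm{READ180}_i + r_{0i}
    \pi_{1i} = \beta_{10} + \beta_{11}\,\mathrm{READ180}_i + r_{1i}

A positive \beta_{11} would correspond to the more rapid growth reported for READ 180 participants.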
|
| | |
|
Session Title: Rapid Response Methods for Real-time Evaluation
|
|
Panel Session 855 to be held in Suwannee 16 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Gale Berkowitz, David and Lucile Packard Foundation, gberkowitz@packard.org
|
| Abstract:
In evaluation practice, timing can be everything. To ensure their work gets used, evaluators that aim to support real-time learning and decision making must deliver the right data at the right time. While this kind of real-time evaluation sounds good in theory, it can be difficult to achieve successfully in practice. When quick decisions need to be made, methods must both allow for quick design, implementation, and analysis, and provide useful and trustworthy strategy-level data. Unfortunately, many conventional evaluation methods are neither responsive nor quick. This session will introduce a set of "rapid response methods" that emphasize timing, flexibility, and responsiveness. They have quick turnaround times and bring evaluation data, in accessible formats, to the table for reflection and use in decision making. The session's presenters will describe the rapid-response approaches they have developed, discovered, and used.
|
|
With These Methods, Time is On Our Side. Yes It Is!
|
| Julia Coffman, Harvard Family Research Project, jcoffman@evaluationexchange.org
|
|
This presentation will feature the results from recent research on rapid-response methods. It will start by grounding the discussion about rapid-response approaches in evaluation history that draws on the tenets of responsive, utilization-focused, and developmental evaluation, and recognizes the contributions of a family of approaches called rapid evaluation and assessment methods (REAM). The presentation will then describe new methods uncovered during this research that can be used to quickly gather trustworthy data that decision makers can use at critical points in time. These methods either can form the basis for an evaluation design, or can be used in combination with other more conventional evaluation methods.
|
|
|
What Do We Want? Data! When Do We Want It? Now!
|
| Ehren Reed, Innovation Network Inc, ereed@innonet.org
|
|
Rapid response methods and real-time approaches are particularly useful for advocacy efforts in which strategy constantly evolves without a predictable script. To make informed decisions, advocates need timely answers to the strategic questions they regularly face and evaluation can help fill that role. This presentation will present a suite of rapid-response methods being used in the advocacy evaluation field. Methods discussed will include media scorecards, system mapping, intense-period debriefs, and policymaker ratings.
| |
|
Time is the Enemy of Utilization
|
| Andy Rowe, ARCeconomics, andy.rowe@earthlink.net
|
|
Evaluators need techniques to quickly return information and useful insights, particularly early in an evaluation, when this can significantly increase the evaluator's social capital in the eyes of those who influence evaluation use. This presentation will discuss two rapid response techniques: 1) WURT (While U R There), a technique developed to obtain data on the organizational development elements of a major governance and urban environmental services program in India, and 2) web-based surveys that cut the time from design to reporting to two weeks. The presentation will discuss how rapid response techniques change the social contracts and power in an evaluation, encourage collaboration, and promote use. It also will discuss the challenges these techniques bring, such as why some methods (web surveys) flourish and others (WURT) founder, and how to deal with the precipitous modification of programs or strategies before evaluators are fully comfortable with their own advice.
| |
|
Session Title: Faculty-Staff Evaluation: Meeting Higher Education's Challenges
|
|
Multipaper Session 856 to be held in Suwannee 17 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Rick Axelson, University of Iowa, rick-axelson@uiowa.edu
|
| Discussant(s): |
| Shaunti Knauth, National-Louis University, shaunti.knauth@nl.edu
|
|
Evaluating Department Chair's Effectiveness Using Faculty Ratings as Formative Feedback
|
| Presenter(s):
|
| B Jan Middendorf, Kansas State University, jmiddend@ksu.edu
|
| Stephen Benton, The Idea Center, benton@theideacenter.org
|
| Abstract:
This paper examines the underlying dimensions of faculty perceptions of the academic chair's effectiveness through exploratory research analyzing faculty ratings from The IDEA Center's Faculty Perceptions of Department Head/Chair Survey (FPDHS). The study was conducted to determine whether perceptions from this instrument can be assessed validly and reliably to provide formative feedback on the chair's performance. This paper presents research findings from data collected on 9,125 faculty members' ratings of 604 department heads/chairs from 2003 to 2007. Ratings were collected using the FPDHS, a 70-item instrument containing 67 objectively worded items and three short-answer written-response items. Recommendations will be provided on how to use summary information from the FPDHS to conduct formative evaluations of the chair's effectiveness along several dimensions.
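Extracting "underlying dimensions" from rating data of this kind is commonly done with exploratory factor analysis; the sketch below, using synthetic ratings and hypothetical respondent and item counts, shows the basic mechanics rather than the study's actual analysis.

    # Sketch: exploratory factor analysis of faculty ratings to recover
    # latent dimensions of chair effectiveness. Synthetic data only.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    n_faculty, n_items = 500, 10  # hypothetical respondent and item counts
    latent = rng.normal(size=(n_faculty, 2))  # two "true" underlying dimensions
    loadings = rng.normal(size=(2, n_items))  # item loadings on each dimension
    ratings = latent @ loadings + rng.normal(0, 0.5, size=(n_faculty, n_items))

    fa = FactorAnalysis(n_components=2).fit(ratings)
    print(fa.components_.round(2))  # estimated loading of each item per factor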
|
|
An Evaluation of an Adjunct Faculty Performance Appraisal Program
|
| Presenter(s):
|
| Joshua Black, Indiana Wesleyan University, joshua.black@indwes.edu
|
| Jeannie Trudel, Indiana Wesleyan University, jeannie.trudel@indwes.edu
|
| Abstract:
This presentation discusses a completed evaluation of a private Midwestern university's off-site Adjunct Faculty Performance Appraisal Program for adult degree programs. According to Fitzpatrick, Sanders and Worthen (2004), evaluations are conducted to judge the worth, merit and value of programs and products. The evaluation utilized Stufflebeam's (2000) context, input, process, and product (CIPP) evaluation model to assess the adjunct faculty performance appraisal program. The evaluation addresses questions of adjunct faculty effectiveness in relation to institutional goals, as well as identifying areas for improvement.
|
|
A New Approach to Using Monitoring and Evaluation to Improve Undergraduate Teaching and Learning
|
| Presenter(s):
|
| Valerie Ruhe, University of British Columbia, valerie.ruhe@ubc.ca
|
| Chris Lovato, University of British Columbia, chris.lovato@ubc.ca
|
| Abstract:
Public universities routinely use student course evaluations to provide instructors with feedback on teaching performance; however, these data are rarely used to implement and monitor curricular improvements at the program level. This presentation will focus on a monitoring system used by the Evaluation Studies Unit, Faculty of Medicine, University of British Columbia, to evaluate the medical education curriculum. Student course evaluation data are used to identify areas of weakness and strength, and to draft recommendations. These recommendations are negotiated with faculty, who use them for program planning. The following year, a monitoring form is used to report the specific actions that were taken to implement the recommendations. We will discuss the approach and tools used to facilitate the process, as well as lessons learned in collaborating with stakeholders. Finally, we will explain how this new approach is applicable to a wide range of educational programs.
|
| | |
| Roundtable: Evaluation of the Healthy Maine Partnerships Initiative: A Systems-Level Approach |
|
Roundtable Presentation 858 to be held in Suwannee 20 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Presenter(s):
|
| Sarah Martin, Maine Center for Public Health, peanut_eval@yahoo.com
|
| Pamela Bruno MacDonald, Maine Center for Public Health, pbrunomac@earthlink.net
|
| Michelle Mitchell, Maine Center for Public Health, mici.mitchell@gmail.com
|
| Ruth Dufresne, Maine Center for Public Health, dufresne@maine.rr.com
|
| Melissa Furtado, Maine Center for Public Health, mfurtado@mcph.org
|
| Marco Andrade, Maine Center for Public Health, mandrade@mcph.org
|
| Abstract:
The Healthy Maine Partnerships (HMP) Initiative represents a progressive approach to public health - community coalitions, supported with funding and guidance from the state, act as a platform for delivering comprehensive health interventions at the local level, thereby reinforcing state efforts. Evaluation of the HMP Initiative requires consideration of 8 state agencies representing chronic disease prevention and coordinated school health, and 30 community coalitions differing in terms of geography, staffing, and populations served.
This presentation will highlight our systems-level evaluation, developed to explore barriers and successes in achieving process and outcome measures at both the state and community levels. It will detail the use of mixed methods, including surveillance data, electronic monitoring, key-informant interviews, web surveys of state and coalition staff, and telephone surveys for distinct settings (e.g., worksites). We will share challenges and lessons learned in evaluating common objectives across diverse settings, within a collaborative management structure.
|
| In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I: A Participatory Evaluation of a Banking Accessibility Pilot Project for People With Mental Health Issues |
|
Roundtable Presentation 859 to be held in Suwannee 21 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s):
|
| Elizabeth Whitmore, Carleton University, elizabeth_whitmore@carleton.ca
|
| Abstract:
This presentation highlights a participatory evaluation (PE) of a Banking Accessibility Pilot Project (BAPP), a community-driven partnership between the Canadian Mental Health Association and Canada Trust/Toronto Dominion Bank. The year-long pilot provided no-fee bank accounts and accompanying supports to 30 participants with 'severe mental illness'. The goals of the project included increasing access to bank accounts, educating consumers about using financial services to improve money management skills, and educating bank employees about mental illness and about adapting services to meet consumers' needs.
Three project participants were part of an evaluation team that conducted the project evaluation. This presentation will focus on the process of designing and implementing the evaluation and some of the benefits and challenges of doing it in this way. As the project evaluator, I will also critically examine the rationales for using a PE approach, based on follow up interviews with team members.
|
| Roundtable Rotation II: At the Touch of a Button: Evaluation of On-Demand Patient Education |
|
Roundtable Presentation 859 to be held in Suwannee 21 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s):
|
| Ryan Lee, Lakeland Regional Medical Center, ryan.lee@lrmc.com
|
| Gwen Rogerson, Lakeland Regional Medical Center, gwen.rogerson@lrmc.com
|
| Merlande Petit-Bois, University of South Florida, mpetitbo@mail.usf.edu
|
| Abstract:
This collaborative evaluation focuses on an On-Demand Patient Education television system and its success in a specific hospital in Lakeland, Florida. This formative evaluation involved the participation of various stakeholders for the successful completion of the on-demand television evaluation. Mixed methods were used for data collection, enabling the evaluators, clients, and stakeholders to get a more complete picture of what was happening in this particular hospital. This allowed us to evaluate the significance of the system for the nurses who encouraged their patients to use it for their own education regarding their particular disease states. Data-gathering methods such as surveys and focus groups allowed us to better understand the demands of the nurses, the benefits to the patients, and ultimately the value of on-demand education for hospitals.
|
|
Session Title: Evaluating Minority Programs From K-20: Perspectives From Directors and Administrators
|
|
Panel Session 860 to be held in Wekiwa 3 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Chair(s): |
| Faye Jones, Florida State University, fjones@admin.fsu.edu
|
| Discussant(s):
|
| Faye Jones, Florida State University, fjones@admin.fsu.edu
|
| Abstract:
The proposed panel represents educational institutions of varying types and levels and includes presenters from a Hispanic-Serving Institution (HSI), an extensive research university, a K-12 school system, and a Historically Black College and University. Each panelist will describe how minority programs are evaluated and how objectives are measured in a selected program they have directed. Presenters will also provide key elements and practices for evaluating minority programs and disclose vital methods and strategies for examining programs that serve underrepresented populations. Each presenter will also indicate the qualities they feel are needed by evaluators to collect valid and reliable data.
|
|
Evaluating the South East Alliance for Graduate Education and the Professoriate (SEAGEP) Program at the University of Florida
|
| Anne Donnelly, University of Florida, adonnelly@seagep.ufl.edu
|
|
The presenter will describe objectives, measures, and outcomes for an evaluation of the South East Alliance for Graduate Education and the Professoriate Program, which is a member of the NSF AGEP network. UF serves as the lead institution in partnership with Clemson and the University of South Carolina. SEAGEP is a comprehensive minority graduate-level program offering a variety of supports to U.S. citizen or permanent resident students who are pursuing degrees in science, engineering, or mathematics (SEM). Students are offered a variety of training experiences to prepare them for academic careers. At UF, 107 students in 28 SEM departments have been directly served through monetary awards to support their studies and research, travel to professional conferences, professional development seminars, mentoring, and peer support. To date, program participants have earned 31 Ph.D. and 11 master's degrees, and an additional 53 are currently enrolled and making progress towards their degrees. Minority graduate enrollments and degrees awarded in SEM departments have increased over the life of the grant. In addition, SEAGEP offers research experiences to minority undergraduate SEM students at 24 other Florida and South Carolina institutions to increase their interest in and preparedness for graduate school.
|
|
|
Evaluating the Title V program at Florida International University
|
| Consuelo Boronat, Florida International University, boronat@fiu.edu
|
|
Florida International University's Title V grant program aimed to increase the success of its largely Hispanic student population. A significant element of the grant effort was the development of summer bridge-term Student Learning Communities, which were studied using focus groups, student and faculty surveys and interviews, and longitudinal analyses. Students participating in the Learning Communities (LC) were compared to matched non-Learning Community students on academic success measures. The presenter will describe analyses that showed a significant LC retention benefit, which led to continued, post-grant support for this program.
| |
|
Evaluating the Science, Engineering, Communication, Mathematics Enhancement Program (SECME) at Miami-Dade County Public Schools
|
| Ava Rosales, Miami-Dade County Public Schools, arosales@dadeschools.net
|
|
SECME is a national strategic alliance to renew and strengthen the professional capacity of K-12 educators, motivate and mentor students, and empower parents so that all students can learn and achieve at higher levels. SECME encourages K-12 students to pursue careers in science, technology, engineering and mathematics through partnerships with local universities, government and industry agents. The presenter will describe components of evaluating a minority-based program that reaches more than 100 schools (K-12) and directly impacts over 2,000 students.
| |
|
Evaluating the Environmental Sciences Student Organization at Florida A&M University
|
| Jacqueline Hightower, Florida A&M University, jacqueline.hightower@famu.edu
|
|
The Environmental Sciences Student Organization (ESSO) at Florida A&M University is an affiliate chapter of the Ecological Society of America's (ESA) program, Strategies for Ecology Education, Development and Sustainability (SEEDS). The core SEEDS program components offer hands-on, engaging experiences with ecology that exhibit the relevance and applications of the science. Each experience also provides opportunities to interact with a diverse group of ecologists and other motivated students to both broaden and deepen students' understanding of ecology and potential careers.
Over the years, ESA has partnered on SEEDS with the United Negro College Fund, Historically Black Colleges and Universities, Tribal Colleges, the Institute of Ecosystem Studies, and others. With the goal of diversifying and advancing the profession of ecology, the SEEDS program provides a full spectrum of mentoring and learning opportunities to underrepresented undergraduate students. The panelist will describe core areas of evaluating the SEEDS program at a HBCU.
| |
|
Session Title: Models for Evaluating the Impact of School- and Community-Based Arts Programs
|
|
Multipaper Session 861 to be held in Wekiwa 4 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Evaluating the Arts and Culture TIG
|
| Chair(s): |
| Ching Ching Yap, Savannah College of Art and Design, ccyap@mailbox.sc.edu
|
|
Multi-Instrumentation in Assessing Multi-Lingual Learners: A Systematic Pre/Post Model in Arts-Based Interventions
|
| Presenter(s):
|
| Kimberly V Feilen, University of California Los Angeles, kvfeilen@ucla.edu
|
| Kylie A Peppler, Indiana University, kpeppler@indiana.edu
|
| James S Catterall, University of California Los Angeles, jamesc@gseis.ucla.edu
|
| Abstract:
Our two-year evaluation investigated a collaborative project between an inner-city middle school and a non-profit organization that focused on the arts as a medium to assist English Language Learners. Our investigation was conducted with 120 urban English Language Learners, ages 12-15 and 95.2% Latino. All measures followed a systematic pre- and post-test design with a two-strand focus: Arts Learning in Theatre and Visual Arts, and Academic English Language Acquisition. This study is one of the first to incorporate a variety of formative and summative instruments in the arts and English Language Development, and with older children and English Language Learners. Although previous studies establish a general link between particular arts disciplines and English Language Development, few demonstrate specific linkages between aspects of the visual and performing arts and key language constructs. Multi-instrumentation provides a model for measurable arts education and additional insight into arts interventions reaching academically at-risk English learners.
|
|
Thinking Outside the Frame: Conceptualizing the Impact of Funded Art Research
|
| Presenter(s):
|
| Michelle Picard-Aitken, Science-Metrix, m.picard-aitken@science-metrix.com
|
| Nicole Michaud, Social Sciences and Humanities Research Council of Canada, nicole.michaud@sshrc-crsh.gc.ca
|
| Frederic Bertrand, Science-Metrix, frederic.bertrand@science-metrix.com
|
| Courtney Amo, Social Sciences and Humanities Research Council of Canada, courtneyamo@hotmail.com
|
| Abstract:
Capturing the impacts of funded research is an ongoing challenge for granting councils and the evaluation community. A research study was completed to better conceptualize the impacts of projects situated at the intersection between academic research and artistic creation funded by the research/creation program at the Social Sciences and Humanities Research Council (Canada). Impact data extracted from this program's evaluation and additional sources were 1) systematically coded and analyzed through a modified grounded theory approach, 2) characterized based on the groups affected and the categories of impact, and 3) represented visually in a conceptualization/analytical framework showing the relationships between these groups and categories. This study provides a common understanding of the nature of research/creation impacts and will support future performance measurement activities. More generally, it is hoped that the framework will be discussed and used for the development of research impact assessment in the contemporary arts and design, and beyond.
|
|
Supporting Communities Through After-School Arts: Three Urban Case Studies
|
| Presenter(s):
|
| Gail Burnaford, Florida Atlantic University, burnafor@fau.edu
|
| Olga Vazquez, Florida Atlantic University, ovazquez@fau.edu
|
| Laura Tan, Chicago Arts Partnerships in Education, ltan@capeweb.org
|
| Abstract:
SCALE (Supporting Communities Through Arts Learning Environments) involved three Chicago schools after four years of implementation of an after-school program. The schools engaged elementary and middle school students in partnerships between teaching artists and classroom teachers, who co-planned and taught after school. The evaluation involved the collection and analysis of student work, teacher and artist focus group data, observations, and teacher blogs. Results indicated that teachers in the program applied strategies from the after-school program in their in-school classrooms. Students exhibited particular gains in social and emotional 'soft' skills, including listening, focus, concentration, collaboration and risk-taking, consistent with the literature on after-school programs.
|
| | |
|
Session Title: From Practice to Praxis: Integrating Evaluation Anthropology Theory and Method
|
|
Multipaper Session 862 to be held in Wekiwa 5 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Rodney Hopson, Duquesne University, hopson@duq.edu
|
| Discussant(s): |
| Michael Lieber, University of Illinois at Chicago, mdlieber@uic.edu
|
| Abstract:
Blending theory and methods of both disciplines, evaluation anthropology has emerged as a new subfield. Most anthropological evaluations focus on the application of ethnographic methods to the solution of evaluation problems. But although it is often implicit, anthropological evaluations are also inherently theory-driven: ethnographic methods are fundamentally based on social and culture theory. Evaluation anthropology is thus both theory-driven and data-driven. Convening four experienced evaluation anthropology scholars and practitioners, this session presents and analyzes two evaluation cases, one from the United States and the other from El Salvador, exploring their theory-method linkages. It argues for an evaluation anthropology praxis approach recognizing the intricate and unavoidable interdependence of theory, method and practice in this emerging subfield. The panel ends with a discussion of the challenges and value-added of a praxis model to both evaluation practice and scholarship.
|
|
Organizational Needs/Assets Mapping: Bringing Anthropological Systems Thinking Into Practice
|
| Eve Pinsker, University of Illinois at Chicago, epnsker@uic.edu
|
|
Current evaluation practice often finds the evaluator working collaboratively to shape project planning, both during start-up and ongoing formative evaluation. Anthropologically trained evaluators have the advantage of using systems concepts built into our ethnographic methods, opening their partners' eyes to contextual connections not initially obvious to them. This is particularly relevant in arenas where "developmental" evaluation is relevant, such as coalition-building projects requiring cross-organizational collaboration. The Organizational Needs/Assets Mapping questions summarized here were developed as a tool for use in contexts where organizations, not just individuals, are considering what they have to offer or need from a larger initiative. The anthropological theory built into the questions may be invisible to the users of the tool. However, in the current environment where "systems thinking" has become a catch phrase, it is important for anthropologists to become more explicit about the ways that systems concepts are built into our approaches.
|
|
The Entanglements of Process, Theory and Practice: Evaluating Asset-Based Development in El Salvador
|
| James Huff, Vanguard University, jhuff@vanguard.edu
|
|
Asset-based community development (ABCD) has growing appeal in the United States but has received relatively little attention in international development. This case study explicates the theoretical and methodological underpinnings of an anthropological process evaluation of a non-profit organization in El Salvador. The paper has three goals. First, it provides empirical ethnographic data on the interactions that occurred among local actors involved in identifying development projects in three different communities. Second, it draws on anthropological theory, for example Olivier de Sardan's notion of "entangled social logics" (2004), to analyze the processes by which varied stakeholders attempted to build consensus about which local development initiatives to implement. Finally, it presents how the entangled social logics approach can both strengthen and challenge the practice of evaluation anthropologists.
Olivier de Sardan, J. 2004. Anthropology and Development: Understanding Contemporary Social Change. London: Zed Books.
|
|
Session Title: Evaluation of Research, Technology, and Development (RT&D) to Illuminate Innovation
|
|
Multipaper Session 863 to be held in Wekiwa 6 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Shigeko Togashi, National Institute of Advanced Industrial Science and Technology, s-togashi@aist.go.jp
|
| Abstract:
In this session, we propose ways to measure RT&D outcomes that illuminate innovation at a national research institute (AIST) and a national funding organization (NEDO), both under METI. First, strategy formation is presented as an essential issue in the RT&D process for innovation. Next, efforts to establish a useful evaluation system at AIST are introduced, addressing four issues: 1) efficient and practical evaluation with appropriate intervals and self-evaluation; 2) novel evaluation indexes to capture the diversity of RT&D and human resource cultivation; 3) an appropriate scheme for forming and selecting reviewers for diversity and excellence; and 4) reliable feedback actions to reorganize the RT&D agenda. Finally, NEDO's follow-up monitoring is described, which tracks the status of project participants and their ex-post activities, including further RT&D and commercialization, for five years after projects end.
|
|
Strategy and Evaluation in Research, Technology and Development (RT&D) for Innovation
|
| Naoto Kobayashi, National Institute of Advanced Industrial Science and Technology, naoto.kobayashi@aist.go.jp
|
| Osamu Nakamura, Science and Technology Promotion Bureau, Nagasaki Prefectural Government, osamu.nakamura@pref.nagasaki.lg.jp
|
| Kenta Ooi, National Institute of Advanced Industrial Science and Technology, k-ooi@aist.go.jp
|
|
Strategy formation is an essential issue in the process of RT&D for innovation. The strategy should reflect global, social, national, and technological points of view. In AIST, RT&D programs and projects are planned, implemented, and evaluated along this strategy. The evaluation should include (1) foresight, (2) process and progress, (3) output and outcome, and (4) the cultivation of young human resources. The evaluation of the strategy itself (meta-evaluation) should, in turn, be executed by checking the gap between the strategy's scenario and the real world. If a large gap remains between them after a fixed period (5-10 years), the strategy must be reformulated. We have named this cycle the SDCS (Strategy-Do-Check-Strategy) cycle [1], and the combined SDCS-PDCA cycle may be an effective tool for realizing RT&D programs originating from the research institute.
[1] O. Nakamura et al., New Directions for Evaluation, 118, pp. 25-36 (2008).
|
|
Systematic Evaluation to Recognize Outcomes of the National Institute of Advanced Industrial Science and Technology (AIST) in Society
|
| Kenta Ooi, National Institute of Advanced Industrial Science and Technology, k-ooi@aist.go.jp
|
| Osamu Nakamura, Science and Technology Promotion Bureau, Nagasaki Prefectural Government, osamu.nakamura@pref.nagasaki.lg.jp
|
| Shuichi Oka, National Institute of Advanced Industrial Science and Technology, s-oka@aist.go.jp
|
| Koji Masuda, National Institute of Advanced Industrial Science and Technology, koji.masuda@aist.go.jp
|
| Shigeko Togashi, National Institute of Advanced Industrial Science and Technology, s-togashi@aist.go.jp
|
| Naomasa Nakajima, National Institute of Advanced Industrial Science and Technology, n-nakajima@aist.go.jp
|
|
With the introduction of an outcome-oriented strategic evaluation system specifically designed to foresee and monitor RT&D outcomes, research units in AIST have come to have a clear strategic agenda for formulating RT&D scenarios toward outcomes that contribute to AIST's mission. The present system, however, needs systematic modification to recognize how AIST has been contributing to the development of a sustainable society. We have been analyzing the issues a useful evaluation system must incorporate: 1) efficient and practical evaluation with appropriate intervals and self-evaluation; 2) novel evaluation indexes to capture the diversity of RT&D and human resource cultivation; 3) an appropriate scheme for forming and selecting reviewers for diversity and excellence; and 4) reliable feedback actions to reorganize the RT&D agenda and maximize AIST's total achievements. Systematic evaluation for impact assessment must be constructed so that AIST can act as a real innovation hub in society.
|
|
Utilization of Follow-up Monitoring Results of Projects for Accountability and Research, Technology and Development (RT&D) Management at New Energy and Industrial Technology Development Organization (NEDO)
|
| Jun-ichi Yoshida, New Energy and Industrial Technology Development Organization, yoshidajni@nedo.go.jp
|
| Tsutomu Kitagawa, New Energy and Industrial Technology Development Organization, kitagawattm@nedo.go.jp
|
| Takahisa Yano, New Energy and Industrial Technology Development Organization, yanotkh@nedo.go.jp
|
|
The New Energy and Industrial Technology Development Organization (NEDO) is Japan's largest public RT&D management organization, promoting a wide range of industrial technologies. To ensure social accountability and to provide feedback for improving RT&D management, it is very important for funding agencies such as NEDO to monitor the post-project activities of the private companies that participated in past projects as they move toward practical application of RT&D achievements. We therefore conducted follow-up monitoring to track the status of project participants and their ex-post activities, including further RT&D and commercialization, for five years after project completion. This study discusses analyses of the latest project evaluation results and follow-up monitoring data, including indicators of short-term outcomes.
|
|
Session Title: Improving Government Performance Management
|
|
Panel Session 864 to be held in Wekiwa 7 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Henry Doan, United States Department of Agriculture, hdoan@csrees.usda.gov
|
| Abstract:
Government performance management is the focus of the Administration's efforts to demonstrate accountability and effectiveness to the public. To prepare recommendations for improving government performance management for the New Administration during the transition period, it is important to review Federal agencies' experiences with PART, the PMA, BPIR, and related initiatives. Panelists will discuss how this NAPA project progressed from conceptualization, through data collection and analysis, to suggestions for improving performance management for the New Administration. Panel discussions will also cover interviews with OMB examiners and their agency counterparts, conducted to glean information and recommendations from "the trenches" about PART: its beneficial effects and downsides, its implementation issues, and its limits in measuring program effectiveness and efficiency. Suggestions from interviews with the PIO Council and selected agency PIOs for improving government performance management will also be discussed, along with reviews of state and local best practices, including Washington, Oregon, and Maryland (CityStat, StateStat), their common themes, and suggestions from national experts on improving performance management.
|
|
Government Performance Management Improvement: The Project
|
| Kathryn E Newcomer, George Washington University, newcomer@gwu.edu
|
|
As chair of the NAPA panel on government performance improvement, Kathy will describe how this project, her brainchild, was conceptualized, planned, organized, and implemented, and how recommendations were made to the New Administration to help guide and improve government performance management.
|
|
|
Government Performance Management Improvement Project: The Process
|
| Mara Patermaster, National Academy of Public Administration, mpatermaster@napawash.org
|
|
As the project manager, Mara will share details of project implementation, her experience with the project, and its progress over the course of the year.
| |
|
Government Performance Improvement: Reviewing Best Practices
|
| Henry Doan, United States Department of Agriculture, hdoan@csrees.usda.gov
|
|
The presenter will discuss how state and local government best practices were reviewed and analyzed for inclusion in recommendations for improving future government performance management.
| |
|
Session Title: Employing Arts-based Strategies in Qualitative Evaluation
|
|
Multipaper Session 865 to be held in Wekiwa 8 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Qualitative Methods TIG
|
| Chair(s): |
| Jennifer Jewiss, University of Vermont, jennifer.jewiss@uvm.edu
|
| Discussant(s): |
| Jennifer Jewiss, University of Vermont, jennifer.jewiss@uvm.edu
|
|
Capturing Realities of an After School Program Through the Use of Photovoice
|
| Presenter(s):
|
| Ann G Bessell, University of Miami, agbessell@miami.edu
|
| Valentina Kloosterman, University of Miami, vkloosterman@miami.edu
|
| Sylvia Gutierrez, University of Miami, sgutierrez@miami.edu
|
| Shanika Watson, University of Miami, s.watson@miami.edu
|
| Abstract:
This session focuses on the use of Photovoice, an innovative process in which cameras are handed over to participants to document personal experiences. We implemented Photovoice as part of a mixed-method evaluation of an after-school program. This session describes the step-by-step process of implementing Photovoice and presents findings from its use with fifth-grade students participating in an after-school program at two public elementary schools. Findings suggest that Photovoice facilitated participants' articulation of thoughts and emotions by speaking through photographs, encouraged self-reflection on the act of recording and selecting photographs, and promoted critical knowledge and dialogue among group members through discussion of selected photographs representing their after-school program. In our study, all responses were audio-taped, transcribed, and thematically coded. Possible uses of this innovative tool for evaluation and research will also be discussed with the audience.
|
|
Utilizing Arts-Based Practices in the Field of Evaluation: A Convergence of Ideas in Five Acts
|
| Presenter(s):
|
| Michelle Searle, Queen's University, michellesearle@yahoo.com
|
| Abstract:
Arts-based research extends the qualitative paradigm already operating within the field of evaluation, offering ways to broaden traditional qualitative and hybrid forms of data collection, analysis, and representation (Knowles & Cole, 2008). Leavy (2009) identifies similarities between artists and researchers: 'holistic, dynamic, involving reflection, description, problem formation and solving, the ambiguity to identify and explain intuition and creativity in the research process' (p. 10). This description could easily encompass the work of evaluators. Barone (2001) provides such an example by inviting readers interested in arts-based inquiry to 'opt for an epistemology of ambiguity that seeks out and celebrates meanings that are partial, tentative, incomplete, sometimes even contradictory, and originating from multiple perspectives' (pp. 152-153). This paper illustrates that evaluation, research, and arts-based practices share overlapping processes and goals, including the need for creativity, innovation, flexibility, and responsiveness, and a willingness to work with/for, as well as create with/for, diverse audiences.
|
|
Using Photovoice Methodology in a Participatory Evaluation of Home Visitation Programs
|
| Presenter(s):
|
| Britteny Howell, Cincinnati Children's Hospital Medical Center, britteny.howell@cchmc.org
|
| Lisa Vaughn, Cincinnati Children's Hospital Medical Center, lisa.vaughn@cchmc.org
|
| Janet Forbes, Cincinnati Children's Hospital Medical Center, janet.forbes@cchmc.org
|
| Abstract:
The purpose of this study was to conduct a pilot participatory evaluation of Every Child Succeeds (ECS), a well-established home visitation program in Cincinnati and Northern Kentucky, using Photovoice methodology. The existing evaluation and research on ECS offer quantitative data useful for identifying outcomes related to the success of ECS programming. However, it has become evident that barriers keep families from benefiting optimally from the service as provided. Through an in-depth, qualitative, participatory action evaluation of mothers' lived experience with ECS using Photovoice, we can inform engagement and retention within such programs and thereby enhance educational and social services programming. Qualitative data gathered with Photovoice can be used in conjunction with the formal program evaluation to better illustrate the program's impact on the community and on participants.
|
| | |
|
Session Title: How Traditional Evaluation Thinking and Frameworks can be Adapted for Advocacy/Policy Evaluation
|
|
Multipaper Session 866 to be held in Wekiwa 9 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Jacqueline Williams Kaye, Atlantic Philanthropies, j.williamskaye@atlanticphilanthropies.org
|
|
Where's the Fit? Applying the Centers for Disease Control and Prevention's (CDC's) Framework for Program Evaluation to Policy Evaluation
|
| Presenter(s):
|
| Susan Ladd, Centers for Disease Control and Prevention, sladd@cdc.gov
|
| Diane Dunet, Centers for Disease Control and Prevention, ddunet@cdc.gov
|
| Eileen Chappelle, Centers for Disease Control and Prevention, echappelle@cdc.gov
|
| Lauren Gase, Centers for Disease Control and Prevention, lgase@cdc.gov
|
| Abstract:
There is a growing literature on the evaluation of policy advocacy, but less emphasis on the evaluation of policy implementation. In 2008, the Centers for Disease Control and Prevention (CDC) initiated a project to evaluate the implementation of a legislative policy enacted in several states. From this project, CDC wished to develop a framework for evaluating policy implementation. A natural starting place was the CDC Framework for Program Evaluation, which lays out six steps for program evaluation. This presentation will describe the fit of the CDC framework to policy evaluation, including what adaptations would be useful when evaluating policy implementation. Evaluation of policy implementation differs from program evaluation in several key areas, including the intensity of support and opposition, which may be greater for policies than for programs, and the extent to which policy enactment and implementation are driven by contextual factors. As an example, the framework will be applied to recent primary stroke center legislation.
|
|
Evaluating Policy Advocacy: Employing Systems Change Outcomes to Evaluate Community and State Environmental Policies to Reduce Asthma Disparities
|
| Presenter(s):
|
| Mary Kreger, University of California San Francisco, mary.kreger@ucsf.edu
|
| Claire Brindis, University of California San Francisco, mary.kreger@ucsf.edu
|
| Simran Sabherwal, University of California San Francisco, mary.kreger@ucsf.edu
|
| Katherine Sargent, University of California San Francisco, katherine.sargernt@ucsf.edu
|
| Annalisa Robles, The California Endowment, arobles@calendow.org
|
| Mona Jhawar, The California Endowment, mjhawar@calendow.org
|
| Marion Standish, The California Endowment, mstandish@calendow.org
|
| Abstract:
Outcome measures are presented for policy advocacy, collaboration among interdisciplinary stakeholders, and media usage. The primary topics are housing, schools, and outdoor air.
Policy advocacy and systems change concepts are discussed as they relate to structural changes across multiple sectors of communities, with outcomes examined at the individual, organizational, interorganizational, policy, and systems levels. A typology of collaboration is presented that assesses successes and cross-pollination opportunities. Examples of leveraging resources to create sustainable policies are included.
Community coalitions in California have pursued multiple policy outcomes, ranging from organizational procedures to redesigned policies for slum housing and transportation routes, many with substantial success. Mature and mid-stage coalitions gained traction more quickly than younger coalitions; strategies for sharing outcomes enabled younger coalitions to improve their success rates and mature coalitions to diversify their approaches. Suggestions for maximizing the effectiveness of policy advocacy for place-based programs and initiatives are discussed.
|
| |
|
Session Title: Multi-Method Evaluation of a Comprehensive Community-Based Initiative: The National Weed and Seed Strategy
|
|
Panel Session 867 to be held in Wekiwa 10 on Saturday, Nov 14, 1:40 PM to 3:10 PM
|
|
Sponsored by the Crime and Justice TIG
|
| Chair(s): |
| James Trudeau, RTI International, trudeau@rti.org
|
| Discussant(s):
|
| Denise Viera, United States Department of Justice, denise.viera@usdoj.gov
|
| Abstract:
The National Weed and Seed (W&S) Strategy has been implemented in hundreds of communities over the past 15 years and represents the nation's premier effort at integrating crime prevention and community development. The Strategy includes key components of law enforcement, community policing, prevention/intervention/treatment, and neighborhood revitalization; core principles of collaboration, coordination, community participation, and leveraged resources; and the critical role of U.S. Attorneys.
This panel describes the Strategy and cross-site evaluation and presents findings on implementation, outcomes, and linkages between implementation and outcomes. The evaluation provided an overview across the national W&S Initiative using GPRA data from 250+ grantees, Census data, and web-based stakeholder surveys. The evaluation studied 13 sites in detail using surveys of target and comparison community residents; site visits; document review; and data on local business activity. Quantitative analyses (longitudinal growth models and multi-level regression models) are augmented by qualitative synthesis of case studies.
|
|
Evaluation of the National Weed and Seed Strategy: Introduction and Overview
|
| James Trudeau, RTI International, trudeau@rti.org
|
|
This presentation describes the National Weed and Seed (W&S) Strategy and the cross-site evaluation. The W&S Strategy includes key components of law enforcement, community policing, prevention/intervention/treatment, and neighborhood restoration; core principles of collaboration, coordination, community participation, and leveraged resources; and the critical role of U.S. Attorneys.
The evaluation integrated process and outcome components to explore linkages between local W&S implementation and outcomes. For all sites the evaluation formulated a broad overview across the national W&S Initiative, using GPRA data from 250+ grantees, Census data, and web-based stakeholder surveys including social network analysis. In 13 Sentinel Sites the evaluation derived additional information from surveys of target and comparison community residents; site visits; document review; and data on local business activity. Analyses included longitudinal growth models, multi-level regression models, and social network analysis. The evaluation was designed to evaluate the national W&S Strategy, rather than simply a collection of individual W&S sites.
|
|
|
Evaluation of the National Weed and Seed Strategy: Resident Survey Findings
|
| James Trudeau, RTI International, trudeau@rti.org
|
| Jon Blitstein, RTI International, jblitstein@rti.org
|
| Karen Morgan, RTI International, kcmorgan@rti.org
|
| Jan Roehl, JRC Consulting, janroehl@redshift.com
|
|
This presentation describes findings from a resident survey conducted in 13 Weed and Seed target areas and matched comparison areas. The presentation describes methods used to sample sites, identify matched comparison areas, and sample households using GIS. Survey findings address resident perceptions of the neighborhood; victimization (e.g. robbery, burglary, violence); perceptions of police; perceptions of city programs, job opportunities, and housing; attitudes toward the neighborhood; participation in programs; and awareness and perceptions of W&S. Survey data are augmented with crime data, census data, and data on employment, businesses, and housing. Multi-level regression models are used to compare outcomes in W&S target areas and matched comparison areas. Observed differences are explained using information on grantee and community characteristics; problems targeted; specific approaches in law enforcement, community policing, prevention/intervention/treatment, and neighborhood revitalization; functioning of the local W&S initiative; and degree and effectiveness of interaction among stakeholders.
| |
|
Evaluation of the National Weed and Seed Strategy: Stakeholder Survey Findings
|
| Phillip Graham, RTI International, pgraham@rti.org
|
| James Trudeau, RTI International, trudeau@rti.org
|
| Kelle Barrick, RTI International, kbarrick@rti.org
|
|
This presentation describes findings from a web-based survey of W&S stakeholders conducted in more than 130 sites with more than 1,000 stakeholders. The many approaches used under the W&S Strategy presented the challenge of capturing detailed information on numerous topics while keeping respondent burden manageable. In a modular approach, each stakeholder received one module specific to his or her role or domain (site coordinator, law enforcement, community policing, prevention/intervention/treatment, or neighborhood revitalization) and two additional modules, each addressing one of the following topics: target area problems, local initiative focus, local initiative functioning, and stakeholder collaboration. Findings reflect the breadth and variety of local W&S initiatives; agreement and disagreement among different types of stakeholders; alignment between target area problems and initiative focus; and alignment between perceived focus and reported activities. Stakeholder survey data are also used to explain outcomes observed in other data (e.g., crime data, resident survey data).
| |
|
Evaluation of the National Weed and Seed Strategy: Synthesis of Case Studies
|
| Jan Roehl, JRC Consulting, janroehl@redshift.com
|
| James Trudeau, RTI International, trudeau@rti.org
|
|
This presentation describes a synthesis of findings from case studies of 13 Weed and Seed grantees selected for in-depth study. Source materials for case studies included interviews with grantee staff and key stakeholders; observations of grantee activities and tours of target areas; grant applications, progress reports, local evaluations, and other materials; and findings from other cross-site evaluation components (e.g. resident survey, stakeholder survey, crime data).
Cross-site findings from the qualitative case studies describe key site characteristics, activities, accomplishments, and challenges of the 13 Sentinel Sites, with special attention paid to collaboration and coordination, prior and new relationships among key actors, community reaction and involvement, committee functioning, the special role of the U.S. Attorney and other federal agencies, implementation successes and failures, subjective impact of Weed and Seed strategies, and sustainability. The cross-site synthesis describes the common and novel aspects of these processes and how they affected program functioning and success.
| |