|
Session Title: Evaluating Web-Based Learning Support Tools
|
|
Multipaper Session 875 to be held in Panzacola Section F1 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Distance Ed. & Other Educational Technologies TIG
|
| Chair(s): |
| Michael Porter, College Center for Library Automation, mporter@cclaflorida.org
|
|
An Evaluation of Wiki as a Tool for Building Communal Constructivism in a Graduate-Level Course
|
| Presenter(s):
|
| Kathleen D Kelsey, Oklahoma State University, kathleen.kelsey@okstate.edu
|
| Hong Lin, Oklahoma State University, hong.lin@okstate.edu
|
| Tanya Franke, Oklahoma State University, tanya.franke@okstate.edu
|
| Abstract:
Wikis have been praised as tools that enhance learning and collaborative writing within educational environments and move learners toward a state of communal constructivism (Holmes et al., 2001). Many pedagogical claims exist regarding the benefits of using wikis. These claims, however, have rarely been tested empirically. This study used a three-year longitudinal cohort survey design (Creswell, 2008) to test the pedagogical claims of wikis, including the theory of communal constructivism, when a wiki was implemented as a writing tool to create an online textbook in a graduate-level course. Holmes et al.'s (2001) assertions were not substantiated by our findings. The overall survey mean was 2.33 on a four-point scale, indicating learners were not sure whether the wiki writing experience impacted their knowledge construction or critical thinking skills. Instructors must encourage and reward students for collaborating and stress learner responsibility when using wikis as a collaborative writing tool.
|
|
Evaluating Learners and Building Evaluation Capacity in an Online Community Learning Model
|
| Presenter(s):
|
| Cindy Beckett, Independent Consultant, cbevaluate@aol.com
|
| Abstract:
Evaluating learners in an online environment where a community learning model is implemented through peer and professional mentor interaction provides unique evaluation opportunities. In this format, evaluating the process and progress of learners can be challenging. In some cases, the degree to which these elements can be evaluated depends on information gleaned from site activity and on the evaluation capacity built in, for example the ability to collect data on the community website. Challenges arise when evaluators are asked to evaluate web-based programs that did not consider an evaluation approach or tools, and so lack capacity built into their online programs from the outset. This example of an evaluation of a non-profit organization illustrates the benefits and challenges of evaluating web-based learning in this context and provides some solutions and insight into evaluating distance learning in a community learning model.
|
|
Evaluating Supplemental Educational Services: A Randomized Controlled Trial
|
| Presenter(s):
|
| S Marshall Perry, Dowling College, smperry@gmail.com
|
| Abstract:
The paper concerns an evaluation of an online, individualized supplemental educational services program aimed at improving middle school reading performance for students who are below grade level. The diverse study sample consisted of nearly 400 students from 15 schools in three states. Study measures included two standardized assessments and a student survey conducted three times over a school year. The paper examines a randomized controlled trial to determine the relationship between student involvement in the program and changes in academic achievement and academic attitudes and behaviors. By the mid-tests, the treatment group significantly outperformed control group students by nearly three-quarters of a grade level. Students with lower pre-program achievement tended to experience greater growth. The treatment and control groups did not always differ significantly in attitudinal and behavioral measures, but changes in these measures were correlated with academic growth. The paper also highlights methodological and logistical challenges and implications for future evaluation research.
|
|
Evaluation of the Multi-Phase Release of Florida's Community College Library-Resource Website: A Mixed Methods Approach
|
| Presenter(s):
|
| Michael Porter, College Center for Library Automation, mporter@cclaflorida.org
|
| Dawn Aguero, College Center for Library Automation, daguero@cclaflorida.org
|
| Barry Harvey, College Center for Library Automation, bharvey@cclaflorida.org
|
| Aimee Reist, College Center for Library Automation, areist@cclaflorida.org
|
| Abstract:
The web-based Library Information Network for Community Colleges (LINCCWeb) is the library-resource search tool used by nearly 1,000,000 students, faculty, and staff at 80 libraries of Florida's 28 community and state colleges. In 2008, LINCCWeb version 2.0 was released to library staff while still in development ('beta') by Florida's College Center for Library Automation (CCLA). A second beta release for staff, faculty, and students occurred in early 2009. During these periods, staff critiqued the website via online forums and the CCLA help desk. Students and faculty participated in focus groups and user test analysis, and also had the opportunity to complete a LINCCWeb version 2.0 user satisfaction survey. A small team of CCLA staff analyzed all of the qualitative and quantitative data, identifying important trends and themes for CCLA's web developers. This paper presents the analysis approach of this team and discusses the lessons learned for an extensive website evaluation.
|
|
Session Title: The Organizational Context of Human Services Evaluation: Capacity, Data Collection, and Utilization
|
|
Multipaper Session 876 to be held in Panzacola Section F2 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Human Services Evaluation TIG
|
| Chair(s): |
| Michel Lahti, University of Southern Maine, mlahti@usm.maine.edu
|
| Discussant(s): |
| James Sass, Independent Consultant, jimsass@earthlink.net
|
|
Assessing the Capacity of Non-profit Community-Based Organizations to Conduct Program Evaluation
|
| Presenter(s):
|
| Neil Vincent, DePaul University, nvincen2@depaul.edu
|
| Reginald Richardson, Northwestern University, r-richardson2@northwestern.edu
|
| Abstract:
Community-based non-profit organizations (CBOs) now operate with the expectation that they measure program effectiveness. However, little is known about the capacity of CBOs to conduct program evaluation. This paper presents a mixed-methods study that explores how CBOs from a large metropolitan area conceptualize and implement program evaluation efforts, as well as the barriers they face and the resources needed to improve those efforts. It presents results from survey data collected on 134 CBOs and in-depth interviews with staff from 15 of these organizations. Implications for program evaluators who consult with CBOs are presented.
|
|
Helping Human Services Programs Succeed: Challenges for the Internal Evaluator
|
| Presenter(s):
|
| Robbie Brunger, Ounce of Prevention Fund of Florida, rbrunger@ounce.org
|
| Abstract:
The classic critique of internal evaluators is that they know more and care more about programs than an external evaluator would; this situation presents them with two special challenges. The experiences of a funding agency for local human services programs in Florida suggest that the first challenge is to improve the likelihood that those programs will be successful. That process begins with the program's design and approach to data collection, and it continues with the need to monitor the results and supply program staff with information they can use for program improvement. The second challenge occurs when writing an evaluation report, for it is necessary to avoid the perception of 'carrying water' for the program. Best practices that can help ensure a high degree of credibility for reports include an organizational structure that supports independent inquiry and a systematic documentation process to substantiate all statements of fact and conclusions.
|
|
Challenges and Successes! Working With an Entire County to Collect Outcomes Evaluation Data From Human Services Programs
|
| Presenter(s):
|
| Peggy Polinsky, Parents Anonymous® Inc, ppolinsky@parentsanonymous.org
|
| Abstract:
The experience of conducting evaluation activities in multiple sites across a large geographical area and with multiple human services program types, including anger management, adoption services, counseling, home visitation, and parenting, has created an evaluation approach that must take 'context' into account on many levels, yet ensure the evaluation activities are consistent across all sites. Parents Anonymous® Inc. is in its fourth year of working with the Riverside County, California, Department of Public Social Services (DPSS) to assure mandated provider evaluation data collection and submission, as required by CAPIT and PSSF funding. This presentation will document the challenges and successes of setting up a web-based evaluation system with geographically distanced providers from multiple human services program types, helping the providers understand the necessity and value of evaluation data, working with providers and DPSS to determine appropriate outcome measures, and holding frequent discussions with DPSS and providers regarding data issues and interpretations.
|
|
Mandated Data Collection as Catalyst for Program Learning
|
| Presenter(s):
|
| Lois Thiessen Love, Uhlich Children's Advantage Network, lovel@ucanchicago.org
|
| Abstract:
The extensive demands on human service organizations to produce data for funding, regulation, and accreditation requirements are often viewed as program interruptions, not as aids to program learning and improvement. The evaluator's challenge is to help programs balance these demands with opportunities for program learning and improvement. A survey of human service evaluators will provide case examples of the relative success of evaluator strategies for making productive use of mandated data collection. Using these case examples and a systems analytic framework, this presentation will propose contextual factors that support program learning from mandated data collection, and practical strategies for the human service evaluator in working with programs and organizations.
|
| | | |
|
Session Title: Evaluating Complex Multi-Site Community-Based Interventions: The Controlling Asthma in American Cities Project
|
|
Multipaper Session 877 to be held in Panzacola Section F3 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Chair(s): |
| Maureen Wilce, Centers for Disease Control and Prevention, mwilce@cdc.gov
|
| Abstract:
In 2001, the Centers for Disease Control and Prevention funded 7 community-based coalitions to develop, implement and evaluate comprehensive, culturally appropriate asthma programs. Those programs targeted children (0-18 years) in inner-city areas with high rates of asthma prevalence and morbidity and with documented health disparities. Evaluation of this Controlling Asthma in American Cities (CAAC) project was guided by a program theory that used an ecological model of behavior change, encompassing several levels of influence on health behavior: intrapersonal, interpersonal, institutional, community and policy. This session introduces the program theory for this project and demonstrates the multifaceted approach to its evaluation. Individual papers address evaluation questions at different points in the model: one describes a participatory method for capturing changes in the local political/social/cultural environments, the second addresses challenges in evaluating an intervention that is implemented differently across sites, and the third reports a low-cost methodology for measuring population-level change.
|
|
Evaluating the Added Value of Implementing Complex Projects Through Community-Based Coalitions
|
| Elizabeth Herman, Centers for Disease Control and Prevention, ehh9@cdc.gov
|
|
The terms of the Controlling Asthma in American Cities Project required the 7 participating inner-city sites to identify or develop community-based coalitions through which to plan and implement a comprehensive community asthma plan. A participatory process was developed to evaluate the added value of implementing the work through coalitions, focusing on changes at the community, institutional and policy levels. That participatory process involved the collaborative development of definitions and terms, agreement upon a common theoretical model of how the coalitions operated, classification of outcomes into different categories of "added value", the development of inclusion and exclusion criteria, as well as a process of negotiation and review. This presentation reviews the steps of this collaborative process, describes the points of difficulty or disagreement, reports the outcomes of the process, and makes recommendations for improving the process in future projects.
|
|
A Cross-site Presentation of Key Program Variables and Process Indicators Among Family and Home Asthma Services Provided By the Controlling Asthma in American Cities Projects
|
| Amanda Savage Brown, Centers for Disease Control and Prevention, abrown2@cdc.gov
|
| Sheri Disler, Centers for Disease Control and Prevention, sdisler@cdc.gov
|
|
The Centers for Disease Control and Prevention (CDC) developed a seven-site cooperative agreement program, the Controlling Asthma in American Cities Project (CAAC), whose primary goal was the development of innovative, effective community-based interventions impacting asthma control community-wide. All CAAC sites found a need to deliver family and home asthma services (FHAS), which were multi-component (e.g., asthma self-management, social services, or coordinated care), multi-trigger environmental interventions. Although specific evaluation measures were not prescribed, CDC assisted each site in developing tools (i.e., indicator grids) to track each intervention's annual progress toward accomplishing five-year targets. Information specific to FHAS was compiled from the grids, and a CDC-developed cross-site survey, administered during the sites' final year of implementation, gathered additional information about program management, content, and delivery. This paper synthesizes key program variables and process indicators of six CAAC FHAS interventions for consideration by others planning to implement similar activities.
|
|
Use of Pharmacy Prescription Fill Data to Evaluate the Impact of a Community-Wide Asthma Project
|
| Amanda Savage Brown, Centers for Disease Control and Prevention, abrown2@cdc.gov
|
| Victoria Persky, University of Illinois at Chicago, vwpersky@uic.edu
|
| Steven Q Davis, University of Chicago, sqdavis@gmail.com
|
| Jerry A Krishnan, University of Chicago, jkrishna@medicine.bsd.uchicago.edu
|
| Kwan Lee, Walgreens Health Initiatives, kwan.lee@walgreens.com
|
| Edward T Naureckas, University of Chicago, tnaureka@medicine.bsd.uchicago.edu
|
|
The Controlling Asthma in American Cities Project's Chicago site used a multifaceted approach to improve asthma care for inner-city children. Objectively evaluating the project's influence on a population-wide basis required novel methods. Asthma-related medication dispensing data obtained from a large pharmacy chain were used to assess the project's effect on appropriate asthma medication use. Appropriate medication use was defined in two ways and using two comparison groups. The most notable finding was a significant association between living in the intervention area and appropriate asthma care for children aged 5-9. This is consistent with the focus of the project's interventions on younger children and on promoting appropriate medication use. The results suggest a beneficial effect on quality of asthma care in the subgroup of children with asthma targeted by the project. This methodology has potential for evaluating medication use in the management of other diseases amenable to large-scale, community-wide interventions.
|
|
Session Title: Dominoes or Pick-Up Sticks? Philanthropy's Struggle to Acknowledge Complex Systems
|
|
Panel Session 878 to be held in Panzacola Section F4 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Systems in Evaluation TIG and the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Michael Patton, Union Institute, mqpatton@prodigy.net
|
| Discussant(s):
|
| Gale Berkowitz, David and Lucile Packard Foundation, gberkowitz@packard.org
|
| Abstract:
Enticed by the "what works" movement and seduced by manufacturing's process standardization, foundations and nonprofits have embraced planning and evaluation tools that assume a direct and linear relationship between their activities and their desired outcomes, just like knocking over one domino and watching the entire string tumble in sequence. As they deconstruct social change into bite-size projects, foundations have come to judge success by grantees' fidelity to process and their compliance with near-term implementation requirements. In fact, social change plays out in real life more like a game of pick-up sticks than knocking over a row of dominoes. Even more troubling is that in real life players do not even take turns. Everyone is moving sticks at the same time. In this volatile setting of complex systems, foundations must focus less on compliance-oriented variables and devote more energy to continual feedback, adaptive behaviors and real-time adjustments. In this session, foundation executives and foundation consultants will discuss reasons and remedies for the current state of practice.
|
|
Philanthropy, Accountability and Social Change
|
| John Bare, Arthur M Blank Family Foundation, jbare@ambfo.com
|
|
The accountability movement is diminishing philanthropy's appetite for investing in social change, as well as the nonprofit sector's ability to execute against a social-change agenda. The rewards promoted as part of the accountability movement favor compliance and rote behavior. An effect of the accountability movement is that organizations are substituting evidence of process standards for a display of value added to society. One reason is that the tools of the accountability movement are intended for a narrow, important function but are poorly suited to meet the needs of social-change agendas. A second reason is the trend favoring certain types of evidence, which inhibits investment in areas where these types of evidence are unlikely to surface. As a remedy, philanthropy should adopt tools robust enough to be helpful within complex systems. Pursuit of a social-change agenda requires attention to risk analysis and a highly flexible nature that rewards continual adjustments.
|
|
|
Partnerships, Complexity, and Community Change
|
| Teresa Behrens, The Foundation Review, behrenst@foundationreview.org
|
|
Understanding that the complex relationships within a community can contribute to, or inhibit, the success of community change efforts, many funders have turned to community partnerships to implement initiatives. In this presentation, Behrens reviews partnerships funded by the W. K. Kellogg Foundation over several years, discusses the role these partnerships play in the theory of change, and examines the evidence about their effectiveness. Have these partnerships really been effective? Does measuring the effectiveness of partnerships drive us further into the dark side of "evidence based practice"? Does requiring partnerships violate the mandate to "do no harm"?
|
Innovating Evaluation in Philanthropy
|
| Victor Kuo, WestEd, vkuo@wested.org
|
|
Industry norms for evaluation in philanthropy barely exist; various approaches abound. Since the late 1990s, calls for evaluation in philanthropy have ignited a frenzy of activity. Theory-driven approaches to evaluating complex social change efforts, dashboards and performance metrics, and grantee perception studies have been launched. Some foundations with strong beliefs in the promise of technology are investing in data systems to provide regular feedback over short- and long-term horizons. Some examples of how each approach has contributed to social change exist. This presentation will offer reflections on the past decade of evaluation in philanthropy from an evaluator who has served in the evaluation function of three foundations based on the West coast. The panelist will consider why innovative approaches to evaluation, including evaluation of complex systems, compete for attention. A key role of philanthropy in society is to innovate. Looking forward, evaluators can expect evaluation approaches to continue to be claimed as innovative and shared in new settings.
|
Session Title: Gaining Real World Forensic Evaluation Experience in the Classroom
|
|
Multipaper Session 879 to be held in Panzacola Section G1 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Graduate Student and New Evaluator TIG
|
| Chair(s): |
| Gregory T Capriotti, Wright State University, capriotti.2@wright.edu
|
| Discussant(s): |
| Betty Yung, Wright State University, betty.yung@wright.edu
|
| Cheryl Meyer, Wright State University, cheryl.meyer@wright.edu
|
| Abstract:
A group of graduate students from the Wright State University School of Professional Psychology collaborated on multiple pro bono program evaluations for five local human service and governmental agencies. All evaluations conducted were forensic in nature. Group consultation took place over the course of ten weeks as part of a Program Evaluation class. This collaborative effort generated new surveys, improved methodology, and recommendations that will allow these agencies to better serve their community. This training method allows local agencies to receive valuable services while contributing to the professional development of psychology students. The papers presented in this panel will describe the process of collaborative evaluation as a classroom assignment at the graduate level. They will also focus on factors inherent in forensic evaluation, including ethics, the sensitive nature of client information, and political motivation. Two projects involved juvenile courts, one involved family services, and one involved a bar association.
|
|
Drug Court Report Card: Capturing the Quantitative and Qualitative Aspects of Success in Drug Court
|
| Caprice Parkinson, Wright State University, parkinson.7@wright.edu
|
| Gregory T Capriotti, Wright State University, capriotti.2@wright.edu
|
|
In strength-based Juvenile Drug Court Programs, it is imperative to assess the participant's motivation, resilience, and level of self-esteem. In addition, the relationship between the probation officer and participant is integral to the child's growth and progress within the program. However, to determine effectiveness, it is important to be able to assess the progress of each participant through the program. In our evaluation, we attempted to create an instrument to integrate measurable, objective data with subjective judgments in these Drug Court Programs. Objective data included prior offenses, education, and family life. Subjective data allowed the probation officer an opportunity to "grade" the subjective functioning of each participant. Our goal is to establish the validity of the subjective data and to help these programs better serve the participants.
|
|
The Dayton Bar Association Judicial Candidacy Preference Poll Evaluation
|
| Jennica Karpinski, Wright State University, karpinski.2@wright.edu
|
|
The Dayton Bar Association (DBA) requested an evaluation of the preference poll the organization conducts prior to Montgomery County judicial elections. Poll results in 2008 were criticized due to perceived racial and gender bias. The method in which the DBA poll is currently conducted leaves the results vulnerable to criticism and dismissal because of ambiguous evaluative categories and scales. After researching judicial evaluation processes nationwide and standards put forth by the American Bar Association (ABA), the evaluator determined that the poll should be re-structured to be clear and to follow guidelines put forth by national organizations such as the ABA. The foremost recommendation of this research encouraged a transition from a preference poll to an evaluation poll. A modified Judicial Evaluation Poll was created for this purpose. Additional recommendations included improving statistical analysis, and providing more information about methodology to the public so poll results could be better understood.
|
|
Family Preservation: Family Stability Committee Decisions Regarding Placement for Children in Troubled Families
|
| Patrice Hairston, Wright State University, hairston.15@wright.edu
|
| Seema Jacob, Wright State University, jacob.14@wright.edu
|
|
The Family Stability Committee (FSC) is a multi-agency team whose primary goals are family preservation, the safety of children and communities, and the stabilization of families in crisis. The committee meets with families to make a recommendation either for placement into foster care or for additional services to be provided for the family of origin. The objectives of this program evaluation are to: (1) compare and contrast the percentage of placements versus the percentage of non-placements (per the FSC decision), and (2) identify common characteristics of the families staffed at FSC. A coding sheet was developed to code the information of the families staffed at FSC during 2008. Results indicated that 42.72 percent of the families faced removal of children from their homes. The common characteristics of families in danger of having their children removed included legal problems, single parenthood, and financial problems.
|
|
Increasing Referrals to a Program for Offenders Who are 10 and Under
|
| Jennifer Esterman, Wright State University, esterman.2@wright.edu
|
| Stephanie Adams, Wright State University, adams.173@wright.edu
|
|
The Montgomery County Juvenile Court in Ohio has a program for offenders ten years of age and younger. The purpose of this evaluation was to determine reasons for the low referral rate to the program by police officers. The three police districts that referred the fewest child offenders to the program were surveyed to determine why they were not making referrals. Overall, respondents indicated a lack of knowledge of the program. In addition, police officers indicated they received information about programs through coworkers and emails. Therefore, it was recommended that the program revise its materials to address myths about the program and disseminate program information via listservs and email.
|
|
After Care Program: A Needs Assessment for Juvenile Offenders Leaving Residential Treatment
|
| Joann Wright-Mawasha, Wright State University, mawasha.2@wright.edu
|
| Candace Beck, Wright State University, beck.54@wright.edu
|
|
The purpose of the project was to design an assessment tool to capture quantitative and qualitative outcome data to determine the effectiveness of a local Aftercare program for juvenile offenders who had been discharged from a residential treatment facility. Because the Aftercare program had recently changed its focus, the program evaluation team determined that a needs assessment would be an important first step to identify the types of services needed by the families and adolescents. Results indicated that both parents and adolescents would be willing to participate in an Aftercare program, and both preferred services in the areas of anger management and employment. Recommendations were provided to serve as a blueprint for the design, implementation, and successful service delivery of the Aftercare program.
|
|
Session Title: The Strategic Use of Data by Community: Experiences and Challenges in Building and Sustaining Local Evaluation Capacity in Four Cities
|
|
Panel Session 880 to be held in Panzacola Section G2 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Tom Kelly, Annie E Casey Foundation, tkelly@aecf.org
|
| Discussant(s):
|
| Peter York, TCC Group, pyork@tccgrp.com
|
| Abstract:
This panel describes the challenges and successes in building community evaluation capacity in four cities implementing broad-scale community initiatives: Atlanta, Denver, San Antonio, and Providence. Complex community change requires evaluation capacity to be strengthened among residents, organizations, and evaluators. Challenges include the history of exploitation and the lack of power and control that community residents have had in past community development evaluations. Success stories capture how local capacity is being built and how strategic and sustained use of data for accountability and learning has influenced organizational and community behaviors. Four local evaluators and researchers will also address the challenge of ensuring that evaluation capacity is valued and maintained through continued local investment after external (foundation) funding ends. Peter York, an evaluator experienced in building and evaluating capacity building in multi-site initiatives, will respond to these examples and lead a discussion of the implications for building and measuring community evaluation capacity.
|
|
The Atlanta Experience: Addressing the Negative Historical Experiences of Research and Evaluation in Community, Dr. Dana Rickman
|
| Dana Rickman, Annie E Casey Foundation, drickman@atlantacivicsite.org
|
|
Historic patterns of disadvantage are often deeply entrenched, leading to a culture of despair and hopelessness. The pressure of economic hardships erodes the ability of community residents to participate in civic activities. The Annie E. Casey Foundation's Atlanta Civic Site, however, views residents and neighborhood organizations as critical and indispensable resources for successful community change. The Civic Site used community builders to implement Living Room Chats to stimulate and guide conversation around people's own perceptions of the community, especially those related to the quality of relationships and civic activity taking place in the neighborhood. This presentation will review the Living Room Chat process and present results. The methodology provided a deeper knowledge of the quality and nature of interpersonal and inter-organizational relationships, helped to determine the community's readiness and ability to engage in neighborhood change activities, and provided insight into resident perspectives on levels of civic participation.
Dr. Dana Rickman brings a strong academic and policy-focused background to the Annie E. Casey Foundation's Atlanta Civic Site. Dr. Rickman has over 10 years of experience evaluating and researching projects related to poverty and urban development. She is responsible for evaluating programmatic efforts at the Atlanta Civic Site.
|
|
|
The Denver Experience: Building the Capacity of Residents in Research and Evaluation, Dr. Sue Tripathi
|
| Sue Tripathi, Making Connections Denver, stripathi@mcdenver.org
|
|
Making Connections Denver, an initiative of Mile High United Way, provides a unique perspective on the use of residents as community-based researchers. The initiative teaches residents to develop the relationships, skills, and leadership necessary to take action toward creating positive community change. The Community Research Team oversees research and evaluation and is committed to a resident-driven approach to community change. Residents receive training in research and evaluation, and then help to build similar capacity in other community-based organizations to use data and evaluation in their work. This presentation focuses on the challenges and successes in engaging and empowering community researchers in ways that are congruent with the guiding principles of community change. It also provides a framework on the scope, scale, and sustainability of a community change initiative in which building the capacity of residents is integral to sustaining the initiative.
Dr. Sue Tripathi has worked for the past 13 years in research and evaluation related to health, poverty, urban and rural development, education, social services, and child welfare at several non-profit, local, and national foundations and government agencies. Apart from research and evaluation, Dr. Tripathi has taught at several universities and has experience in budgetary and policy issues. She is a past National Science Foundation Fellow and a Junior Fellow of the National Geographic Society, and is responsible for evaluating the efforts of the initiative in Denver.
|
The San Antonio Experience: Building the Evaluation Capacity of Organizations, Systems, and Nonprofits, Sebastian Schreiner
|
| Sebastian Schreiner, Making Connections San Antonio, sebastian.schreiner@sanantonio.gov
|
|
An integral part of evaluation capacity building in community change initiatives lies in addressing the capacity development of individual organizational partners and of the overall collaborative systems being developed to evaluate progress and steer strategies. Appealing to organizational self-interest in demonstrating their own successes is a prerequisite for any structural changes to be implemented and sustained in the long term beyond the life of a community initiative, highlighting the need for individualized approaches to capacity building activities. This session addresses the elusive issue of developing collaborative capacity between organizations and lessons learned when convening organizations with differences in scope and the resulting power differentials. Also addressed will be the effects of varying organizational cultures on the use of data, differences in organizational structures, and organizational understanding and acceptance of community accountability and commitment to the process of joint learning.
Sebastian Schreiner coordinates the Local Learning Partnership and its efforts for the Making Connections site in San Antonio and has been working with the initiative since 2007. He has a background in community advocacy and grassroots organizing in homeless, immigrant, and low-income communities and holds a Master of Science in Social Work from the University of Louisville, KY, and a Diploma in Social Work from the Catholic University of Applied Sciences in Munich, Germany.
|
The Providence Experience: Building the Capacity of Evaluators to Work in and With Community, Tanja Kubas-Meyer
|
| Tanja Kubas-Meyer, Providence Making Connections, tkubasmeyer@cox.net
|
|
Providence's Local Learning Partnership has provided a unique "on the ground" capacity that has been directly engaged in the community change work as well as providing evaluation services. The skills that local evaluators need to help support and move community work forward include participating in and supporting the development of work that may be led by others; building the capacity of residents and program staff/partners with a wide variance in skill or interest in data or evaluation; and providing timely information to support learning and practice. As the end of the initiative approaches, evaluators need to help the community shift to more evaluative and critical analysis, a difficult transition when local context, need, and resources have not been in place to maximize learning practices along the way. Examples of learning work with program partners and residents in projects including resident database development and school-based participant family data collection will be discussed.
Tanja Kubas-Meyer, MSW, MA, has more than 25 years of experience in administration, policy, and evaluative work with non-profit human service providers and has been a consultant with the Providence Local Learning Partnership since 2006. She is a doctoral student at the Heller School for Social Policy and Management at Brandeis University.
|
Session Title: Today's Challenging Context for Supreme Audit Institutions: Case Examples From the United States, Canada, United Kingdom and Norway
|
|
Panel Session 881 to be held in Panzacola Section H1 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Presidential Strand, the Government Evaluation TIG, and the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov
|
| Discussant(s):
|
| Rakesh Mohan, Idaho Legislature, rmohan@ope.idaho.gov
|
| Abstract:
The presentations from leaders of Supreme Audit Institutions illustrate how oversight and accountability are taking on ever-increasing importance in a complex and troubled global environment. The first describes how the U.S. GAO is responding to recent legislation that has ramifications for workload, coordination, and turnaround times. The second paper describes the Canadian audit context, in which GAO's sister organization carries out government-wide audits that measure and report on program effectiveness but does not conduct evaluations. The third presentation describes the U.K. audit context; it traces methods and practices over 15 years and shows how changing contextual factors drive and limit choices. The last paper, on the Norwegian audit context, illustrates the importance and challenge of incorporating a citizen's perspective into performance audit work. The discussant, an experienced U.S. state auditor and past member of the Advisory Council on Government Auditing Standards, will draw from his experience and guide audience discussion.
|
|
Building Evaluation Capacity to Respond to Oversight Needs: The United States Government Accountability Office
|
| Nancy Kingsbury, United States Government Accountability Office, kingsburyn@gao.gov
|
|
The Obama Administration has promised a renewed commitment to transparency and oversight. These goals are particularly relevant to the Troubled Asset Relief Program (TARP) and the American Recovery and Reinvestment Act (ARRA) of 2009, which provide federal funds to stabilize and stimulate the economy. These acts also assign the United States Government Accountability Office (GAO) a challenging range of responsibilities to promote accountability and transparency, including recurring bimonthly reviews of selected states' and localities' uses of funds, and targeted studies in key areas. GAO has responded by assembling multidisciplinary teams with a wide range of skills, building appropriate in-house technical expertise through targeted new hires, and consulting outside experts. This presentation will discuss the challenges associated with creating the right mix of staff capacity, delivering fast turn-around products, and coordinating with other bodies that also have responsibilities for monitoring and overseeing the TARP and ARRA.
|
|
|
Program Evaluation in the Canadian Federal Government Viewed From the Context of the Office of the Auditor General
|
| Colin Meredith, Office of the Auditor General of Canada, colin.meredith@oag-bvg.gc.ca
|
|
The legislative mandate of the Office of the Auditor General of Canada does not empower the Office to carry out evaluations of government programs. Instead, the Office has a mandate to examine and report on whether satisfactory procedures have been established by federal departments and agencies to measure and report the effectiveness of their programs. In the past, exercise of this mandate has most often taken the form of performance audits of government programs which included examination of the measurement and reporting of program results by responsible departments. The Office has also carried out government-wide audits of the evaluation function in 1978, 1983, 1986, 1993, and this year. The paper will review and discuss the role of the legislative audit office in the context of its mandate to audit but not conduct program evaluations, and in the context of an evolving central policy on the function.
|
The Right Tools for the Job? Methods Choice and Context in the Performance Audit of the United Kingdom National Audit Office
|
| Jeremy Lonsdale, National Audit Office United Kingdom, jeremy.lonsdale@nao.gsi.gov.uk
|
|
This paper examines the selection of methods used in performance audits at the UK National Audit Office, which produces over 60 reports a year. It traces the development of research methods used over the last 15 years and examines how contextual factors, in particular, the accountability focus of the work, its use in formal scrutiny processes, the changing nature of government bodies, and changing staff mix, have shaped these choices. The paper also considers how contextual factors determine what kinds of evidence are deemed to be of particular merit by audit audiences (for example, independently generated quantitative data) and what methods and the evidence they yield are considered inappropriate or less valued. The paper will provide a detailed review of reports and interviews with practitioners about what has guided their choices. The paper will show how contextual factors - in particular, setting - both drive and limit the selection of methods.
|
Improving Public Services Through a Citizen's Perspective: A Contextual Factor in Norway's Supreme Audit Institution
|
| Kristin Amundsen, Office of the Auditor General of Norway, kristin.amundsen@riksrevisjonen.no
|
|
In the last decade there has been increasing recognition that a citizen's perspective is an important contextual factor for Supreme Audit Institutions. This presentation provides examples from performance audits in Norway. The paper explores why this perspective has become important in performance audit practices. It will show that taking a citizen's perspective into account is useful when analyzing the efficiency and effectiveness of public services, and that doing so demands new methods for practice. It will address several issues related to citizen views and their importance in the development of performance audit work: (a) the kinds of methods that can be applied in measuring performance, (b) the relationship of the citizen's perspective to the effectiveness of public services, and (c) the manner in which a citizen's perspective contributes to better reporting to the Parliament and improving public services. Finally, drawing on examples from practice, the paper identifies some challenges in implementing a citizen's perspective.
|
Session Title: New Applications, Large Challenges, and Strategic Approaches in Managing Data
|
|
Multipaper Session 882 to be held in Panzacola Section H2 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Lihshing Wang, University of Cincinnati, leigh.wang@uc.edu
|
|
Incorporating Multilevel Techniques Into Quality Control Charts
|
| Presenter(s):
|
| Christopher McKinney, University of Northern Colorado, christopher.mckinney@unco.edu
|
| Pablo Olmos, Mental Health Center of Denver, antonio.olmos@mhcd.org
|
| Linda Laganga, Mental Health Center of Denver, linda.laganga@mhcd.org
|
| Kathryn DeRoche, Mental Health Center of Denver, kathryn.deroche@mhcd.org
|
| Abstract:
The use of statistical quality control charts has become more popular over the past two decades within behavioral and healthcare service systems. Though common statistical process control charts can improve process quality and reduce variability, they rely on statistical methods that assume each individual measured is the same. In practice this assumption is erroneous: means and rates of change vary across individuals. Multilevel techniques provide means and rates of change conditional on specified environmental factors, while partitioning the variability of individuals, and even of higher-level groups, from measurement and random error. The current presentation will discuss the use of statistical quality control charts in the continued evaluation of behavioral and healthcare services, and demonstrate that multilevel statistical techniques can improve the function of quality control charts in these settings.
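A minimal sketch of the general idea, not the presenters' implementation: fit a multilevel (mixed-effects) model so that each individual's expected trajectory serves as the chart's centre line, then derive control limits from the residual variance. The column names (client_id, week, score) and the +/- 3 SD limits are illustrative assumptions.

import numpy as np
import statsmodels.formula.api as smf

def multilevel_control_chart(df):
    # Random intercept and slope per client: each person gets a conditional
    # mean trajectory rather than one grand mean applied to everyone.
    model = smf.mixedlm("score ~ week", df, groups=df["client_id"], re_formula="~week")
    fit = model.fit()
    out = df.copy()
    out["expected"] = fit.fittedvalues            # client-specific expected scores
    resid_sd = np.sqrt(fit.scale)                 # within-client (residual) SD
    out["ucl"] = out["expected"] + 3 * resid_sd   # upper control limit
    out["lcl"] = out["expected"] - 3 * resid_sd   # lower control limit
    out["out_of_control"] = (out["score"] > out["ucl"]) | (out["score"] < out["lcl"])
    return out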
|
|
Data Envelopment Analysis: An Evaluator's Tool for Identifying Best Practice Among Organizations
|
| Presenter(s):
|
| John Hansen, Indiana University, joahanse@indiana.edu
|
| Abstract:
Comparing ranks of organizations such as schools or businesses is a common approach to evaluating relative group performance. To level the playing field across organizations, rankings may be based on a production function which relates the organization's inputs to its outputs - for example, ranking schools on test scores while accounting for poverty status. These rankings give a descriptive picture of relative performance, but they offer little in the way of prescriptive comparisons for best practice. Rankings identify top performers on defined criteria, but often there is substantial variability across organizations' inputs that restricts the utility of attempting to emulate the top performer. This paper demonstrates how the technique of Data Envelopment Analysis identifies best practice peers along a continuum of rankings. This technique provides a prescriptive framework by identifying top performers with variable input levels for modeling best practice. Organization-level diagnostics for targeting best practices will be presented and interpreted.
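A minimal sketch, not drawn from the paper itself, of the standard input-oriented CCR Data Envelopment Analysis model solved as a linear program: for each unit it returns an efficiency score and the set of "best practice" peers (units with positive weights) against which that unit is benchmarked.

import numpy as np
from scipy.optimize import linprog

def dea_ccr(inputs, outputs):
    """inputs: array (n_units, n_inputs); outputs: array (n_units, n_outputs)."""
    n, m = inputs.shape
    s = outputs.shape[1]
    scores, peers = [], []
    for k in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Input constraints: sum_j lambda_j * x_ij <= theta * x_ik (each input i)
        a_in = np.hstack([-inputs[k].reshape(-1, 1), inputs.T])
        # Output constraints: sum_j lambda_j * y_rj >= y_rk (each output r)
        a_out = np.hstack([np.zeros((s, 1)), -outputs.T])
        res = linprog(c, A_ub=np.vstack([a_in, a_out]),
                      b_ub=np.r_[np.zeros(m), -outputs[k]], method="highs")
        scores.append(res.x[0])                          # efficiency score for unit k
        peers.append(np.flatnonzero(res.x[1:] > 1e-6))   # best-practice peer units
    return np.array(scores), peers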
|
|
Complex Database Design for Large-Scale Multi-Level Multi-Year and Multi-Cohort Evaluation in the e-Age
|
| Presenter(s):
|
| Lihshing Wang, University of Cincinnati, leigh.wang@uc.edu
|
| Abstract:
Evaluation research that involves large-scale, multi-level, multi-year, and multi-cohort data presents special challenges to researchers. Most quantitative programs and publications focus on the research design, data collection, and data analysis phases, but largely leave the database design phase out of the research cycle. This study examines the database design issues encountered in a recent state-wide endeavor to explore the causal relationships among three clusters of variables: one exogenous cluster (teacher education), one direct endogenous cluster (teacher quality), and one indirect endogenous cluster (student learning). The two endogenous clusters were repeated over seven years and collected from six cohorts. The following topics are addressed: (a) alignment of the conceptual framework, the operational model, and the database design; (b) match-linking of multiple relational databases and specification of unique IDs; and (c) security, confidentiality, and collaboration on a shared Internet platform. Implications for conducting complex evaluation research across multiple sites in the e-age are discussed.
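As an illustration of the match-linking idea only (the table and column names below are hypothetical, not the study's actual schema), each cluster of variables can live in its own relational table and be joined across years and cohorts through stable, anonymized unique IDs:

import pandas as pd

teacher_ed = pd.DataFrame({       # exogenous cluster: teacher education
    "teacher_id": ["T01", "T02"],
    "program": ["Univ A", "Univ B"]})
teacher_quality = pd.DataFrame({  # endogenous cluster, repeated by year
    "teacher_id": ["T01", "T01", "T02"],
    "year": [2007, 2008, 2007],
    "quality_score": [3.2, 3.5, 2.9]})
student_learning = pd.DataFrame({ # endogenous cluster, by cohort and year
    "student_id": ["S001", "S002", "S003"],
    "teacher_id": ["T01", "T01", "T02"],
    "year": [2007, 2008, 2007],
    "cohort": [1, 2, 1],
    "gain_score": [12.0, 8.5, 10.1]})

# Link student outcomes to teacher quality (by teacher and year), then to
# teacher education (by teacher); the anonymized IDs are the only join keys.
linked = (student_learning
          .merge(teacher_quality, on=["teacher_id", "year"], how="left")
          .merge(teacher_ed, on="teacher_id", how="left"))
print(linked)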
|
|
Session Title: Maintaining Evaluation's Integrity in Trying Times: Three Strategies
|
|
Multipaper Session 883 to be held in Panzacola Section H3 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Evaluation Use TIG, the Pre-K - 12 Educational Evaluation TIG, and the Research on Evaluation TIG
|
| Chair(s): |
| Susan Tucker, Evaluation and Development Associates LLC, sutucker1@mac.com
|
| Discussant(s): |
| Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu
|
|
Maintaining Integrity: Evaluation as a Tool for Assisting Programs Undergoing Major and Unexpected Budget Reductions
|
| Presenter(s):
|
| Gary Walby, Ounce of Prevention Fund of Florida, gwalby@ounce.org
|
| Emilio Vento, Health Connect in the Early Years, evento@hscmd.org
|
| Abstract:
This paper presents the evaluation of Health Connect in the Early Years, a maternal and child health home visiting program in Miami-Dade County, Florida, and the program's adaptation to a mandatory downsizing caused by a major drop in revenue. Project management, staff, evaluators, and the funding body worked together to help the program streamline and manage the direct and devastating effect of a 50% reduction in program funding resulting from the global economic downturn, while maintaining program and evaluation integrity. Focus groups, document analysis, ongoing engagement with management and staff, and analysis of data captured pre- and post-reduction were used to help the program make decisions on program implementation as well as provide information on program impact on participants. This presentation tells the story of evaluation and program responses and provides lessons learned for evaluators in similar circumstances.
|
|
Using the Bloom Adjustment to Distinguish Intention-to-Treat Estimates and Impact-on-the-Treated Estimates: The Striving Readers Evaluation
|
| Presenter(s):
|
| Matthew Carr, Westat, matthewcarr@westat.com
|
| Jennifer Hamilton, Westat, jenniferhamilton@westat.com
|
| Allison Meisch, Westat, allisonmeisch@westat.com
|
| Abstract:
In a randomized controlled trial evaluation, researchers are occasionally restricted to performing Intention-to-Treat studies. Difficulties in understanding impacts of the treatment arise when participants assigned to the treatment group do not actually receive the treatment. Removing 'no-shows' from the sample can bias the composition of the treatment group because of the potential self-selection of these participants. As a result, researchers typically include these participants, trading potential underestimation of treatment effects for maintenance of the randomized evaluation design. However, policymakers are typically more interested in Impact-on-the-Treated estimates, which more accurately reflect the effects of the program. In this paper we examine the potential for the Bloom adjustment to provide researchers with information about both estimates, thereby mitigating the need to make the traditional trade-off between design consistency and research focus. Data from the Striving Readers evaluation, a program designed to improve middle school students' literacy skills, are used as an example.
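A minimal sketch of the Bloom (1984) no-show adjustment the abstract refers to: under the assumption that participants assigned to treatment who never receive it experience no effect, the impact-on-the-treated estimate is the intention-to-treat estimate divided by the treatment group's take-up rate. The numbers in the example are illustrative, not Striving Readers results.

def bloom_adjustment(itt_estimate, takeup_rate):
    """Convert an intention-to-treat estimate to an impact-on-the-treated estimate."""
    if not 0 < takeup_rate <= 1:
        raise ValueError("take-up rate must be in (0, 1]")
    return itt_estimate / takeup_rate

# Example: an ITT effect of 0.15 grade levels with 60% take-up implies an
# effect of 0.25 grade levels for students who actually received services.
print(bloom_adjustment(0.15, 0.60))  # 0.25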
|
|
Troubled Asset or Valued Resource? A Study of Recommendations From 53 Evaluation Reports
|
| Presenter(s):
|
| Kari Nelsestuen, Northwest Regional Educational Laboratory, nelsestk@nwrel.org
|
| Elizabeth Autio, Northwest Regional Educational Laboratory, autioe@nwrel.org
|
| Ann Davis, Northwest Regional Educational Laboratory, davisa@nwrel.org
|
| Angela Roccograndi, Northwest Regional Educational Laboratory, roccogra@nwrel.org
|
| Caitlin Scott, Northwest Regional Educational Laboratory, scottc@nwrel.org
|
| Abstract:
In evaluation circles, ongoing debate surrounds decisions about whether to include recommendations in evaluation reports and, if so, what information to include. In this study, we examine recommendations from 53 state evaluation reports of the same federal program, Reading First. The presence of recommendations varied: 62 percent of reports included recommendations and 38 percent did not. When recommendations were present, we analyzed each recommendation on six characteristics, including whether it addressed a general problem with or without a course of action, linked to research, and offered one strategy or a set of strategies to solve the problem.
For reports with recommendations, we surveyed project directors about their relevance and usefulness. For reports without recommendations, we surveyed project directors about their preference for receiving recommendations. Our findings contribute to the ongoing dialogue among evaluators about the role and characteristics of recommendations.
|
|
Session Title: Large Scale Multi-Level Government Program Evaluation Strategies and Partnerships
|
|
Multipaper Session 885 to be held in Sebastian Section I1 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Government Evaluation TIG
|
| Chair(s): |
| Sharon Stout, United States Department of Education, sharon.stout@ed.gov
|
|
Challenges and Strategies in Evaluating Large Center Grants
|
| Presenter(s):
|
| Judith Inazu, University of Hawaii at Manoa, inazu@hawaii.edu
|
| Abstract:
Federal funding agencies are increasingly requiring external evaluations of large research and educational center grants. Many of these federally-funded centers encompass multiple institutions, require synergistic integration of research and education, and mandate outreach initiatives with an emphasis on increasing diversity. Often, staff in the funding directorates can provide little guidance regarding the evaluation since they themselves have had little training in evaluation. This paper discusses the challenges to evaluators posed by these large center grants and the strategies that have been used to address them. The challenges include working with diverse populations at multiple institutions; measuring macro-level concepts such as collaboration, sustainability, and system changes; assessing the center's multiple missions; researchers' lack of familiarity with evaluation; and metrics for diversity. Strategies adopted to meet these challenges include focusing on institutional case histories, collaboration maps, mass internet surveys, interviews with institutional leaders, and mining institutional databases.
|
|
Evaluation in Multi-Level Governance Settings
|
| Presenter(s):
|
| Thomas Widmer, University of Zurich, thow@ipz.uzh.ch
|
| Abstract:
This paper discusses the issues of evaluating in settings where many levels of government are involved and where the intervention mode is shaped more by negotiation than by hierarchy. The paper first presents recent developments in public policy that are responsible for the trend towards multi-level governance. To better understand these kinds of settings, a set of typical characteristics is elaborated in the paper. Topics like the multiplicity and volatility of goals, inter-level transparency, and trust are at the centre of the discussion. Based on experiences from evaluations in various fields such as public health, environmental education, and sustainable development, the appropriateness of evaluation approaches, conceptions, methods, and instruments in such settings is discussed. Special emphasis is placed on ethical considerations involved in evaluating multi-level governance. The paper closes with some recommendations on how to improve the quality (in a broad sense) of evaluation in multi-level governance settings.
|
|
Building Relationships: Partnerships between State and Local Governments and State Universities in the Time of Evaluation
|
| Presenter(s):
|
| Virginia Dick, University of Georgia, vdick@cviog.uga.edu
|
| Melinda Moore, University of Georgia, moore@cviog.uga.edu
|
| Abstract:
Increasingly, government agencies at all levels are facing requirements for extensive evaluations of programs and services. These requirements are coming from all funding sources - government and foundation. Often, the agency lacks the resources to adequately address all of the evaluation requirements without external support. In addition, the requirements often dictate the use of an external evaluator. This is where building relationships between state and local governments and state colleges and universities can meet important needs for both groups. This presentation will focus on how faculty and institutions can work with local and state governments to provide the expertise to support evaluation efforts for funded programs, services, and collaborations. Examples from real-world programs and projects will be used to explore the various issues, challenges, and strengths related to building these relationships.
|
|
Starting Over in the Middle: Program Evaluation in an Era of Accountability
|
| Presenter(s):
|
| Maliika Chambers, California State University East Bay, maliika.chambers@csueastbay.edu
|
| Abstract:
Federal partnership grants present a unique challenge for evaluators in that the multiple accountability relationships can significantly impact the purpose and quality of the program evaluation. Recent literature in the field of evaluation examines how the pressures of accountability can shape performance measurement into a tool for monitoring, rather than serving a goal of program improvement, and highlights key points of analysis in these settings.
In this article, challenges of program evaluation under pressure are described in conjunction with the methods used to document evidence of program impact. The author illustrates how striking a balance between the roles of researcher and evaluator, asking the right questions, and sharing the load to make project evaluation and accountability everyone's business were critical turning points in the overall success and effectiveness of the project evaluation. Evaluation models from the literature are presented, and suggestions for further research are offered.
|
|
Session Title: Using Systems Tools to Understand Multi-Site Program Evaluation
|
|
Skill-Building Workshop 887 to be held in Sebastian Section I3 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Presenter(s):
|
| Molly Engle, Oregon State University, molly.engle@oregonstate.edu
|
| Andrea M Hegedus, Northrop Grumman Corporation, ahegedus@cdc.gov
|
| Abstract:
Coordinating evaluation efforts of large multi-site programs requires specific skills from evaluators. Connecting multi-site evaluations with overall program objectives can be accomplished with quick diagramming tools that show function, feedback loops, force fields, and leverage points for priority decisions. Targeting evaluators who are responsible for evaluating large multi-site programs, or evaluators working within a specific program of a larger multi-site program, participants will, individually or in small groups, draw a program system and consider its value to program goals and objectives. Drawings will be discussed, the method assessed, and insights summarized. The workshop will also address "What did you learn and how do you intend to use this skill?" along with "What was the value of this experience to you?" This skill-building workshop integrates the sciences of intentional learning, behavioral change, systems thinking and practice, and assessment as functional systems of evaluation and accountability.
|
|
Session Title: Examining Evaluation Approaches in Practice
|
|
Multipaper Session 888 to be held in Sebastian Section I4 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Research on Evaluation TIG
|
| Chair(s): |
| Eric Barela,
Partners in School Innovation, ebarela@partnersinschools.org
|
|
Are There Important Differences Between Curriculum Evaluation and Program Evaluation?
|
| Presenter(s):
|
| Karen Zannini Bull, Syracuse University, klzannin@syr.edu
|
| Abstract:
It has been claimed that program evaluation evolved out of curriculum evaluation during the 1970s (Pinar et al., 1995). If so, we might wonder, three decades later, whether there are currently important differences between curriculum and program evaluation that could be exploited to improve evaluation practice in each area. This paper examines that question through a comparative analysis. We will employ Smith's analytical framework for characterizing program and curriculum evaluation (Smith, 1999).
|
|
Do RCT-Mixed Method Designs Offer An Improved "Gold Standard" for Determining "What Works?" in Educational Programming
|
| Presenter(s):
|
| John Hitchcock, Ohio University, hitchcoc@ohio.edu
|
| Burke Johnson, University of South Alabama, bjohnson@usouthal.edu
|
| Abstract:
Randomized controlled trials (RCTs) are currently advocated by federal agencies and prominent methodologists as the "gold standard" or best way to answer the question of "What Works?" In this methodological article, we attempt to broaden the meaning of the term "What Works?" to include evidence of explanatory causation, program process, program exportability, and information required when programs might need intelligent tailoring and adaptation to local contexts. We present an argument for an improved "gold standard" based on the language and logic of mixed methods research. Based on a cross-disciplinary literature review, we document how qualitative data can improve and have improved traditional quantitative/RCT approaches to documenting "What Works?" Our intention is to provide an overview of how qualitative work can supplement some cutting edge concerns in RCT implementation, analysis and interpretation, as well as to increase discussion about optimal ways to evaluate educational programs, not to present the final answer.
|
|
Inquiry Into Context - Lessons for Evaluation Theory and Practice From Applying the Principles of Evaluability Assessment
|
| Presenter(s):
|
| Kate McKegg, The Knowledge Institute Ltd, kate.mckegg@xtra.co.nz
|
| Meenakshi Sankar, Martin Jenkins, meenakshi@mja.co.nz
|
| Abstract:
There is reasonable agreement within the evaluation profession that there is no 'best' evaluation plan or design. The criteria that have emerged to judge the quality or value of an evaluation include utility, feasibility, propriety, accuracy, credibility and relevance. Judgments using these criteria are dependent on the situation; they are context bound. Similarly, programs and policies are implemented and shaped by the context in which they operate. Thus, for evaluators, being able to understand context is critical if evaluations are to be relevant, credible and useful. In this paper, the authors will discuss their experiences in applying many of the principles of evaluability assessment to undertake structured inquiry into context and its potential impact on the types of evaluation questions that can be addressed, the types of methods and approaches that can feasibly be used, and the types of use that can most likely be planned for. Using case study examples, the authors will discuss the strengths and limitations of evaluability assessment as a form of structured inquiry into the influence of context.
|
|
Taking Stock: Reflections on the Centers for Disease Control and Prevention's (CDC's) Framework for Program Evaluation at Ten Years
|
| Presenter(s):
|
| Michele Mercier, Centers for Disease Control and Prevention, zaf5@cdc.gov
|
| Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
|
| Abstract:
2009 marks the 10th anniversary of the publication of the Centers for Disease Control & Prevention's (CDC) Framework for Program Evaluation in Public Health. The framework was developed to incorporate, integrate, and make accessible to public health practitioners useful concepts and evaluation procedures from a range of evaluation approaches. While the framework has been widely adopted for evaluating federally funded programs throughout the United States, the evaluation contexts in which the framework is being utilized have not been systematically identified or characterized. Using data derived from peer-reviewed journal publications from 1999-2009, we examine the framework's impact, influence and reach in public health and beyond to evaluation more generally, the social sciences, and education.
|
| | | |
|
Session Title: Evaluation Use in International Evaluation: Working Effectively With Stakeholders
|
|
Multipaper Session 889 to be held in Sebastian Section L1 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Paula Bilinsky,
Independent Consultant, pbilinsky@hotmail.com
|
| Discussant(s): |
| Paula Bilinsky,
Independent Consultant, pbilinsky@hotmail.com
|
|
Three Big Players Try to Play Together: Context Lessons From a Needs Assessment in Four African Countries
|
| Presenter(s):
|
| Mary Crave, University of Wisconsin-Extension, crave@conted.uwex.edu
|
| Abstract:
All eyes were on an assessment to explore linking small farmers with school feeding programs in four African countries. Funded by the Bill and Melinda Gates Foundation, managed by the US Department of Agriculture, and with assistance from the World Food Program, this collaboration was a first. Many aid organizations were interested in the outcomes. The assessment sites of Mali, Ghana, Kenya and Uganda each had distinctive political cultures and histories. While there were common goals for the assessment, each organization also had some unique goals as well as a strong organizational culture that influenced how the assessment was organized, led and carried out. This session will review the distinct agendas of each of the collaborators, how the assessment was carried out and lessons learned on how the project was influenced by the context of the site and collaborating organizations.
|
|
Fulfilling the Promise of Education for All (EFA) for Developing Countries: Building Decision-makers' Awareness and Buy-in
|
| Presenter(s):
|
| Edward Kissam, JBS International Inc, ekissam@jbsinternational.com
|
| Thomaz Alvares de Azevedo, JBS International Inc, talvares@jbsinternational.com
|
| Jo Ann Intili, JBS International Inc, jintili@jbsinternational.com
|
| Abstract:
Decision-makers' buy-in to monitoring and evaluation of education systems in developing countries is crucial. It affects willingness to secure sound data, analyze it meaningfully, and use the results to guide planning. However, adoption and reporting of standard indicators of education system functioning (e.g. GER, GPI, school survival) are often seen more as a ceremonial exercise required by funders than as a tool for improvement. We explore the conditions necessary to make the EFA indicators practically useful, with particular attention to the emerging research regarding the need for reliable and relevant assessment of student learning quality to supplement the operational indicators now used. We then examine strategies for building system-wide buy-in, including: analyzing and sharing information on performance at the sub-national and local level, collaborative inquiry regarding factors contributing to observed patterns, and using analyses to formulate, pilot, and refine strategies to address the most pressing problems.
|
| |
|
Session Title: Evaluating Scale Up in International Health Programs
|
|
Multipaper Session 890 to be held in Sebastian Section L2 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Tessie Catsambas,
EnCompass LLC, tcatsambas@encompassworld.com
|
| Discussant(s): |
| Tessie Catsambas,
EnCompass LLC, tcatsambas@encompassworld.com
|
|
Tracking Contextual Factors in Large-scale, 'real-life' Evaluations of Child Survival Programs in Africa
|
| Presenter(s):
|
| Elizabeth Hazel, Johns Hopkins University, ehazel@jhsph.edu
|
| Jennifer Callaghan, Johns Hopkins University, jcallagh@jhsph.edu
|
| Kate Gilroy, Johns Hopkins University, kgilroy@jhsph.edu
|
| Abstract:
Large-scale evaluations of child survival programs in low- and middle-income countries occur in complex, dynamic environments. Tracking contextual factors is integral to the overall evaluation process; in combination with program implementation documentation, it provides valuable insight for interpreting evaluation results and is vital for internal and external validity. Evaluators need information on contextual factors such as demographic patterns, socio-economic factors and the presence of other child survival initiatives to design a sound evaluation and to identify potential confounders and effect modifiers for analysis and interpretation. We describe methodological guidelines for documenting contextual factors, with an emphasis on child health program mapping exercises to track efforts external to the evaluated program. The value and challenges of documenting contextual factors prospectively are demonstrated through case studies from Ghana and Malawi.
|
|
Understanding and Evaluating Scale-Up: Research and Programmatic Challenges
|
| Presenter(s):
|
| Rebecka Lundgren, Georgetown University, lungrer@georgetown.edu
|
| Susan Igras, Georgetown University, smi6@georgetown.edu
|
| Ruth Simmons, University of Michigan, rsimmons@umich.edu
|
| Abstract:
Many international health programs aim to achieve national level scale up, yet serious gaps exist in understanding the processes by which innovations are scaled up and sustained, and hence, in evaluating programs with scale up goals. A five-year prospective study of the process and outcomes of scaling up a family planning innovation, the Standard Days Method, is underway in Mali, India, Madagascar, Guatemala and Rwanda, drawing upon ExpandNet/WHO's scale up model, which guides scaling up processes and grounds development of a feasible yet rigorous methodology to evaluate results of scaling up programs. First year research findings suggest that periodic systems assessments help maintain accountability and build systems evaluation skills of stakeholders. Analysis of different program elements of scale up led to clearer definition of outcome indicators as well as values such as informed choice that should not be lost as scale up progresses. Program and research lessons learned will be discussed as well as methodological issues.
|
|
Multi-Country Evaluations of Health Care Collaboratives: The Challenges and Opportunities
|
| Presenter(s):
|
| Mary Gutmann, EnCompass LLC, mgutmann@encompassworld.com
|
| Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
|
| Abstract:
As part of a multi-country evaluation of health care improvement collaboratives in low- to middle-income countries, EnCompass used a developmental approach to evaluating 35 collaboratives in 14 countries across three continents. Each collaborative focused on one or more health care topics, including maternal and newborn health, pediatric hospital improvement, HIV/AIDS, tuberculosis, malaria, and family planning. The contextual diversity of the different countries and health care systems posed a major challenge to the design of an evaluation framework to document best practices and evaluate the results of health care collaboratives in developing countries. The paper describes a participatory, developmental evaluation approach to using context to inform the crafting of evaluation questions, testing hypotheses and identifying new lines of inquiry and exploration. Incorporating context allowed the evaluation to capture system dynamics, interdependencies, and emergent interconnections that increased the validity and generalizability of findings across different models of health care collaboratives.
|
| | |
|
Session Title: How Nonprofit Organizations Can Build and Sustain Capacity for Evaluation
|
|
Multipaper Session 891 to be held in Sebastian Section L3 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
and the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Stanley Capela,
HeartShare Human Services, stan.capela@heartshare.org
|
| Discussant(s): |
| Stanley Capela,
HeartShare Human Services, stan.capela@heartshare.org
|
|
Evaluation Capacity Building With Community-Based Organizations: Results of a Yearlong Planning Process and Curriculum
|
| Presenter(s):
|
| Srividhya Shanker, University of Minnesota, shan0133@umn.edu
|
| Cindy Reich, University of Minnesota, reichcin@aol.com
|
| Laura Pejsa, University of Minnesota, pejs0001@umn.edu
|
| Jean A King, University of Minnesota, kingx004@umn.edu
|
| Abstract:
As funders continue presenting grantees with reporting requirements of increasing scope and complexity, it is essential that nonprofit agencies have appropriate expertise and capacity to build meaningful evaluation into their work. Many community-based organizations, however, lack such capacity. In 2006, a Twin Cities coalition of 25 nonprofits, primarily community centers and others sharing settlement house values, first began discussing their capacity with respect to evaluation. One resulting goal was for evaluation to be embedded into each organization, for participating agencies to engage in intentional work to create and sustain overall organizational infrastructure and processes that would make quality evaluation and its use routine. In this paper, facilitators of the yearlong process that grew out of this goal share the results of two years spent working closely with participating organizations to clarify their understanding of Evaluation Capacity Building (ECB), help them assess their current evaluation capacity, and assist them in writing their ECB plans to implement the following year.
|
|
Evaluating an Existing Community-level Initiative: Lessons From the YMCA of the United States of America
|
| Presenter(s):
|
| Andrea M Lee, YMCA of the USA, andrea.lee@ymca.net
|
| Abstract:
Community-level initiatives designed to promote healthy lifestyles are growing in response to the prevalence of obesity in the United States. The YMCA of the USA launched its Healthier Communities Initiatives in 2004, convening local leaders from diverse sectors to change policies, environments, and systems within communities to increase physical activity and promote healthier eating. By the end of 2009, these Initiatives will encompass 132 communities from 40 states. The Initiatives are designed to provide community teams with tools, such as Y-USA's Community Healthy Living Index, to create broad and sustainable change.
Our work evaluating these Initiatives provides several lessons for other evaluators tackling initiatives with community components. These lessons include incorporating existing data into a comprehensive evaluation plan, integrating limited community-level data, and identifying the essential components of successful community collaborations. By sharing our process for establishing our evaluation plan, we hope to increase the base of community-level evaluation knowledge.
|
|
Using Evaluation for Organizational Learning in an Evolving National Non-Profit Context: The Case of City Year
|
| Presenter(s):
|
| Gretchen Biesecker, City Year Inc, gbiesecker@cityyear.org
|
| Tavia Lewis, City Year Inc, tlewis@cityyear.org
|
| Dannalea D'Amante, City Year Inc, ddamante@cityyear.org
|
| Ashley Kurth, City Year Inc, akurth@cityyear.org
|
| Abstract:
City Year is a national service organization founded in 1988 that unites young people for a year of full-time service in urban communities. Each year, more than 1,500 17-24 year olds from diverse backgrounds work with underserved children as tutors and mentors in 18 cities in the U.S. In 2008, City Year established a more standardized model of school service (Whole School Whole Child) to address the academic, social, and emotional needs of children in their school environment, with an urgency framed by the high school drop-out crisis. This sharp focus on the context of K-12 education, and the era of outcomes-based accountability, has changed the context for evaluation activities and use for organizational learning across City Year. In this paper, we will share systems and example tools led by an internal evaluation department that allow us to deploy data collection and feedback in a large, national, education non-profit context.
|
|
Context and the Serenity Prayer: The Evaluator's Role in Evaluation Capacity Building
|
| Presenter(s):
|
| Salvatore Alaimo, Grand Valley State University, salaimo@comcast.net
|
| Abstract:
Evaluation capacity building (ECB) continues to gain prominence in the evaluation profession. This study examines the role of the evaluator as a key stakeholder in ECB. Twenty one-on-one interviews were conducted with evaluators of nonprofit human service programs coming from various backgrounds. Executive directors, board chairs, program staff and funders of nonprofit human service organizations were also interviewed for comparative analysis. Preliminary results provide some insight into how relationships and context impact the ECB process within the nonprofit, human services environment. They indicate that effective evaluation capacity building requires more than just funds, personnel and expertise. Some other important factors include leadership; value orientations; congruence among stakeholders; resource dependency; quality signaling; stakeholder involvement and understanding of roles; organizational culture; organizational learning; personal preferences; and utilization of available evaluation tools. This study suggests that evaluators should be cognizant of a variety of contextual implications to successfully engage in ECB.
|
| | | |
|
Session Title: Evaluation Policy and Evaluation Practice: Where To Next?
|
|
Panel Session 892 to be held in Sebastian Section L4 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Theories of Evaluation TIG
|
| Chair(s): |
| William Trochim, Cornell University, wmt1@cornell.edu
|
| Abstract:
An evaluation policy is any rule or principle that a group or organization uses to guide its decisions and actions when doing evaluation. Every group and organization that engages in evaluation - including government agencies, private businesses, and nonprofit organizations - has evaluation policies. Sometimes these are formal, explicit and written; at other times they are implicit and ad hoc principles or norms that have simply evolved over time. Evaluation policies profoundly affect the day-to-day work of all evaluators. Many recent and current controversies or conflicts in the field of evaluation can be viewed, at least in part, as struggles around evaluation policy. Because evaluation policies typically apply across multiple evaluations, influencing policies directly may have systemic and far-reaching effects for practice. This panel discusses current thinking on the topic of evaluation policy, especially how it is informed by and affects evaluation practice, and suggests directions for future work in this area.
|
|
Introduction to Evaluation Policy
|
| Melvin Mark, Pennsylvania State University, m5m@psu.edu
|
| Leslie J Cooksy, University of Delaware, ljcooksy@udel.edu
|
| William Trochim, Cornell University, wmt1@cornell.edu
|
|
This introductory presentation will address the following questions in general terms and provide a foundation for the panel:
-What is evaluation policy? What questions or issues should a comprehensive organizational evaluation policy address?
-How does evaluation policy influence evaluation practice?
-When does systematic evaluation get deployed? What programs, policies, or practices are chosen as the subject of evaluation, when, and why?
-What policies should guide the identification and selection of evaluators? What credentials should evaluators have? What kind of relationship should evaluators have to the program or entity being evaluated?
-What policies should guide the timing, planning, budgeting and funding, contracting, implementation, methods and approaches, reporting, use and dissemination of evaluations?
-What policies should guide how evaluation participants and respondents are engaged and protected?
-How can existing (e.g., the Guiding Principles for Evaluators) or prospective professional standards inform evaluation policy?
|
|
|
Evaluation Policy and Evaluation Practice: Taxonomy and Methodology
|
| William Trochim, Cornell University, wmt1@cornell.edu
|
|
This presentation significantly extends Trochim's 2008 Presidential Address and begins by describing an evaluation policy as "any rule or principle that a group or organization uses to guide its decisions and actions when doing evaluation." Examples of evaluation policies are provided to illustrate the form they might take. The paper offers a tentative taxonomy of evaluation policies, dividing them into eight broad topical areas: goals, participation, capacity building, management, roles, process and methods, use and meta-evaluation. The idea of evaluation policy methodology is introduced and a general method based on the notion of the policy wheel is described. Key principles that guide evaluation policy formulation are discussed, including: specificity; inheritance; encapsulation; exhaustiveness; continuity; delegation and accountability. The methodology is illustrated in the context of U.S. federal evaluation. Critical issues and challenges for the field of evaluation policy, and the implications for policy development methodology and for future research are considered.
| |
|
Evaluation Policy and Evaluation Practice: Where Do We Go From Here?
|
| Leslie J Cooksy, University of Delaware, ljcooksy@udel.edu
|
| Melvin Mark, Pennsylvania State University, m5m@psu.edu
|
| William Trochim, Cornell University, wmt1@cornell.edu
|
|
This presentation synthesizes key themes and issues regarding evaluation policy, including those raised by conference presentations of 2008 on this theme and by the other presentations in this panel. The paper describes how key issues regarding evaluation policy encompass different settings (e.g., domestic and international; academic and public sector) and foci (e.g., the theory of evaluation policy, overarching national policies, and policies specific to the organizational location of the evaluation function). Common and complementary themes are identified. The discussion includes suggestions for future directions in evaluation policy including: the need for an empirically-derived taxonomy of evaluation policy categories and how such might be created; the development of evaluation policy audits (including checklists and measures that might be used in accomplishing them); the need for evaluation policy development processes and methods (including further development of methods outlined in Trochim's 2008 Presidential Address); the need for evaluation policy archives and how they might be structured; and the need for organizational structures that support evaluation policy.
| |
|
Session Title: Evaluation of Family Health Program in Thailand: Multi-site and Multiple Evaluations
|
|
Multipaper Session 893 to be held in Suwannee 11 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Health Evaluation TIG
|
| Abstract:
The Family Health Program in Thailand was managed by the Family Network Foundation, which received funding from the Thai Health Promotion Foundation. The objectives of this research were (1) to evaluate the success, value, efficiency, effectiveness and sustainability of the Family Health Program, (2) to evaluate its good governance, and (3) to provide information and recommendations to the Thai Health Promotion Foundation. The sample comprised 3,040 stakeholders and customers. The evaluation was conducted as a multi-site evaluation using multiple evaluation methods, including discourse analysis and Fog index analysis. Data collection included documentary analysis, evaluative site visits, surveys, observation, interviews and focus group discussions. The instruments used in this study consisted of a semi-structured interview protocol, focus group questions, a content analysis form and a readability index form. Data analysis used descriptive statistics, cross tabulation, readability index analysis, Fog index analysis, content analysis, analytical induction and cross-site analysis. Effectiveness, efficiency and good governance were rated at a high level. Eight recommendations were provided to the Thai Health Promotion Foundation to support and plan the Family Health Program.
|
|
Evaluation of Family Health Program in Thailand: Multi-Site and Multiple Evaluations
|
| Haruthai Ajpru, Chulalongkorn University, ajpru19@gmail.com
|
| Suwimon Wongwanich, Chulalongkorn University, wsuwimon@chula.ac.th
|
|
The Family Health Program in Thailand was managed by the Family Network Foundation, which received funding from the Thai Health Promotion Foundation. The objectives of this research were (1) to evaluate the success, value, efficiency, effectiveness and sustainability of the Family Health Program, (2) to evaluate its good governance, and (3) to provide information and recommendations to the Thai Health Promotion Foundation. The sample comprised 3,040 stakeholders and customers. The evaluation was conducted as a multi-site evaluation using multiple evaluation methods, including discourse analysis and Fog index analysis. Data collection included documentary analysis, evaluative site visits, surveys, observation, interviews and focus group discussions. The instruments used in this study consisted of a semi-structured interview protocol, focus group questions, a content analysis form and a readability index form. Data analysis used descriptive statistics, cross tabulation, readability index analysis, Fog index analysis, content analysis, analytical induction and cross-site analysis. Effectiveness, efficiency and good governance were rated at a high level. Eight recommendations were provided to the Thai Health Promotion Foundation to support and plan the Family Health Program.
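The Gunning Fog index referenced above is a standard readability formula: 0.4 x [(average words per sentence) + 100 x (proportion of complex words)]. As an illustration only - this is not the authors' instrument, and it applies to English text - a minimal Python sketch of the calculation might look as follows, with "complex" words approximated as those of three or more syllables via a rough vowel-group heuristic (an assumption made solely for this sketch):

    import re

    def count_syllables(word):
        # Rough approximation: count runs of consecutive vowels (illustrative assumption).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def gunning_fog(text):
        # Simple sentence and word splitting; real instruments handle punctuation more carefully.
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        complex_words = [w for w in words if count_syllables(w) >= 3]
        avg_sentence_length = len(words) / len(sentences)
        pct_complex = 100.0 * len(complex_words) / len(words)
        return 0.4 * (avg_sentence_length + pct_complex)

    sample = ("The Family Network Journal publishes practical guidance for parents. "
              "Shorter sentences and familiar words generally lower the index.")
    print(round(gunning_fog(sample), 1))  # a lower value indicates easier reading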
|
|
Qualitative Research Approach in Effectiveness Evaluation of The Project on Development of Strategy for the Single Parent Alliance Network
|
| Doungnetre Thummakul, Chulalongkorn University, doungnetre@yahoo.com
|
| Suwimon Wongwanich, Chulalongkorn University, wsuwimon@chula.ac.th
|
|
The Project on Development of Strategy for the Single Parent Alliance Network is a project developed under the Family Health Program by the Thai Health Promotion Foundation. The purpose of this research was to study the development of strategy for single-parent families to achieve stability, systematic operation, and work readiness. The research also examined the resulting body of knowledge and societal influences through documentary analysis, evaluative site visits, in-depth interviews, and focus group interviews. The research found that effective strategies included instilling a service-minded volunteer spirit and encouraging a sense of belonging and participation; in terms of knowledge, the single parent family network model proved capable of sustaining itself. It should also be noted that the single parent alliance network has been able to establish itself as a cohesive group, with increasing involvement from other provinces in the country.
|
|
Use Of Cognitive Interviewing In Evaluating The Effectiveness Of Media In The Media Project For Children, Youth And Families Under The Family Health Program
|
| Chutima Suebwonglee, Chulalongkorn University, chu322@hotmail.com
|
| Suwimon Wongwanich, Chulalongkorn University, wsuwimon@chula.ac.th
|
|
Cognitive interviewing is a technique for examining and identifying problems with questionnaires or materials in survey research. It is a methodology for studying the cognitive processes, comprehension, and interpretation associated with words, sentences or concepts appearing in questionnaires or materials, as understood by interviewees drawn from the target group of the research. This research applied cognitive interviewing to evaluate the effectiveness of three types of media produced for the media project for children, youth and families: 1) the Family Happiness column in the Khaosod newspaper; 2) the Family Network Journal; and 3) the website at www.familynetwork.or.th. These media are produced and published by the Family Network Foundation under the Family Health Program, supported by the Thai Health Promotion Foundation Fund.
The research was conducted through cognitive interviews using two sub-techniques: 1) thinking aloud, to study the cognitive processes of the interviewees, and 2) verbal probing, to gather in-depth opinions on vague issues or answers. The sample group comprised 10 representatives of consumers of all three media, and the data were analyzed by content analysis and analytical induction. The research findings yielded conclusions on the respondents' opinions of the media, in terms of the components of the content, the design and structure of the media presentation, the language used, the type of production and publication of the media, and artistic elements such as the placement of accompanying images, font size and color, as well as suggestions from the sample group for improving the media for greater effectiveness and for generating outcomes that meet the objectives of the project. The in-depth data from this study, which are detailed, clear, compatible with actual situations and reflective of the cognitive processes of the target group, help affirm the benefit of applying the cognitive interview technique for the evaluation and development of research materials.
|
|
Session Title: Evaluating Science Education Programs for Youth: Best Practices and Lessons Learned
|
|
Panel Session 894 to be held in Suwannee 12 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Kathy Dowell, Partners In Evaluation & Planning, kadowell@usa.net
|
| Abstract:
Increasing students' interest in and enjoyment of science is an increasingly important priority for educators and policymakers alike. Educating students in the sciences and increasing the number of students who enter careers in the science field will help ensure that the U.S. maintains its status as a global leader in science and technology. Evaluation of science education programs has flourished over the past several decades, as we try to learn what works and what doesn't in fostering a love of science among young people. This panel will focus on methods used to evaluate a variety of science education programs, ranging from Science, Technology, Engineering, and Math (STEM) programs that seek to incorporate STEM subjects into existing science curricula, to an after-school STEM program for adolescent girls, to mobile science laboratory programs. Presentations will focus on methodologies, strengths and challenges of current methods, and lessons learned in evaluating science education programs.
|
|
Evaluating Mobile Science Laboratories: Successes, Challenges, and Lessons Learned
|
| Kathy Dowell, Partners In Evaluation & Planning, kadowell@usa.net
|
| Christina Lynch, Partners in Evaluation & Planning, colynch@verizon.net
|
|
Mobile science laboratories are becoming an increasingly popular way to bring science education to students. These programs are designed to increase student knowledge of science content, and increase their enthusiasm and interest in pursuing careers in the science field. Activities address a wide variety of science topics, including forensics, genetics, diseases, anatomy and physiology. One of the greatest benefits of mobile science labs is that students get the opportunity to use advanced scientific equipment and participate in hands-on activities that are designed to pique their interest in science. This presentation will share experiences from evaluating three mobile science lab programs. Evaluations have focused on measuring changes in student knowledge, attitudes toward science, and interest in science careers, as well as teacher satisfaction. This paper will focus on methods used, challenges and successes in evaluating mobile science labs, lessons learned, and issues that have yet to be resolved.
|
|
|
Using Science Notebooks to Embed Evaluation Into an After School Science Program
|
| Kristin M Bass, Rockman et al, kristin@rockman.com
|
|
Universe Quest is a multi-year afterschool STEM education program in which adolescent girls learn database-enabled astronomy and undertake game-authoring to engage in and acquire IT and science skills. This presentation describes how science notebooks have embedded assessment and evaluation into the program in an informative, meaningful way. Laboratory notebooks allow students to record information about investigations for later review, revision, and communication (Shepardson & Britsch, 1997, 2004). The contents of students' notebooks are heavily influenced by teacher practices and present varying opportunities for assessment (Baxter, Bass & Glaser, 2001). In Universe Quest, an interactive format allows students to reflect on what they've learned and encourages instructors to provide feedback. Evaluators are studying the notebooks for evidence of knowledge and skill development. The presentation will discuss what is working well with the notebooks, what has changed since the notebooks were introduced, and what the notebooks are contributing to the overall evaluation.
| |
|
Challenges in Evaluating a Middle School Science, Technology, Engineering, and Mathematics (STEM) Program Emphasizing Engineering and Technology
|
| Janet Matulis, University of Cincinnati, matulij@ucmail.uc.edu
|
| Nancy Knapke, Fort Recovery School District, fortnancy@bright.net
|
|
This presentation describes the contextual and instrumentation challenges in evaluating MAKE-it, a middle school STEM program incorporating engineering and technology in the teacher professional development and curricula of three rural Ohio school districts. Evaluation of K-12 STEM education historically has focused on science and mathematics, subjects typically mandated in a district's curriculum and "covered" by content standards. Engineering, in particular, and technology have been the marginalized components of STEM education and are not as easily identifiable, if present at all, in curricula. This presentation highlights the MAKE-it project's instrumentation process used to help determine 1) teachers' self-efficacy incorporating engineering and technology principles into instruction and assessing related student performance, 2) students' knowledge and skills reflecting engineering and technology principles, 3) students' perceptions of their courses in developing STEM knowledge and skills, and 4) students' awareness and interest related to engineering and other STEM careers.
| |
|
Defining Career Academies and Science, Technology, Engineering and Mathematics (STEM)-ocity
|
| Bridget A Cotner, University of South Florida, bcotner@cas.usf.edu
|
| Maressa L Dixon, University of South Florida, mdixon83@gmail.com
|
| Corinne Alfeld, Academy for Educational Development, calfeld@aed.org
|
| Tasha-Neisha Wilson, University of South Florida, twilson@cas.usf.edu
|
|
Career academies are small schools within schools focused on a broad career theme, rigorous course taking, and school-business partnerships. Complications occur when career academies are considered from the perspectives of state, district and school actors. Issues influencing the selection and evaluation of career academies include: 1) differences between registered and unregistered career academies; 2) questions of registration versus attendance and performance; and 3) the degree to which the career academies and the required courses can be identified as having a science, technology, engineering, or mathematics (STEM) focus - the STEM-ocity level. In this paper, these issues are explored using the state of Florida as an example in an ongoing study sponsored by the National Science Foundation.
| |
|
Session Title: Evaluating School Mathematics and Science Textbooks and Classroom Practices Using an Assessment for Learning Framework
|
|
Demonstration Session 895 to be held in Suwannee 13 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Presenter(s): |
| Steven Ziebarth, Western Michigan University, steven.ziebarth@wmich.edu
|
| Arlen Gullickson, Western Michigan University, arlen.gullickson@wmich.edu
|
| Jonathan Engelman, Western Michigan University, jonathan.a.engelman@wmich.edu
|
| Amy Bentz, Western Michigan University, amy.e.bentz@wmich.edu
|
| Abstract:
Since 1990, Assessment for Learning (AfL) has emerged as a guiding framework for classroom practice in school mathematics and science with the potential to dramatically increase student achievement. During this same timeframe, new mathematics and science curricula (textbooks), together with more investigative teaching and improved assessment practices, have been advocated by both the National Council of Teachers of Mathematics (NCTM, 1989, 2000) and the National Science Teachers Association (NSTA, 1996). This session examines research into the extent to which new school mathematics and science textbooks have incorporated AfL practices within their materials and, using a similar analysis framework, how we have developed an observation protocol to help gather data about AfL teaching practices within secondary classrooms. Participants will examine the analysis framework, discuss the textbook evaluation methodology, and both review and react to the observation protocols that have been created with potential for use in their own AfL-related research.
|
|
Session Title: Using a Longitudinal Mixed Method Approach to Evaluate a Professional Development Community: A Case Study of the Forum for Western Pennsylvania School Superintendents
|
|
Multipaper Session 896 to be held in Suwannee 14 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
|
| Abstract:
Evaluators often assist programs in developing theories of action and processes for dealing with change. In this session, we present our longitudinal mixed-method evaluation of The Forum for Western Pennsylvania School Superintendents. In its thirteenth year, The Forum provides practical, professionally relevant strategies to alleviate the sense of isolation that often accompanies the position of superintendent and to help members appreciate the complexity of issues increasingly facing the field of education. The Forum is a relationship-oriented and socially interconnected professional development program that understands and influences both the social contexts and the effective practices that shape the educational task at macro and micro levels. In this session, we display several components of the overall Forum evaluation plan. Through our evaluation, we not only describe and analyze outcomes, but also portray the patterns of culture and interaction that express the entirety of the program's story.
|
|
The Role of Evaluation in Developing Program Theory: Using Retrospective Logic Modeling To Tell a Program's Story
|
| Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
|
| Cara Ciminillo, University of Pittsburgh, ciminillo@gmail.com
|
| Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
|
|
In this paper, we use a retrospective logic-modeling strategy to portray the Forum's theory of action, chronicle the evolution of the program, and inform future change. Our evaluation plan utilizes a longitudinal mixed-methods approach to help tell the program's story. Our meta-analysis of evaluation findings helps to position the Forum within the discourse of education professional development. We use the phrase community of educational practice and social action to identify the Forum as a hybrid of communities of practice and professional learning communities. The Forum is a relationship-oriented and socially interconnected professional development program that understands and influences both the social contexts and the effective practices that shape the educational task at macro and micro levels. This paper illustrates the role of evaluation in developing program theory.
|
|
Organizational Evolution: Recognizing and Transferring Culture to Newcomers
|
| Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
|
| Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
|
|
As the Forum has evolved over the last 14 years, it has matured as a special community of practice and learning. Members have developed close, professionally intimate relationships that break down the sense of isolation characteristic of the superintendency. As long-time members age and retire from practice and newcomers are invited into the group, the nature of community within the organization changes. This paper examines these issues through the use of focus group data.
|
|
Telling the Forum's Membership Story: Portraying Change Using the Basics of Geographic Information Systems (GIS)
|
| Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
|
| Justin Rodgers, University of Pittsburgh, jtr30@pitt.edu
|
|
In this paper, we discuss our use of geographic information systems to construct a visual narrative of program membership change. Since its founding in 1996, membership in The Forum for Western Pennsylvania School Superintendents has grown from a concentration around Pittsburgh to a broader regional presence. Our evaluation task is to research and display the Forum's changing presence in terms of member and district demographics and to highlight collegial networks that have grown out of the Forum. Thus, this work serves as a foundation for our future evaluation project in social network analysis. Using GIS, we are able to create a more coherent portrayal of membership change than we could with standard report writing. While this application draws on only a small portion of the power of GIS software, our aim is to show that evaluation practitioners with a working knowledge of information systems can map membership change using GIS, as sketched below.
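As a minimal sketch of this kind of mapping, using the Python geopandas library (the file names, column names, and data below are hypothetical; the Forum's actual data and GIS platform are not specified here):

    import geopandas as gpd
    import pandas as pd

    # Hypothetical inputs: district boundaries and a membership-by-district table.
    districts = gpd.read_file("pa_school_districts.shp")   # assumed boundary file
    members = pd.read_csv("forum_membership.csv")          # assumed columns: district, members_2009

    # Join membership counts to district geometries and draw a simple choropleth.
    mapped = districts.merge(members, on="district", how="left")
    ax = mapped.plot(column="members_2009", cmap="Blues", legend=True,
                     missing_kwds={"color": "lightgrey"})
    ax.set_title("Forum membership by district (illustrative)")
    ax.figure.savefig("forum_membership_map.png")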
|
|
Leveraging Resources Through Evaluation
|
| Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
|
|
This paper explores the ways in which initiatives from member Superintendents, provided with small seed-funding from the Forum, used evaluation capacity building to refine planning and implementation in ways that leveraged additional funding and matching resources to expand initiatives. Evaluation can help organizations build a compelling case of impact and further potential that can elicit expanded support from decision-makers and funders.
|
|
Session Title: Evaluating Teacher Professional Development: Contexts and Methods
|
|
Multipaper Session 897 to be held in Suwannee 15 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Christine Emmons,
Yale University, christine.emmons@yale.edu
|
|
Placing Teachers' Career Development in Context: Revisioning Science, Technology, Engineering, and Mathematics (STEM) Professional Development
|
| Presenter(s):
|
| Darnella Davis, COSMOS Corporation, ddavis@cosmoscorp.com
|
| Abstract:
A mixed method examination of eight National Science Foundation (NSF)-supported training programs, Teacher Institutes for the 21st Century, sheds light on assessing reforms among K-20 partnerships with varying configurations. The study posits a framework for considering training activities, including embedded professional development and distributed leadership among learning communities, as part of nesting spheres of influence that range from small empowered groups to competitors at the global level. The varying configurations may signal the need for evaluators to reconsider existing career development routes to meet the nation's pressing need for teachers' mastery of science, technology, engineering, or mathematics (STEM) content. The eight sites provide training institutes under NSF's Math and Science Partnership (MSP) Program which encourages creative partnerships that engage STEM faculty in work with K-12 districts to improve teacher content knowledge. These new configurations require fresh perspectives on assessing program implementation. The author is an MSP Program Evaluation Co-PI.
|
|
Measuring the Effects of Collaboration and Professional Development on the Technology Integration in K-12 Classroom Instruction
|
| Presenter(s):
|
| Melinda Mollette, North Carolina State University, melinda_mollette@ncsu.edu
|
| Jason Osborne, North Carolina State University, jason_osborne@ncsu.edu
|
| Tricia Townsend, North Carolina State University, latricia_townsend@ncsu.edu
|
| Abstract:
IMPACT is a media and technology program, funded through the North Carolina Department of Public Instruction, designed to support and promote effective instruction that integrates technology. A mixed-methods approach was used to evaluate whether implementation of the IMPACT model in K-12 schools improves student achievement and technology skills, increases teacher use of technology during instruction, and increases teacher morale, as well as attitudes toward technology use. The model is currently being implemented as a district-wide initiative in seven school districts throughout the state, which include a total of thirty K-12 schools. In addition, the evaluation will determine if the collaborative environment and quality of professional development provided in the uses of various technological tools results in an increase in the depth and frequency of technology integration by classroom teachers in all grade levels and subject areas. Information about the measures, data collection/analysis methods and implementation issues will be addressed.
|
|
A Longitudinal Evaluation of the Impact of Professional Development on Science Teacher Self-efficacy and the Implementation of Inquiry-based Methods in the Classroom
|
| Presenter(s):
|
| Aruna Lakshmanan, East Main Educational Consulting LLC, alakshmanan@emeconline.com
|
| Michael Elder, Onslow County Schools, michael.elder@onslow.k12.nc.us
|
| Aaron Perlmutter, East Main Educational Consulting LLC, aperlmutter@emeconline.com
|
| Barbara Heath, East Main Educational Consulting LLC, bheath@emeconline.com
|
| Abstract:
This paper discusses evaluation activities related to a DOE-funded Math and Science Partnership (MSP) in a county in North Carolina, in which professional development activities were aimed at increasing content knowledge and at improving pedagogy of elementary and middle grade science and math teachers. Several measures were used to assess the impact of these activities at four points in time. These included measures of teacher self-efficacy and outcome expectancy using the Science Teaching Efficacy Belief Instrument (STEBI), and of the extent to which teachers practice inquiry-based instruction using the Reformed Teaching Observation Protocol (RTOP). Professional development is an ongoing process, and change does not happen overnight (Glusac, 2008). Often, change in beliefs precedes change in practice. While prior studies have reported a correlation between teacher self-efficacy and the use of inquiry-based methods, very few are longitudinal investigations. This paper shares the findings of the longitudinal study and discusses implications.
|
|
I Need Structure: Evaluation Without a Framework
|
| Presenter(s):
|
| Jeffrey Wasbes, Research Works Inc, jwasbes@researchworks.org
|
| Abstract:
STEM focused initiatives in K-12 education are new endeavors. The absence of curricular standards or a framework of necessary content knowledge (in New York State) in this discipline creates confounding challenges to the evaluator. Using knowledge gained from empirical studies and supplemented by research, this paper explores issues that arise due to this lack of context for evaluation of STEM focused teacher professional development programs funded through Title II, Part B of NCLB. Some of these challenges are known to systemic evaluation practitioners; some are particular to the program that is the subject of this paper, which is set in the New York City School System. All are confounding to a relatively new evaluator. The author hopes that exploration of these issues stresses the need for the creation of a framework of STEM standards.
|
|
Using Mixed Methods to Assess the Transfer of Professional Learning to Classroom Practice in the Evaluation of Two Statewide Teacher Quality Initiatives
|
| Presenter(s):
|
| Thomas Horwood, ICF International, thorwood@icfi.com
|
| Rosemarie O'Conner, ICF International, ro'conner@icfi.com
|
| Sarah Decker, ICF International, sdecker@icfi.com
|
| Barbara O'Donnel, Texas Education Agency, barbara.odonnel@tea.state.tx.us
|
| Abstract:
This paper explores the processes evaluators took to assess teachers' acquisition and transfer of skills and knowledge to the classroom. Evaluations of two statewide (Texas) teacher quality initiatives will be discussed, the Beginning Teacher Induction and Mentoring Program and the Texas Adolescent Literacy Academies. The evaluations demonstrate the use of different data collection techniques to assess the transfer of learning to classroom practice of teacher participants: (a) classroom observations, (b) expert panel review of training curricula, and (c) didactic interviews with beginning teachers and their mentors. The session will describe the use of mixed methods evaluation methodologies to triangulate findings and how findings were translated into recommendations for program improvement and legislative action. This methodology and data from these evaluations will contribute to the design and delivery of teacher mentoring and literacy professional development. Furthermore, this methodology can be applied to future evaluations of any teacher professional development endeavor.
|
| | | | |
|
Session Title: Foundation Evaluation in a Dismal Economic Context
|
|
Panel Session 898 to be held in Suwannee 16 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Deborah Bonnet, Fulcrum Corporation, dbonnet@fulcrum-corp.com
|
| Discussant(s):
|
| Hallie Preskill, FSG Social Impact Advisors, hallie.preskill@fsg-impact.org
|
| Abstract:
Foundations' endowments have been hit hard by the economic downturn, forcing choices between scaling back current grant making and compromising future levels of giving. Panelists will discuss how their foundations have coped with this conundrum, first describing the recession's impact on their foundations' finances, then turning to how their foundations have responded: By giving less (to protect long-term assets), or more (to meet growing demand)? By shifting grant making priorities to fulfill pressing human needs, or by standing steadfastly behind long-term missions or grantees?
Next, panelists will address how their foundations' evaluation functions have adjusted: By shrinking to save money? By expanding to ensure accountability? By elevating to higher levels of use as foundations discover evaluation's value in making hard choices? By doing something else altogether?
|
|
Fresh Findings From National Scans
|
| Deborah Bonnet, Fulcrum Corporation, dbonnet@fulcrum-corp.com
|
|
This presentation will summarize the latest research addressing the recession's effects on foundations' endowments, grant making, and evaluation functions.
|
|
|
Bill and Melinda Gates Foundation: Increasing Giving in Spite of Fallen Assets
|
| Kendall Guthrie, Bill & Melinda Gates Foundation, kendall.guthrie@gatesfoundation.org
|
|
As of this writing in March 2009, assets have fallen, but grant making will continue to increase - although at a smaller growth rate than expected. This is creating both opportunities and challenges for evaluation. Since there will be fewer new grants than initially planned, the foundation is putting increased emphasis on managing grants to results. Evaluation and organizational learning are key tools to support that effort. At the same time, the foundation is rescoping existing evaluations to better fit current priorities.
| |
|
The Atlantic Philanthropies: Still Planning to Retire in 2016
|
| John Healy, Atlantic Philanthropies, ja.healy@atlanticphilanthropies.org
|
|
About a decade ago, the Atlantic Philanthropies decided to break from the pack by not aiming for perpetuity, but rather, to spend down its endowment by 2016. Because the foundation has not restricted spending to the five percent requirement for some time, and its endowment has weathered the recent turmoil better than most, giving is expected to hold steady for now. However, the foundation's strategies are under review, with evaluation playing a key role.
| |
|
Marin Community Foundation: Balancing the Needs for Immediate and Longer-term Impact
|
| Tim Wilmot, Marin Community Foundation, twilmot@marincf.org
|
|
As of this writing in March 2009, assets are down, and MCF is in the midst of formulating its grant making responses in the context of both short- and longer-term community impact. Having a measurable impact in this time of increasing demands and declining resources requires MCF to be even more strategic, focused and accountable in its grant making. Therefore, MCF's evaluation function is playing a heightened role in building measurable outcomes with the foundation's programs and its partner grantees in order to meet both the short- and longer-term needs of Marin County and its residents.
| |
|
Lumina Foundation for Education: Holding the Course, for Now
|
| Mary Williams, Lumina Foundation for Education, mwilliams@luminafoundation.org
|
|
Relatively new, Lumina Foundation is still evolving its strategies for promoting access and success in postsecondary education. As of this writing in March 2009, assets are down, but implementation of planned transitions in strategy is continuing. Giving is expected to hold steady this year, but may need to retract in the future. The economic downturn occurred just as the Foundation established an "audacious goal" for postsecondary degree completion, launched a comprehensive strategic planning process, and found itself highly aligned with the higher education goals of the new federal administration. The role of evaluation is being re-examined and will surely include a greater emphasis on performance metrics.
| |
|
Session Title: Emerging Evaluation Practices in the Context of Higher Education Institutions
|
|
Multipaper Session 899 to be held in Suwannee 17 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Jacqueline Singh,
Indiana University Purdue University Indianapolis, jhsingh@iupui.edu
|
| Discussant(s): |
| Jennifer Reeves,
Nova Southeastern University, jennreev@nova.edu
|
|
Another Look at Methods of Evaluating Course Placement Systems
|
| Presenter(s):
|
| Howard R Mzumara, Indiana University Purdue University Indianapolis, hmzumara@iupui.edu
|
| Wen Linn, Indiana University Purdue University Indianapolis, lin39@iupui.edu
|
| Abstract:
The purpose of this presentation is to describe methods for evaluating the effectiveness of course placement systems based on a systematic approach that involves asking specific evaluation questions and using multiple methods or data sources to answer the questions posed in a validation study. Presenters will address an approach to system evaluation that promotes use of comprehensive validation studies (which use decision theory and logistic regression to provide validity evidence for estimating the probability of success in particular courses and to assess the appropriateness of placement cutoff scores), placement-enrollment comparisons, placement testing exit surveys (or pre-enrollment questionnaires), end-of-course evaluations, and feedback from academic advisors and instructors. The session will include an interactive discussion based on a review of a selected list of evaluation-related questions and methods used to facilitate validation studies involving the ACT COMPASS Mathematics Placement System for placing students in mathematics courses at a large Midwestern university.
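As a rough illustration of the logistic-regression component described above (not the presenters' actual COMPASS analysis), a validation study might model the probability of course success as a function of placement score and then inspect candidate cutoffs; the variable names, data, and cutoffs below are hypothetical.

```python
# Hypothetical sketch: estimate P(course success) from a placement score and
# examine candidate cutoff scores. Synthetic data, not real COMPASS results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores = rng.integers(20, 100, size=500)              # placement test scores
p_true = 1 / (1 + np.exp(-(scores - 60) / 8))         # assumed true success curve
success = rng.binomial(1, p_true)                     # 1 = earned C or better

model = LogisticRegression().fit(scores.reshape(-1, 1), success)

for cutoff in (50, 60, 70):
    prob = model.predict_proba([[cutoff]])[0, 1]
    print(f"Estimated P(success | score = {cutoff}) = {prob:.2f}")
```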
|
|
Assessing Readiness for a 'Culture of Learning'
|
| Presenter(s):
|
| John Stevenson, University of Rhode Island, jsteve@uri.edu
|
| Melinda Treml, Northern Arizona University, melinda.treml@nau.edu
|
| Thomas Paradis, Northern Arizona University, thomas.paradis@nau.edu
|
| Abstract:
In colleges and universities, organizational context is critical for the use of assessment data in program improvement. This paper examines ways of assessing organizational readiness for the kind of learning-organization environment that features an institution-wide, ongoing, highly valued system for using student learning outcome data in curriculum design. We propose a five-stage developmental model, and illustrate its utility with two empirical strategies. A chairperson survey and an assessment report analysis are provided to demonstrate how this developmental perspective can aid internal evaluators in directing attention to the organizational context and to steps needed to promote systemic change from an externally driven, top-down, mandated assessment process to an internally valued, learning-centered process.
|
|
Evaluating Change in Interdisciplinary Collaboration
|
| Presenter(s):
|
| Jill Lohmeier, University of Massachusetts Lowell, jill_lohmeier@uml.edu
|
| Steven Lee, University of Kansas, swlee@ku.edu
|
| Abstract:
Evaluators are often asked to assess changes in the behaviors of members of an organization. One goal for many organizations is to increase the willingness of their members to work together. In academic settings, there is often a desire to increase interdisciplinary work. While evaluators are frequently asked to assess changes in collaboration, or interdisciplinary work, few, if any, measures of interdisciplinary work are available. This presentation will describe the creation and implementation of an evaluative Tool for Interdisciplinary Assessment (TIA). Twenty-two graduate students and faculty associated with an NSF grant at a Midwestern university took the TIA online, as well as a modification of the Levels of Collaboration Scale (LCS; Frey, Lohmeier, Lee & Tollefson, 2006). The effectiveness of using the TIA and LCS for evaluation, as well as the relationship between the two, will be discussed. Challenges associated with evaluating interdisciplinary collaboration will also be discussed.
|
|
Engaging Constituents: Using Assessment Data to Inform Practice
|
| Presenter(s):
|
| Candace Lacey, Nova Southeastern University, lacey@nova.edu
|
| Barbara Packer-Muti, Nova Southeastern University, bpacker@nova.edu
|
| Jennifer Reeves, Nova Southeastern University, jennreev@nova.edu
|
| Abstract:
The Southern Association of Colleges and Schools (SACS) requires that all universities seeking accreditation implement a Quality Enhancement Plan (QEP) based on the core facet of how the university focuses on enhancing student learning. Based on the theme of enhancing student academic engagement, central administration at Nova Southeastern University (NSU) determined that a 360 degree assessment of all constituents would provide important insights into perceptions and drivers of engagement. To this end, NSU entered into a multi-year assessment of engagement with questionnaires distributed via a web-based modality to all students, alumni, faculty, staff, and administration. Findings from these assessments were distributed to the 14 colleges and nonacademic units within the university to assist in developing student, employee, and alumni engagement plans. This session will provide information about the methodology utilized for evaluation, assessment, instrumentation, and dissemination of findings.
|
| | | |
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
The Analysis and Interpretation of Focus Groups in Evaluation Research |
|
Roundtable Presentation 900 to be held in Suwannee 19 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Qualitative Methods TIG
|
| Presenter(s):
|
| Tom Massey, University of South Florida, massey@fmhi.usf.edu
|
| Abstract:
Focus groups have an established history of use in applied research and evaluation. The fundamental methods of the focus group technique have been well discussed, as have advantages of their use. Less guidance, however, tends to be provided for evaluators regarding the analysis of data resulting from focus groups or how to organize and defend conclusions drawn from the analysis. This roundtable will briefly review the methodology of the focus group with an emphasis on discussing thematic analysis of latent data at three distinct levels: articulated, attributional, and emergent. The three levels are described and illustrated with respect to their value and contribution to interpretation within the framework of the group method and qualitative standards of thematic analysis. Roundtable participants will be encouraged to share examples of their analysis of focus groups in evaluation. By discussing and sharing experiences, participants will gain new insights and enhance their skills in the use and interpretation of focus groups.
|
| Roundtable Rotation II:
Freehand Drawings as Visual Data: College Course Evaluations Using the Appreciative Inquiry Model |
|
Roundtable Presentation 900 to be held in Suwannee 19 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Qualitative Methods TIG
|
| Presenter(s):
|
| Corenna Cummings, Northern Illinois University, ccummings@niu.edu
|
| Lara Lyles, Northern Illinois University, llyles@niu.edu
|
| Abstract:
Evaluators often seek innovative alternatives to common evaluation issues such as course evaluations. This study investigates the efficacy of visual data - freehand drawings - in the appreciative inquiry model for the purpose of evaluating graduate level courses in program evaluation, research methods, and classroom assessment. Appreciative Inquiry has been contrasted with problem solving. In problem solving we identify the problem, analyze the cause, suggest solutions, and generate action plans; whereas, in appreciative inquiry, we consider the best of what exists, envision the possibilities, dialog about what should exist, and innovate regarding possible change (Hammond, 1998). Evaluators may be interested in the use of freehand drawings as visual data used within the context of course evaluations and the use of a model for guiding change that focuses on the positive aspects of the issue under consideration.
Hammond, S. (1998). The thin book of appreciative inquiry. Bend, OR: Thin Book Publishing Company.
|
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Consistency and Change in Extension Participatory Evaluation: Reflections on Focus Groups About How Farmers Learn |
|
Roundtable Presentation 901 to be held in Suwannee 20 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Presenter(s):
|
| Nancy Franz, Virginia Cooperative Extension, nfranz@vt.edu
|
| Joseph Donaldson, University of Tennessee at Knoxville, jdonald2@utk.edu
|
| Robert Richard, Louisiana State University, rrichard@agcenter.lsu.edu
|
| Abstract:
The purpose of this roundtable is to reflect on our efforts to balance consistency with change in our multi-year participatory action evaluation by adapting our focus group protocol to what we were learning along the way. While both are important, we share several examples of how our flexibility and openness to adapt our protocol to our evaluation findings led to methodological refinements and serendipitous learnings. We discuss implications for both Extension education and evaluation.
|
| Roundtable Rotation II:
Professional Development of Extension Employees During an Economic Downturn: Implications for Cost-Benefit Analysis, 'Just-In-Time' Resources, and Evaluation of Core Competency Training |
|
Roundtable Presentation 901 to be held in Suwannee 20 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Presenter(s):
|
| Karen Ballard, University of Arkansas, kballard@uaex.edu
|
| Nikki Cooper, University of Arkansas, ncooper@uaex.edu
|
| Abstract:
A strong professional development system for Extension organizations has never been more critical due to the divergent demographics of new hires, the brain drain caused by early retirement incentives, and the loss of Extension positions due to budget cuts. The reduction in many professional development and training budgets increases the importance of multi-state collaboration, evaluation of historical training priorities, and the opportunity to access and utilize new educational technologies and products. This roundtable discussion will allow participants the opportunity to share current challenges and to explore new opportunities for collaboration and multi-state capacity building. Discussion points for participants will include identification of worst experiences and best practices, which will provide a framework for considering modifications to existing professional development programs and systems. Participants will be provided with a list of on-line resources matched to recognized Extension educator competency areas.
|
|
Session Title: Culturally Responsive Leaders: Toward Transforming and Adapting Communities for Public Good
|
|
Multipaper Session 903 to be held in Wekiwa 3 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Chair(s): |
| Shane Chaplin, Duquesne University, shanechaplin@gmail.com
|
| Discussant(s): |
| Karen Kirkhart, Syracuse University, kirkhart@syr.edu
|
| Ricardo Millett, Ricardo Millett & Associates, ricardo@ricardomillett.com
|
| Abstract:
The American Evaluation Association/Duquesne University Graduate Education Diversity Internship Program (AEA/DU GEDIP) is a pipeline development program designed to both increase the number of evaluators of underrepresented groups in evaluation and to develop future leaders from these underrepresented groups in the profession. In the context of leadership development, the program works on two levels reflecting a key distinction in the leadership literature made by Heifetz (1994) between technical problems requiring authority (technical know-how or expertise, e.g., changing a tire) and problems requiring leadership (that is, complex issues without a clear solution that require adaptation, e.g., eliminating poverty). During the internship, interns are taught specific evaluation skills as they work on an evaluation project. This helps them address any technical problems as these arise in their evaluation work, and begins the lifelong process of creating authorities/experts in the field. Interns are also simultaneously challenged to adapt to and confront problems that go beyond any technical knowledge, that is, complex problems (in this case surrounding cultural responsiveness) that require leadership and not simply authority. The papers in this session reflect the specific work and struggles of interns as they went through this twofold journey of addressing both technical and leadership challenges in the context of their evaluation projects.
Heifetz, R.A. (1994). Leadership Without Easy Answers. Cambridge, Mass: Belknap Press.
|
|
Developing Criteria for Addressing Diversity in Evaluation of Science, Technology, Engineering, and Mathematics (STEM) Programs
|
| Wanda Casillas, Cornell University, wandacasillas@gmail.com
|
|
This project is an initial step in the effort to 1) establish criteria for conducting an evaluation that addresses issues relevant to diversity and 2) establish criteria for assessing cultural responsiveness in program planning as part of an evaluation protocol. A secondary objective of this study is to evaluate whether points 1 and 2 are mutually exclusive aspects of evaluation. We predicted that program staff and evaluators working with diverse populations would generate similar taxonomies of culturally competent behaviors and attitudes, which may result in one generalizable checklist of criteria that is relevant across domains. In this study we recruited staff from various programs currently conducting evaluations throughout New York State. We also recruited evaluators from around the country who have a concern with diversity issues in evaluation, as identified by their affiliation with American Evaluation Association topical interest groups with a diversity focus. Participants were asked to generate statements about behaviors and attitudes which address diversity in evaluation and program planning. By using concept mapping methodology (Trochim, 1989; Quinlan, Kane, & Trochim, 2008) we employed a participatory approach to establishing criteria for evaluation and program planning for diverse populations. This method invited various stakeholders to contribute to the development of criteria and also allowed us to analyze differences between stakeholders.
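For readers unfamiliar with concept mapping, the sketch below illustrates, on invented data, the generic scaling-and-clustering step of the method (participant sortings of statements become a similarity matrix that is scaled into two dimensions and clustered); it is not the authors' analysis, and the statement count, libraries, and parameters are assumptions.

```python
# Toy concept-mapping step: co-sorting counts -> 2-D map -> clusters.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_statements = 12
# how often each pair of statements was grouped together by participants
co_sort = rng.integers(0, 10, size=(n_statements, n_statements))
co_sort = (co_sort + co_sort.T) / 2
dissimilarity = co_sort.max() - co_sort               # more co-sorting = closer
np.fill_diagonal(dissimilarity, 0)

points = MDS(n_components=2, dissimilarity="precomputed",
             random_state=1).fit_transform(dissimilarity)
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(points)
print(clusters)                                       # cluster label per statement
```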
|
|
Examining Stakeholder Input: A Culturally Responsive Evaluation of the Women's Resource Center Student Volunteer Program
|
| Brandi Gilbert, University of Colorado at Boulder, brandi.gilbert@colorado.edu
|
|
This paper explores the experiences of college students participating in the Women's Resource Center (WRC) Student Volunteer Program. The mission of the WRC is to create a campus environment where women will thrive. Staff and students at the Center ground their work on seven key foundations: action, celebration, leadership, learning community, social justice, spirit of collaboration, and support. Within the past few years, the role of the WRC has been changing. One of its new goals is to do a better job of equipping volunteers for the changing role that they are now beginning to serve. In this context, the WRC has moved away from serving as a referral bank and is now moving towards having student volunteers serve in a more active role to develop and facilitate programming that they are interested in pursuing. The evaluation was conducted using a culturally responsive framework that incorporated a great deal of stakeholder participation. Data were collected through face-to-face interviews with program staff and participants and through document analysis. This assessment assisted the Women's Resource Center staff in restructuring the Student Volunteer Program.
|
|
Assessing the Influences of Evaluability Assessment: An Exploratory Study of Changes in Organizational Attitudes and Behaviors Towards Program Evaluation
|
| Syreeta Skelton, Georgia State University, sskelton@hotmail.com
|
|
The Early Assessment of Programs and Policies to Prevent Childhood Obesity is a collaborative project specifically aimed at identifying and assessing local programs and policies with noteworthy success in improving the eating habits and physical activity levels of children for their readiness to engage in full-scale rigorous program evaluation. The framework used for this study is the evaluability assessment (EA) method, coined by Joseph Wholey. By design, the EA process involved these childhood obesity prevention programs and policies in site visits, interviews, document reviews, logic modeling, and technical assistance activities. While the primary focus of the EA has been on identifying those programs and policies that are the most viable candidates for rigorous outcome study, a secondary interest of this research is in how EA process use, in particular, influences organizational attitudes and behaviors towards program evaluation. Using follow-up survey data collected electronically from participating childhood obesity prevention programs and policies, changes in attitudes and behaviors regarding program evaluation resulting from their experiences in EA will be explored.
|
|
Engaging Students, Engaging Success: An Evaluation of North Lawndale Chapter Prep High School
|
| Asma Ali, University of Illinois at Chicago, asmamali@yahoo.com
|
|
North Lawndale College Preparatory High School was founded in 1996 to provide educational and social supports for students from academically high-risk communities. In the North Lawndale community where the school is located and derives a majority of its students, only 17% of students graduate from high school. The purpose of this evaluation is to document and analyze NLCP's practices as well as the success and challenges of its students once they matriculate to college. More specifically, this evaluation seeks to explore the students' resiliency and its influence on the students' aspirations and achievements (Freeman, 1997; Kozol, 1991; Ladson-Billings, 2006; McDonough, 2004). Together with information collected from current students and other school stakeholders, the alumni data collection strategies will help the team understand which factors enhance the creation, support, and development of African American and Latino students' personal, educational and social opportunities and achievements that fortify them to persevere during their postsecondary years. As a part of her AEA Internship work, Ali will be responsible for the design and execution of the alumni survey and contribute to the qualitative aspects of the alumni study.
|
|
Emerging Findings for Making Connections (MC)
|
| Donna Parrish, Clark Atlanta University, sistachristian_p11824@yahoo.com
|
|
Research on Atlanta neighborhoods in 2000 revealed that many of the most vulnerable families live in five of Atlanta's oldest neighborhoods located just south of Downtown. These neighborhoods comprise a once-thriving African-American community that has experienced a great deal of property disinvestment, population decrease, and general economic decline over the past 30 years. To be a catalyst to strengthen families in these neighborhoods, the Annie E. Casey Foundation has been working to promote neighborhood-scale programs, policies, and activities that contribute to strong, family supporting neighborhoods.
The Making Connections (MC) cross-site survey provides a unique opportunity to track the movement of families into and out of low-income, urban neighborhoods and explore ways in which neighborhoods may be supporting or undermining family well-being. I will present emerging findings from a cross-site analysis of family mobility and neighborhood change in the Making Connections sites. We hope this information can help the MC site teams adapt and strengthen their local strategies, and may also catalyze further discussion about what it means to implement a neighborhood-based family strengthening initiative.
|
|
Native American Perspectives of Gatekeeper Training in the Garrett Lee Smith Youth Suicide Prevention and Early Intervention Program
|
| Cynthia Williams, Georgia State University, cynthiawilliams@yahoo.com
|
|
The Garrett Lee Smith Memorial Act of 2004 was the first to provide funding for the development, evaluation, and improvement of suicide prevention programs administered by SAMHSA's Center for Mental Health Services. The cross-site evaluation of the GLS Suicide Prevention Program is among the largest systematic efforts to date to assess the ability of suicide prevention programs and one of the first national evaluation efforts to attempt to fill gaps in understanding and to establish benchmarks for future research in the area of youth suicide prevention.
The study has two components: one related to campus suicide prevention programs and one related to state and tribal programs. This paper specifically examines the tribal programs and suicide prevention training. It concentrates on the process stage of the evaluation, consisting of qualitative and quantitative data from tribal programs on post-training utilization, key milestones and activities related to implementation of suicide prevention plans, the number of trainings and individuals trained, and referral networks. This paper seeks to address questions specific to Native American training participants. It will also discuss the role the results might play in trainings modified for specific populations.
|
|
Session Title: Evaluation Capacity Building Best Practices in International Donor Non-Governmental Organization (NGO) Programs
|
|
Panel Session 904 to be held in Wekiwa 4 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Adele Liskov, United States Agency for International Development, aliskov@usaid.gov
|
| Abstract:
International donors have many years of experience individually and collectively in support of worldwide development and humanitarian assistance programs. Despite the professionalization of non-governmental organizations (NGOs) in the U.S. and in developing countries, concerns about unmet capacity needs at the organizational and technical levels continue to be raised. This is partially due to the expansion of NGO players during the 1990s and especially after 2000, when the NGO sector grew rapidly in response to globalization and to political and technological advances and opportunities. A body of knowledge accumulated during this period of donor-sustained support to maturing NGO partners in and across development sectors and in humanitarian assistance programs. Donor investment in NGO strengthening, of which evaluation has been a critical element, has yielded results-focused knowledge and best practices in child survival, microenterprise, and civil society, including the innovation of integrating organizational capacity building into technical program implementation. Case studies regarding the role of evaluation will be presented.
|
|
The Importance of Evaluation in Development Programming: United States Agency for International Development's (USAID) Experience
|
| Tom Kennedy, United States Agency for International Development, tkennedy@usaid.gov
|
|
Tom Kennedy, Senior Microfinance Advisor for USAID, will discuss how the Agency is institutionalizing monitoring and evaluation and how the agency's approach in this area has evolved. Mr. Kennedy will also focus on the issue of monitoring and evaluation as a key organizational development tool and how it has been used to strengthen the capacity and effectiveness of USAID's microenterprise development partners and their programs. Mr. Kennedy will address initiatives the agency is undertaking in areas such as the Private Sector Development Impact Assessment Initiative and the generation of new knowledge and learning in evaluation and assessments of interventions designed to reduce poverty and reduce conflict or fragility.
|
|
|
Monitoring and Evaluation to Improve Non-governmental Organization (NGO) Capacity Building
|
| Kenneth Sklaw, United States Agency for International Development, ksklaw@usaid.gov
|
|
Ken Sklaw, Organizational Capacity Advisor in USAID's Implementation Support Division of the Office of HIV/AIDS, will discuss the role of evaluation in the President's Emergency Plan for AIDS Relief (PEPFAR). Over the past five years, PEPFAR has invested in numerous programs designed to build the capacity of NGOs in order to increase participation in local responses to the epidemic. Of these, the New Partners' Initiative (NPI) is among the largest and most visible. This $200 million initiative was designed to increase PEPFAR's ability to provide services through new PEPFAR partners, increase those partners' capacity to provide services, and build community ownership by developing the technical and organizational capacity of local partners. To understand the program's success on each of these goals, monitoring and evaluation activities are key. Collecting data on service delivery is critical to understanding NGO partners' success in reaching their ultimate targets. Understanding the importance of building organizational capacity under NPI, and having the tools to measure both evaluation and organizational capacity, are also critical. Under NPI, capacity has been built in both areas, and we have found that the partners most capable of monitoring their capacity building activities are also the most successful in implementation.
| |
|
Learnings From Monitoring and Evaluation (M&E) Capacity Building Training With the South Asian Action Against Trafficking and Sexual Exploitation of Children Non-governmental Organization (NGO) Network
|
| Adele Liskov, United States Agency for International Development, aliskov@usaid.gov
|
| Molly Hageboeck, Management Systems International, mhageboeck@msi.inc.com
|
| Joan Goodin, Management Systems International, jgoodin@msi.inc.com
|
|
Adele Liskov, Chief of the Private and Voluntary Cooperation Division at USAID and Technical Officer of the Capable Partners NGO Strengthening Program, will present learnings from an evaluation training initiative for ATSEC network members to strengthen understanding and commitment to higher quality monitoring and evaluation in the five-country network. This work was undertaken with the objective of institutionalizing evaluation within each chapter's organizations. Discussion will focus on the effect of the training one and a half years later with organizations carrying out anti-trafficking programs related to awareness raising among leadership groups, rescue and repatriation, and shelter care for trafficking victims.
| |
|
Session Title: Strengthening Schools Through the use of Evaluation: Issues and Perspectives
|
|
Multipaper Session 905 to be held in Wekiwa 5 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Anne Cullen,
Western Michigan University, anne.cullen@wmich.edu
|
|
School Culture Effects on Evaluation Capacity Building in the Context of a Mathematics Intervention
|
| Presenter(s):
|
| Anica Bowe, University of Minnesota, bowe0152@umn.edu
|
| Stacy Karl, University of Minnesota, karlx028@umn.edu
|
| Lesa Covington-Clarkson, University of Minnesota, lesa.covingtonclarkson@spps.org
|
| Frances Lawrenz, University of Minnesota, lawrenz@umn.edu
|
| Abstract:
Our research centered on how school culture affected the process of evaluation capacity building in the context of implementing an intervention. The evaluation capacity building context was the Collaborative Evaluation Community (CEC) project, a partnership among elementary school teachers, graduate students, and faculty from the University of Minnesota. The intervention around which the capacity building was conducted was designed to improve mathematics problem solving. Two culturally different urban elementary schools were involved in the CEC project and the mathematics intervention. The cultural contexts of the two schools varied in terms of curriculum, instructional practices, teacher engagement, teacher buy-in, administrative support, emphasis on PDPs, and school AYP status. These cultural differences affected the opportunities for evaluation capacity building through the evaluation design, commitment of the teachers, and the use of the evaluation data. Relationships between school culture, evaluation capacity building and the effectiveness of interventions will be discussed.
|
|
A Participatory Approach to Defining and Measuring School-Based Case Management for Pregnant and Parenting Girls
|
| Presenter(s):
|
| Nancy Leland, University of Minnesota, nancylee@umn.edu
|
| Barb McMorris, University of Minnesota, mcmo0023@umn.edu
|
| Rebecca Koltes, Broadway High School, becky.koltes@mpls.k12.mn.us
|
| Barbara Kyle, Minneapolis Public Schools, barbara.kyle@mpls.k12.mn.us
|
| Mary Pat Sigurdson, Broadway High School, msigurds@mpls.k12.mn.us
|
| Heather Palenschat, University of Minnesota, pale0019@umn.edu
|
| Abstract:
Devoted exclusively to young mothers and their children, Broadway High School in Minneapolis, Minnesota, uses an intensive, one-on-one case management (CM) model to provide services to every enrolled student. In 2007, a 5-year grant was awarded by the Office of Adolescent Pregnancy Programs, as part of its nationwide CARE program, to implement and evaluate the Broadway model. Due to a lack of published literature on school-based CM and Broadway case managers' desire to be more focused and efficient, the CM team and evaluator commenced a year-long dialogue to define CM services at Broadway. This extended dialogue produced a number of exciting changes to Broadway's model, including: 1) a clear, shared definition of case management; 2) revisions of tools to capture CM services provided; 3) a more cohesive CM team; 4) professional development training requested by CMs; and 5) improved methods for measuring the impact of CM services.
|
|
Marketing of Science Teachers and Induction (MOSTI): An Eclectic Evaluation Approach to the Improvement of Science Teacher Recruitment, Education and Retention in the Middle School Grades
|
| Presenter(s):
|
| Bryce Pride, University of South Florida, pride@coedu.usf.edu
|
| Melinda Hess, University of South Florida, mhess@tempest.coedu.usf.edu
|
| Robert Potter, University of South Florida, potter@cas.usf.edu
|
| Abstract:
This paper will focus on lessons learned from two cohorts of teachers recruited for a program to supplement training of second career middle school science teachers involved in an alternative certification process. Taking an eclectic approach (Fitzpatrick, Sanders & Worthen, 2004), we evaluated the extent to which this program helped second career teachers prepare to be middle school science teachers and be retained as permanent teachers in the school system. To inform the report, quantitative and qualitative data were collected from surveys, content exams, observations, mentor logs and a focus group. Information and ideas for improvement were provided from the perspective of teachers, mentors and administrators. Multiple perspectives are used to gain an understanding of the effectiveness of program implementation and recommended suggestions for improvement. Findings have assisted MOSTI administrators in making decisions and in incorporating feedback from teachers and mentors about training sessions and needs for program improvement.
|
|
The Collaborative Development, Implementation and Impact of a Peer Quality Review System in Schools
|
| Presenter(s):
|
| Natalie Lacireno-Paquet, WestEd, npaquet@wested.org
|
| Sarah Guckenburg, WestEd, sgucken@wested.org
|
| Mary Cazabon, WestEd, mcazabo@wested.org
|
| Kristin Mallory, SIATech, kris.mallory@siatech.org
|
| Abstract:
This paper describes the collaborative development, implementation and early impact of a peer quality review system for SIATech (School for Integrated Academics and Technology) schools. This work builds on two years of formative evaluation work with SIATech, a network of charter schools for high school dropouts colocated on Job Corps centers in four states. The purpose of this initiative is overall school improvement by:
o Creating an effective Quality Review Process for SIATech Schools;
o Building the capacity of central and school staff to implement the Quality Review Process;
o Implementing the Quality Review Process for sites so that the system gains critical information needed for supporting improvement in each of the participating schools and each school engages fully in the process, learning from it, and taking appropriate action to improve;
o Creating a culture where school and central staff work together in a constructive way to make needed improvements.
|
| Presenter(s):
|
| Rebeca Diaz, WestEd, rdiaz@wested.org
|
| Abstract:
This presentation will discuss the role of social justice in the evaluation of school programs. The impetus for this topic stems from partnerships with personnel from school districts nationwide as they attempt to implement reforms in low-performing schools. This presentation will focus on the benefits and challenges of creating an evaluation that is collaborative in nature and which holds all stakeholders responsible for addressing social justice in our decision-making practices. This presentation will explore questions such as: To what extent is social justice a priority in reform initiatives in low-performing schools? What is the responsibility of the evaluator in maintaining a commitment to social justice in school reform programs?
|
| | | | |
|
Session Title: Out of Control? Selecting Comparison Groups for Analyzing National Institutes of Health Grants and Grant Portfolios
|
|
Panel Session 906 to be held in Wekiwa 6 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Christie Drew, National Institutes of Health, drewc@niehs.nih.gov
|
| Abstract:
Evaluators at the U.S. National Institutes of Health (NIH) are often called upon to assess the progress of a group of grantees or individuals who have received NIH support. To do this we select additional sets of grantees or individuals to serve as comparison groups. The purpose of this session is to explore recent approaches used to select meaningful comparison groups for analytical questions common at NIH and other science management environments. The examples are drawn from different NIH entities to provide a variety of contexts. The session focuses primarily on the methodology of comparison group selection rather than results of particular evaluations. Each speaker will briefly review their evaluation design, comparison group selection process, and analytical approach, paying particular attention to methodological challenges that affected the design or results. A discussion with the audience about the strengths and weaknesses of different approaches will follow the presentations.
|
|
Establishing a Comparison Set for Evaluating Unsolicited P01s at the National Institute of Environmental Health Sciences
|
| Christie Drew, National Institutes of Health, drewc@niehs.nih.gov
|
| Jerry Phelps, National Institutes of Health, phelpsj@niehs.nih.gov
|
| Martha Barnes, National Institutes of Health, barnes@niehs.nih.gov
|
|
The unsolicited P01 mechanism at the National Institute of Environmental Health Sciences (NIEHS) is intended to fund multi-project investigator initiated research. In general, P01s are expected to be "greater than the sum of their parts." P01 projects have been funded for a wide range of years (with several over 35 years long) on a diverse range of scientific topics. A typical P01 consists of approximately three sub-projects roughly equal in cost to an R01 grant. Is it valid to expect the P01s to have three times the publications and citations per year of funding compared to comparable R01s? What is the best set of R01s for comparison? To address these questions, a mathematical matching algorithm was developed to identify scientifically relevant R01s for the recently active P01 portfolio. Program officers assisted in the final selection of comparison R01s. Key challenges were the varying lengths of the P01s, the range of scientific topics addressed, and the unique nature of several P01s.
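The abstract does not describe the matching algorithm itself, so the following is only a hypothetical stand-in showing one common way to rank "scientifically relevant" comparison grants, here by TF-IDF text similarity of abstracts; the grant texts and method choices are placeholders, not the NIEHS approach.

```python
# Illustrative ranking of candidate comparison R01s by abstract similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

p01_abstracts = ["mechanisms of arsenic toxicity in lung tissue ..."]   # placeholder text
r01_abstracts = [
    "arsenic exposure and pulmonary disease ...",
    "zebrafish models of cardiac development ...",
    "metal toxicants and respiratory epithelium ...",
]

tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(p01_abstracts + r01_abstracts)
sims = cosine_similarity(matrix[:1], matrix[1:]).ravel()

# rank candidate R01s; program officers would then vet the top matches
for score, abstract in sorted(zip(sims, r01_abstracts), reverse=True):
    print(f"{score:.2f}  {abstract[:40]}")
```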
|
|
|
It's A Small World After All: Describing and Assessing National Institutes of Health (NIH)-Funded Research in the Context of A Scientific Field
|
| Sarah Glavin, National Institutes of Health, glavins@mail.nih.gov
|
| Jamelle Banks, National Institutes of Health, banksj@mail.nih.gov
|
| Paul Johnson, National Institutes of Health, pjohnson@mail.nih.gov
|
|
Although the U.S. National Institutes of Health (NIH) is the largest supporter of biomedical research in the world, most published research is not supported by NIH. Recent evaluations of NIH research centers programs have compared publications of NIH-supported researchers with publications across the same scientific field. Such an approach can allow the NIH to answer questions such as: (1) how do the specific research types and subareas supported by NIH compare with research being published in the field generally? (2) what journals are used to disseminate research results from the NIH program, and how do those journals compare with those used in the field as a whole? and (3) what other organizations are supporting research in this area, and how does their research compare with the NIH program? However, identifying "the world" as a comparison group is a challenge. The presentation offers considerations for implementing this approach, including searching and sampling strategies and issues of how to interpret the results of the comparisons.
| |
|
NIH Loan Repayment Program: Applying Regression Discontinuity to Assess Program Effect
|
| Milton Hernandez, National Institutes of Health, mhernandez@niaid.nih.gov
|
| Laure Haak, Discovery Logic, laurelh@discoverylogic.com
|
| Rajan Munshi, Discovery Logic, rajanm@discoverylogic.com
|
| Matt Probus, Discovery Logic, mattp@discoverylogic.com
|
|
NIH's Loan Repayment Program (LRP) repays educational loan debt for individuals who commit to conduct biomedical or behavioral research. A recent evaluation examined whether LRP awards are effective in their broad purpose of recruiting and retaining early-career health professionals in biomedical research careers. New LRP applicants between FY2003 and FY2007 were defined as the study cohort. Applicants and awardees on the "funding bubble" (the part of the distribution where applicants have a 50% chance of being funded or not funded) were identified. Regression discontinuity design was then used to examine the impact of receiving an LRP on subsequent involvement in the extramural NIH-funded workforce for funded and not funded LRP applicants. Subsequent involvement outcomes measured for the study included grant applications, grant awards, participation in grants in roles other than the Principal Investigator, and publications. This presentation will focus mainly on the strengths and weaknesses of the methods used to define the "funding bubble" and apply the regression discontinuity approach.
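A minimal sketch of a sharp regression-discontinuity estimate is given below, assuming a centered application score as the running variable and fabricated outcomes; it illustrates the general technique only, not the LRP evaluation's actual model, cutoff, or data.

```python
# Sharp RD sketch: the coefficient on `funded` estimates the effect at the cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
score = rng.uniform(-10, 10, 1000)          # centered application score (0 = cutoff)
funded = (score >= 0).astype(int)           # sharp assignment at the cutoff
outcome = 0.3 * score + 2.0 * funded + rng.normal(0, 1, 1000)  # e.g., later grant activity

X = sm.add_constant(np.column_stack([score, funded, score * funded]))
fit = sm.OLS(outcome, X).fit()
print(fit.params)                           # coefficient on `funded` ~ treatment effect at cutoff
```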
| |
|
The Use of Propensity Scores in a Longitudinal Science Study of Minority Biomedical Research Support From the National Institute of General Medical Sciences
|
| Mica Estrada-Hollenbeck, California State University San Marcos, mestrada@csusm.edu
|
| Anna Woodcock, Purdue University, awoodcoc@psych.purdue.edu
|
| David Merolla, Kent University, dmerolla@ken.edu
|
| P Wesley Schultz, California State University San Marcos, psch@csusm.edu
|
|
The National Institute of General Medical Sciences (NIGMS) has promoted Minority Biomedical Research Support through a variety of mechanisms for many years. This presentation reports on a longitudinal evaluation of the Research Initiative for Scientific Enhancement (RISE). The goal of the evaluation was to determine the efficacy of the RISE program. A key challenge in quasi-experimental studies is estimating causal effects of the program because random assignment of participants to programs is not possible. Generating propensity scores allows researchers to correct for selection bias by comparing treated subjects only to comparable control subjects, thereby achieving unbiased estimates of treatment effects. This paper will describe how propensity scores provide a flexible method for determining intervention effects, and describe how scores were calculated from a variety of relevant predictor variables (e.g., gender, ethnicity, GPA, intention to stay in the sciences, etc.), which were then used to select a matched sample comparison group of non-RISE participants.
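As a hedged illustration of the propensity-score step (not the RISE evaluation's code), the sketch below estimates participation propensities from two invented covariates and pairs each treated case with the nearest non-participant.

```python
# Toy propensity-score matching: logistic model for treatment, then
# nearest-neighbor matching on the estimated score (with replacement).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 400
gpa = rng.uniform(2.0, 4.0, n)
intent = rng.uniform(1, 5, n)                       # intention to stay in the sciences
treated = rng.binomial(1, 1 / (1 + np.exp(-(gpa - 3 + 0.2 * intent))))

X = np.column_stack([gpa, intent])
pscore = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

controls = np.flatnonzero(treated == 0)
matches = {}
for i in np.flatnonzero(treated == 1):
    matches[i] = controls[np.argmin(np.abs(pscore[controls] - pscore[i]))]
print(f"{len(matches)} treated cases matched to comparison cases")
```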
| |
|
Session Title: Outcome and Impact Evaluations: Brazil, Korea, and European Union
|
|
Multipaper Session 907 to be held in Wekiwa 7 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Neville Reeve,
DG Research - European Commission, neville.reeve@ec.europa.eu
|
|
Meta-Analysis of Economic Studies of Public Research and Development (R&D) Programs
|
| Presenter(s):
|
| Jiyoung Park, Korea Institute of Science and Technology Evaluation and Planning, jypak@kistep.re.kr
|
| Abstract:
Economic analysis is an important element in decisions about investment in large-scale R&D programs. Such a study should be performed in a comprehensive and systematic way; its purpose is to analyze the impact of a program by comparing the program's benefits with its projected cost. For programs whose effects can be measured in monetary terms, benefits are analyzed from an efficiency perspective; for programs whose effects cannot practically be measured in monetary terms, the program's meaning and impacts are described quantitatively and qualitatively and analyzed in terms of social and economic efficiency. Given that national R&D programs vary in their characteristics and goals, economic analysis needs to be conducted against program-specific standards.
In this study, several economic studies are reviewed with respect to the methodologies and scenarios used, and appropriate approaches to economic analysis of R&D programs are proposed.
|
|
Multidimensional Ex Post Evaluation of Research and Development (R&D) Programs: The Case of an Oil Refining R&D Program in Brazil
|
| Presenter(s):
|
| André Furtado, Department of Science and Technology Policy Brazil, furtado@ige.unicamp.br
|
| Adalberto Azevedo, Department of Science and Technology Policy Brazil, adalba@ige.unicamp.br
|
| André Rauen, Department of Science and Technology Policy Brazil, andrerauen@ige.unicamp.br
|
| Edilaine Camillo, Department of Science and Technology Policy Brazil, edilaine@ige.unicamp.br
|
| Abstract:
This paper presents the application of the ESAC R&D evaluation methodology to a technology program in the Brazilian oil industry. The ESAC methodology was developed by researchers at the Department for Science and Technology Policy of the University of Campinas. This methodology evaluates ex-post impacts through a two-step process: first, by building up an impact structure in each of four dimensions (economic, social, environmental, and capabilities); second, by applying questionnaires based on the identified impacts to program participants in order to gather information from the different groups affected by the program. Each dimension comprises a list of indicators, each of which corresponds to a question. The questions are answered on a Likert scale, allowing impacts to be measured and presented quantitatively. The results of the pilot evaluation of an oil refining program carried out by Petrobras, Brazil's major oil company, are analyzed in the paper.
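To make the quantitative presentation concrete, a toy example of averaging Likert ratings into per-dimension impact scores is given below; the items, ratings, and dimension labels are placeholders, not the ESAC instrument.

```python
# Averaging Likert ratings (1-5) into one score per impact dimension.
import numpy as np

dimensions = {
    "economic":      [4, 5, 3, 4],    # one respondent's ratings per indicator
    "social":        [3, 3, 4],
    "environmental": [2, 3, 2, 3],
    "capabilities":  [5, 4, 4],
}

for name, ratings in dimensions.items():
    print(f"{name:14s} mean impact = {np.mean(ratings):.2f}")
```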
|
|
Trends and Evolution of the Information and Communication Technology (ICT) Research and Deployment Landscape in Europe
|
| Presenter(s):
|
| Nicholas Vonortas, George Washington University, vonortas@gwu.edu
|
| Franco Malerba, Luigi Bocconi University, franco.malerba@unibocconi.it
|
| Nicoletta Corrocher, Luigi Bocconi University, nicoletta.corrocher@unibocconi.it
|
| Abstract:
This paper will present the results of a European evaluative study in the field of information and communication technology (ICT). The main objective of this evaluative study is to assess how effectively ICT research and technology deployment activities are being exploited in systems of innovation at the regional level. In particular, the objectives of the study are to assess whether and to what extent ICT research and technology development (RTD) activities in the sixth and seventh Framework Programs and ICT policy support program activities are integrated into the eco-systems of innovation and deployment of Information Society initiatives at the regional level, thus helping strengthen European competitiveness. In addition, the study examines whether ICT RTD activities are linked to knowledge and innovation networks such as INTERREG and URBACT networks, and are bridged with pre-commercial public procurement in policy target areas such as eGovernment, eHealth, and eInclusion.
|
| | |
|
Session Title: Real World Challenges for Mental Health and Substance Abuse Evaluation Implementation and Analysis
|
|
Multipaper Session 908 to be held in Wekiwa 8 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
|
| Chair(s): |
| Diana Seybolt,
University of Maryland Baltimore, dseybolt@psych.umaryland.edu
|
|
Maximum Individualized Change Analysis: Evidence Supporting its Use
|
| Presenter(s):
|
| Eric Brown, University of Washington, ricbrown@u.washington.edu
|
| Roger Boothroyd, University of South Florida, boothroy@fmhi.usf.edu
|
| Abstract:
At the 2007 American Evaluation Association Conference (Boothroyd, Banks, & Brown, 2007), we described an analytic procedure, the maximum individualized change score method (Boothroyd, Banks, Evans, Greenbaum, & Brown, 2004), which we argued was potentially superior to traditional MANOVA approaches in making program comparisons in which substantial heterogeneity existed among the clients served, the services they received, and the outcomes they attained. We have since received funding from the National Institute of Mental Health (R03MH082445-01) to systematically assess the merits of this approach under various 'real world' data assumptions. The presentation will describe this analytic approach and summarize the findings from various simulation studies as well as the application of this method in a secondary analysis of data from a multi-site federally-funded study examining the impact of mental health managed care.
|
|
Examining the Context and Determining Evaluation Questions in Alcohol and Drug Prevention Programs
|
| Presenter(s):
|
| Robert LaChausse, California State University San Bernardino, rlachaus@csusb.edu
|
| Abstract:
Evaluators have long been encouraged to involve stakeholders in program evaluation activities to increase evaluation 'buy in' and subsequent use of evaluation information. An examination of the context in which the program operates can influence the type of evaluation questions and methods used. Many approaches to program evaluation emphasize the importance of examining the context but fail to articulate how this should be conducted. The use of stakeholder interviews and checklists can be useful to evaluators in understanding program context and can improve how program evaluations are planned and conducted. An innovative approach to examining context and selecting evaluation questions will be presented. This paper will increase evaluators' competency in examining the context of alcohol and drug prevention programs and determining evaluation questions and methods while fostering evaluation utilization. An example from a drug prevention program for an ethnically diverse population will be used to illustrate these concepts and lessons learned.
|
|
Coping With the Quasi in Your Quasi-experimental Evaluation: Lessons Learned From a Mental Health Program Evaluation With Consumers and Case Managers
|
| Presenter(s):
|
| Lara Belliston, Ohio Department of Mental Health, bellistonl@mh.state.oh.us
|
| Susan Missler, Ohio Department of Mental Health, misslers@mh.state.oh.us
|
| Abstract:
In an effort to make mental health care consumer- and family-driven, Ohio is evaluating a program for mental health consumers and case managers that utilized outcomes feedback to foster more person-centered, collaborative, empowering and recovery-oriented treatment planning and case management. This evaluation study is funded via the SAMHSA Mental Health Transformation State Incentive Grant (MH-TSIG). The quasi-experimental wait-list-control evaluation design included case managers and consumers in four mental health agencies. Due to the nature of research in the real world, many quasi-experimental studies experience challenges with recruitment, attrition or mortality, and integrating archival data. Results will be presented from the evaluation and will show how statistical analyses may be used to adjust for threats to validity.
|
|
Longitudinal Examination of Facilitator Implementation: A Case Study Across Multiple Cohorts Of Delivery
|
| Presenter(s):
|
| Cady Berkel, Arizona State University, cady.berkel@asu.edu
|
| Melissa Hagan, Arizona State University, melissa.hagan@asu.edu
|
| Sharlene Wolchik, Arizona State University, wolchik@asu.edu
|
| Tim Ayers, Arizona State University, tim.ayers@asu.edu
|
| Sarah Jones, Arizona State University, sarahjp@asu.edu
|
| Irwin Sandler, Arizona State University, irwin.sandler@asu.edu
|
| Abstract:
Evaluation studies of evidence-based programs rarely include important implementation information that would enable valid conclusions about program outcomes. It is assumed that facilitators will fall victim to 'program drift,' with lower fidelity and greater adaptations over time, and that program effects will weaken as a result (Kerr et al., 1985). The Concerns-Based Adoption Model (CBAM) has been used to provide a framework for understanding facilitator implementation over time (Ringwalt et al., under review). The framework predicts that facilitators' implementation will become more fluid and responsive to the needs of participants with repeated delivery. We present results of an observational study of one facilitator's implementation across five waves of the Family Bereavement Program (FBP). Fidelity and adaptations will be coded by two coders. Based on the CBAM framework, the authors hypothesize that fidelity and responsive adaptations will increase as the facilitator becomes more familiar with the program content.
|
| | | |
|
Session Title: Evaluating the Effectiveness of Policy and Advocacy Coalitions
|
|
Panel Session 909 to be held in Wekiwa 9 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Astrid Hendricks, The California Endowment, ahendricks@calendow.org
|
| Abstract:
Building successful coalitions is widely recognized as a way to leverage nonprofit performance, especially in the area of advocacy. This session will provide the insights of a funder, an evaluator, and a coalition member on evaluating and facilitating coalitions. Panelists will present first-hand experience as well as present information about what works in effective coalitions derived from an extensive review of academic research that draws from business, community psychology, political science, sociology and management sources. The session will discuss ways for measuring, building and sustaining effective coalitions.
|
|
Discrete Indicators of Coalition Effectiveness: A Model and Tool
|
| Jared Raynor, TCC Group, jraynor@tccgrp.com
|
|
This presentation will explain a framework for evaluating coalition capacity and "readiness" to maximize the scarce resources that go into coalition building. The framework includes a high-level self-assessment tool that can be used by coalitions and funders to do quick "check-ups" on coalition health against best practices and can serve as the basis for evaluating the capacity element of effective coalitions, one aspect of an effective coalition evaluation.
|
|
|
A Funder's Perspective on Evaluating the Capacity of Coalitions: What to Look For, When and Why?
|
| Gigi Barsoum, The California Endowment, gbarsoum@calendow.org
|
|
The California Endowment has been a leader in evaluating advocacy efforts, and its new strategic approach focused on communities makes the issue of advocacy coalitions extremely relevant. TCE has funded numerous coalitions; some have worked and some have had less success. As a funder, TCE will discuss how using the framework can help evaluate the effectiveness of coalitions on both the front and back ends: deciding when and how to invest in a coalition and what the impact of that funding was.
| |
|
Session Title: Perspectives on Effectiveness in Teaching and Learning Evaluation
|
|
Multipaper Session 910 to be held in Wekiwa 10 on Saturday, Nov 14, 3:30 PM to 5:00 PM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Chair(s): |
| Vanessa Dennen,
Florida State University, vdennen@fsu.edu
|
|
Building Evaluation Capacity Among Community and Health Sector Workers in New Zealand
|
| Presenter(s):
|
| Pauline Dickinson, Massey University, p.m.dickinson@massey.ac.nz
|
| Jeffery Adams, Massey University, j.b.adams@massey.ac.nz
|
| Abstract:
The funders of community-based initiatives are increasingly emphasizing the importance of evaluation to enhance the effectiveness of programs/projects. While much evaluation is professional and external, there are expectations that workers in organizations will undertake evaluation of their programs/projects. However, one impediment is the limited understanding of evaluation among workers. We describe an initiative from New Zealand which aims to increase evaluation capacity through training courses, support to individual workers and to community organizations. We report on process and outcomes evaluation findings of a core part of the intervention (3-day Easy Evaluation course), as well as on formative issues that underpin the design of the workshops and the wider capacity-building intervention. Case studies illustrate the impact of the intervention. Overall, our assessment is that while the training is successful in building knowledge and confidence, building personal capacity alone is not sufficient to enable workers to complete evaluations of their programs/projects.
|
|
Training Critical Consumers: Reframing an Introduction to Evaluation Course
|
| Presenter(s):
|
| Vicki Schmitt, University of Alabama, vschmitt@bamaed.ua.edu
|
| Aaron Kuntz, University of Alabama, akuntz@bamaed.ua.edu
|
| Abstract:
Educational research programs often include some coursework in evaluation; however, these courses tend to be limited to a single course that provides only a basic introduction to the field of evaluation. Given the limitations associated with the single-course delivery, this paper proposes a 'critical consumers' approach to evaluation training where the focus shifts from training students to become evaluators to educating students to critically engage with the evaluation methods, design, analysis and interpretation they may encounter in the future. Participants are members of an executive education doctorate program, the majority of whom work as educational administrators. In these roles, they are often asked to solicit, engage with, and interpret the findings of external evaluations. By helping students learn the skills associated with good evaluative practice as well as the interpretation and implementation of evaluative findings, the goal shifts from training students to become evaluators to students becoming 'critical consumers' of evaluation.
|
|
How to Apply Learning Theories for Designing and Delivering an Effective Evaluation Course?
|
| Presenter(s):
|
| Koralalage Jayaratne, North Carolina State University, jay_jayaratne@ncsu.edu
|
| Abstract:
Preparing graduate students as future evaluators is important for the continued growth of the evaluation profession. This important educational purpose can be achieved if we deliver quality evaluation courses that help graduate students meet their educational expectations. The quality of evaluation courses depends on the quality of planning and delivering instruction, and planning and delivering instruction are guided by learning theories. This paper discusses learning theories and their implications for enhancing teaching, and presents a framework for designing and delivering effective evaluation courses. This presentation contributes to evaluation practice by integrating teaching theories into evaluation course development. The paper is practically significant for educators who teach evaluation courses.
|
|
Zen and the Art of Evaluation Practice: Reflections From a Novice Evaluator
|
| Presenter(s):
|
| Judith Sunderman, University of Illinois, jsunderm@illinois.edu
|
| Abstract:
This paper explores the process of learning about evaluation practice and the effects of that experience on the novice evaluator. PhD students are normally taught theory and told about reality; opportunities to practice thoughtful integration of the two are often limited. The University of Illinois at Urbana-Champaign offers a full-scale evaluation practice experience, in a group-oriented evaluation practicum. Following the practicum, I directed my own year-long evaluation project, with faculty supervision. This project gave me the opportunity to apply academic and technical knowledge while practicing nontechnical skills that go beyond theory. Highlights of that experience are presented in an autobiographical account: the diary of a single experience, my own. This paper provides insights on the learning-experiencing cycle and provides unique perspectives for budding evaluators, faculty who want to enhance practical experiences for their students, and experts working with new professionals in the field.
|
|
A Stakeholder-driven Process for Selecting Evaluation Questions
|
| Presenter(s):
|
| Mark Hansen, University of California Los Angeles, hansen.mark@gmail.com
|
| Abstract:
There are innumerable evaluative questions that could be asked concerning any program. Developing a program theory or logic model can help to clarify some of the options by identifying key program activities and intended outcomes. However, stakeholders may still struggle with the task of focusing the evaluation design. The purpose of this paper is to describe a process to help stakeholders identify and prioritize evaluation questions. Central to this process are considerations of utility, perceived importance, and feasibility. An example of the application of this process in the development of an evaluation plan will be provided, along with a discussion of the benefits and limitations of the approach.
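One possible way to operationalize such a prioritization, offered purely as an assumed illustration rather than the author's process, is a simple weighted scoring sheet over utility, importance, and feasibility:

```python
# Hypothetical scoring sheet: stakeholder ratings (1-5) per criterion, weighted
# totals used to guide (not dictate) which questions the evaluation pursues.
candidates = {
    "Did participants gain the intended skills?":       {"utility": 5, "importance": 4, "feasibility": 4},
    "Were activities delivered as planned?":            {"utility": 4, "importance": 3, "feasibility": 5},
    "What long-term outcomes can be attributed to it?": {"utility": 5, "importance": 5, "feasibility": 2},
}
weights = {"utility": 0.4, "importance": 0.4, "feasibility": 0.2}

ranked = sorted(candidates.items(),
                key=lambda kv: sum(weights[c] * kv[1][c] for c in weights),
                reverse=True)
for question, scores in ranked:
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{total:.1f}  {question}")
```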
|
| | | | |