|
Session Title: Advocacy Evaluation: Identifying and Using Interim Outcomes to Tell the Whole Story
|
|
Panel Session 537 to be held in Panzacola Section F1 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Advocacy and Policy Change TIG
|
| Chair(s): |
| Kathy Brennan, Innovation Network Inc, kbrennan@innonet.org
|
| Abstract:
Measuring an advocacy or policy change effort solely in terms of win or loss is neither useful for telling its full story nor helpful in devising the next campaign or deciding how to allocate funding. Therefore, advocates, funders, and evaluators alike are increasingly interested in understanding the interim, shorter-term outcomes of such efforts and how they may link to longer-term successes. This session will discuss interim measures from a foundation perspective and a nonprofit perspective, using examples that illustrate why such indicators are needed and drawing on what current research has to say about them.
|
|
Using Interim Indicators to Measure Success of Advocacy Efforts: Examples From the United States Human Rights Fund Project
|
| Kathy Brennan, Innovation Network Inc, kbrennan@innonet.org
|
|
Kathy will use examples from her current work as an evaluator with a group of advocacy grantees to demonstrate the need for interim indicators in the field and will show a framework she is using with these grantees to think about interim indicators.
|
|
|
A Philanthropic Perspective on Assessing Progress Towards Policy Change
|
| Kristi Kimball, William and Flora Hewlett Foundation, kkimball@hewlett.org
|
|
Kristi will give the philanthropic perspective on why interim indicators are so important to understanding policy change efforts. She will discuss some of the work the William and Flora Hewlett Foundation is currently doing to better understand which indicators are meaningful.
| |
|
The Advocacy Progress Planner and Interim Outcomes: Lessons From Users of the APP Tool
|
| David Devlin-Foltz, Aspen Institute, ddf@aspeninst.org
|
|
David will talk about the Advocacy Progress Planner tool (http://planning.continuousprogress.org/) he developed, the research behind it, and how it helps advocates determine interim indicators. He will give examples of how advocates have used the tool to develop interim indicators.
| |
|
Session Title: Starting and Succeeding as an Independent Evaluation Consultant
|
|
Panel Session 538 to be held in Panzacola Section F2 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Independent Consulting TIG
|
| Chair(s): |
| Amy Germuth, EvalWorks LLC, agermuth@mindspring.com
|
| Abstract:
Independent consultants will share their professional insights on starting and maintaining an independent evaluation consulting business. Panelists will describe ways of building and maintaining client relationships and share their expertise on initial business set-up and the lessons they have learned. Discussions will include the pros and cons of having an independent consulting business, the various types of business structures, methods of contracting and fee setting, and the personal decisions that come with having your own business. Panelists will also examine some consequences of conducting independent evaluation consulting in diverse settings. The session will include ample time for audience members to pose specific questions to the panelists.
|
|
Moving (and Shaking): From Employee to Consultant
|
| Jennifer Williams, J E Williams and Associates LLC, jew722@zoomtown.com
|
|
Dr. Jennifer E. Williams is President and Lead Consultant of J. E. Williams and Associates, an adjunct professor, a licensed counselor, and an independent consultant. She has extensive experience conducting education, social, and market research and program evaluation. She will share her experience of moving from being an employee to being a consultant and the impact it has had on her both personally and professionally.
|
|
|
Staying Afloat in Turbulent Waters
|
| Kathleen Haynie, Kathleen Haynie Consulting, kchaynie@stanfordalumni.org
|
|
Dr. Kathy Haynie has directed Kathleen Haynie Consulting (specializing in educational evaluation) since 2002. In this time of slashed budgets and constrained funding opportunities, many of us fear for our viability as independent consultants in the evaluation field. Dr. Haynie will discuss her challenges and strategies during this economic downturn: how she continued to expand her company and maintain her core value to clients during this time. She will discuss topics such as smart risk-taking, professional growth, proposal writing, and identifying your "widget".
| |
|
Getting Started: What Questions Do I Need to Ask and Answer Along the Way
|
| Judah Viola, National-Louis University, judah.viola@nl.edu
|
|
Dr. Judah J. Viola is an assistant professor of community psychology at National-Louis University in Chicago. He has been working part-time as an independent evaluation consultant for the past six years while maintaining his ties to academia. He consults with a variety of school systems, museums, and small non-profits in the Midwest. He recently wrote "Consulting and evaluation with community based organizations: Tools and strategies to start & build a practice" (published in June 2009). His presentation will focus on the questions new consultants need to ask and answer for themselves before they start their businesses.
| |
|
Sharing the Fun
|
| Amy Germuth, EvalWorks LLC, agermuth@mindspring.com
|
|
Dr. Amy Germuth, one of four co-founders of Compass Consulting Group, LLC, will discuss starting an independent evaluation consulting firm. Since its founding in 2003, Compass Consulting Group has grown by one additional person and tripled its revenue. The firm has moved from the local/state sphere to conducting program evaluations at the national level, working with such groups as the US Education Department, Westat, the Pew Foundation, and the Bill and Melinda Gates Foundation. Her presentation will focus on her experience sharing a business with three others, including potential issues, benefits, and decisions to consider, as well as ways in which, as a group, they were better able to grow their business.
| |
|
Traveling and Working: International Evaluation Consulting - One Woman's Perspective
|
| Tristi Nichols, Manitou Inc, tnichols@manitouinc.com
|
|
Dr. Tristi Nichols is a program evaluator and owner of a sole proprietorship consulting business. Her work focuses primarily on international issues, which provides a unique lens through which to view independent consulting. Her reflections about consulting, international travel, the types of decisions she makes, and their impacts on her professionally and personally as a wife and mother will be of interest to novice, veteran, or aspiring independent consultants.
| |
|
Reflections From 30 Years of Evaluation Experience
|
| Mary Ann Scheirer, Scheirer Consulting, maryann@scheirerconsulting.org
|
|
Dr. Mary Ann Scheirer has been an evaluator for three decades, working in a variety of settings including higher education, government agencies, large consulting firms, and now, independent consulting. Her presentation will focus on how and why she moved into independent consulting and the lessons learned from this move. She will provide a contrasting perspective, as her move came after many years of service in multiple organizations.
| |
|
Session Title: A Multi-Case Study of Organizational Capacity to Do and Use Evaluation
|
|
Multipaper Session 541 to be held in Panzacola Section G1 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Evaluation Use TIG
and the Organizational Learning and Evaluation Capacity Building TIG
|
| Chair(s): |
| J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
|
| Discussant(s): |
| J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
|
| Abstract:
We know more and more about evaluation capacity building (ECB), but our knowledge base is limited by a paucity of empirical research and a failure to integrate the capacity to do evaluation and the capacity to use evaluation into a coherent framework for understanding. In this session original findings will be presented from diverse case organizations participating in a multi-case study. Research at each case organization was guided by a common conceptual framework and sought to:
1. Clarify the nature of the capacity to do and use evaluation.
2. Understand the factors and conditions influencing the integration of evaluation into the organizational culture.
The multi-case study will be introduced by Cousins, followed by highlights of each of the six case studies presented by the respective researchers. The session will conclude with cross-case analyses presented by Cousins. Implications for ongoing research and practice will be discussed.
|
|
Evaluation Capacity at Human Resources and Social Development Canada
|
| Robert Lahey, REL Solutions, relahey@rogers.com
|
| Isabelle Bourgeois, National Research Council Canada, isabelle@storm.ca
|
| J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
|
|
The organization studied in this case is one of the largest federal departments in Canada, responsible for a wide range of social programs aimed at a variety of clients and stakeholders (special needs, children, women, aboriginals, etc.). Many of its programs are grants and contributions programs that are managed by the department but delivered by third-party organizations, or that are interdepartmental in nature. The organization was of high interest to us because it is probably the federal department with the longest history of formalized evaluation (some 30+ years) and has the largest internal evaluation unit in the federal government.
|
|
Evaluation Capacity at the Canadian Cancer Society
|
| Steve Montague, Performance Management Network, steve.montague@pmn.net
|
| Anna Engman, , afengman@gmail.com
|
|
The Canadian Cancer Society (CCS), founded in 1938, is the largest health charity in Canada. It raises in excess of $180 million per year through various fundraising activities and special events. The funds are spent on research, information, prevention, advocacy, and support services targeted at reducing cancer incidence and mortality rates and enhancing the quality of life of those living with cancer. The CCS does not have an internal team dedicated to evaluation but is unique among our cases for two reasons. First, it is a national not-for-profit organization, and second, it relies extensively on partnerships as an approach to developing evaluation capacity.
|
|
Evaluation Capacity at the Ontario Trillium Foundation
|
| Keiko Kuji-Shikatani, Ontario Ministry of Education and Training, kujikeiko@aol.com
|
| Catherine Elliott, University of Ottawa, elliott.young@sympatico.ca
|
|
The Ontario Trillium Foundation (OTF), one of Canada's leading grant-making foundations, is an agency of the Ontario Ministry of Culture. OTF's mission is "building healthy and vibrant communities throughout Ontario by strengthening the capacity of the voluntary sector, through investments in community-based initiatives." With grant volume increasing, OTF needs to continue to make judgment calls based on evidence and clear indications of impact. OTF requires evaluation from all agencies and programs it funds and uses evaluation for a variety of internal purposes. OTF is also actively examining the issue of evaluating Foundation performance, incorporating the Centre for Effective Philanthropy approach to examine issues such as board governance, defining outcomes strategically, and the importance of evaluation as an organizational learning tool.
|
|
Evaluation Capacity at the International Development Research Centre
|
| Courtney Amo, Social Sciences and Humanities Research Council of Canada, courtneyamo@hotmail.com
|
| J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca
|
|
IDRC is a large Canadian research funding agency in international development. It is a Crown corporation created in 1970 to help developing countries use science and technology to find practical, long-term solutions to social, economic, and environmental problems. We recruited IDRC because we knew it to be highly developed in terms of its capacity to do and use evaluation. IDRC recognizes the essential role that evaluation plays in managing research projects effectively and producing relevant results from the research process. The Centre's overall approach to evaluation gives equal priority to the use and adoption of evaluation findings obtained through the application of rigorous methods and to the development of evaluative thinking capacities through evaluation processes.
|
|
Evaluation Capacity at the Canadian Mental Health Association, Ottawa Branch
|
| Tim Aubry, University of Ottawa, taubry@uottawa.ca
|
|
The Ottawa Branch of the Canadian Mental Health Association (CMHA) is a non-profit organization that has been in operation since 1953. It began with a group of Ottawa volunteers concerned about the mental health needs of the citizens of Ottawa. Today the agency employs approximately 100 case workers and other mental health professionals and serves upwards of 700 clients on any given day. The organization has a longstanding commitment to evaluation, as evidenced by the integration of services and evaluation, the development and use of program logic models, and the development of partnerships with universities and community-of-practice stakeholders. CMHA is committed to evaluating all newly funded initiatives and, when funding for evaluation is not available, will often seek it elsewhere.
|
|
Evaluation Capacity at the Canada Revenue Agency
|
| Swee Goh, University of Ottawa, goh@telfer.uottawa.ca
|
| Catherine Elliott, University of Ottawa, elliott.young@sympatico.ca
|
|
The organization studied in this case is a large federal government agency in Canada, responsible for administering the Canadian Tax Act, which involves the collection of tax revenues and the enforcement of the act. Under the Director-General (DG) of Audit and Evaluation is a group of auditors and evaluators separated into two groups, each under a Director. The case was of high interest to us because of the agency's strong interest in ECB in the context of a single office of Audit and Evaluation. The evaluation office is comparatively small, and many of the evaluators are relatively new and lack strong experience in either evaluation or program management. Although there has been a concerted effort to recruit new evaluators, CRA has been experiencing difficulty retaining these new employees.
|
|
Session Title: Improving External Validity: A Realist Understanding of the Role of Context in Theories of Change
|
|
Panel Session 542 to be held in Panzacola Section G2 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Program Theory and Theory-driven Evaluation TIG
|
| Chair(s): |
| Patricia Rogers, RMIT University, patricia.rogers@rmit.edu.au
|
| Abstract:
Realist evaluation theory is based on the understanding that programs work differently in different contexts by generating different change mechanisms; therefore it can’t be assumed that applying the same intervention in a different context will result in the same outcomes. Understanding the mechanisms triggered by interventions in differing contexts, commonly expressed as ‘knowledge about what works for whom in what circumstances’, improves the external validity of the findings of a single evaluation or a synthesis of multiple evidence sources. This session presents the key features and processes of realist evaluation and realist synthesis with examples of each of these – a realist evaluation of a program to improve the health of farming families and a realist synthesis of early intervention programs. It suggests ways to respond to the challenges in both undertaking realist evaluations and supporting uptake of more complex findings.
|
|
Key Features and Processes of Realist Evaluation and Realist Synthesis
|
| Patricia Rogers, RMIT University, patricia.rogers@rmit.edu.au
|
|
Realist approaches ask "What is it about this program that works for whom in what circumstances?" The essential elements of the logic of the realist approach are outlined: what is meant by context, mechanisms, and outcomes, and the relationships between them.
Realist evaluation and realist synthesis processes draw on quantitative and qualitative evidence, successes and failures, and similarities and differences in context and outcomes. Techniques for conducting realist program evaluations and for generating theories of change through a realist synthesis of evaluations and research from multiple sites and sources are discussed, including examples of how to develop initial context-mechanism-outcome configurations.
Realist approaches generate knowledge of the if/then or yes/but kind rather than all-encompassing judgements and recommendations about what works. The external validity of, and the benefits and challenges of utilising, the lessons learnt about theories of change from realist evaluations and syntheses are explored.
|
|
|
Using Realist Evaluation to Explore Context and Inform Policy
|
| Kaye Stevens, RMIT University, kaye.stevens@rmit.edu.au
|
|
The Sustainable Farming Families program, a three-year intervention to improve the health and wellbeing of farming families, includes annual health checks, educational workshops, and individual action planning. The program has been implemented with varied farming sectors and was rolled out to dairy farmers at eleven sites in Victoria, Australia. The presentation discusses how we sought to make sense of puzzling patterns in the data by retrospectively exploring: why the pattern of results differed for dairy and broad acre farmers; differential effects for participants with different levels of risk at the beginning of the program, which were hidden in the analysis of average effects; and the impact of differences in implementation contexts across sites. The evaluation findings raise the possibility that different mechanisms may be operating in different contexts and identify questions that future evaluations could explore to deepen understanding of what works for whom, when, and why.
The evaluation explored the influence of context by:
1) assessing the external validity of findings from previous evaluations of bushfire case management services;
2) reflecting on the development and implementation of the service in different contexts - different communities, pre-existing service systems, and different individual needs; and
3) identifying lessons learnt about VBCMS interventions in different contexts to inform future programs.
| |
|
Using Realist Synthesis to Explore Context and Inform Policy
|
| Gill Westhorp, Community Matters, gill.westhorp@communitymatters.com.au
|
|
Realist synthesis (Pawson, 2006) extends the insights of realist evaluation into the realm of literature review. This paper will demonstrate how realist synthesis can be used to understand how and why programs generate different outcomes in different contexts. It will explain how realist synthesis differs from other review methodologies and demonstrate particular analytic processes.
The example used is early intervention programs. Many early intervention programs that "work" on average do not work for their most disadvantaged participants, and some appear to make things worse (including Early Head Start). The paper will examine how various features of context - at personal, family, program, organizational, and social levels - operate to influence the outcomes that programs generate for different sub-groups. It concludes by suggesting ways that the findings from realist evaluation and realist synthesis can be translated back to policy and funding bodies and to service providers.
| |
|
Session Title: International, Cross-cultural, and Multicultural Context in Evaluation: Lessons From Near and Far
|
|
Panel Session 543 to be held in Panzacola Section H1 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Presidential Strand
|
| Chair(s): |
| Rodney Hopson, Duquesne University, hopson@duq.edu
|
| Abstract:
The recent movement to incorporate culture in the evaluation field might be traced to two seminal New Directions for Evaluation special issues (Madison, 1992; Patton, 1985), which reflected on the international/cross-cultural perspectives and the domestic/multi-ethnic issues that evaluators face in their work. Since then, a flurry of discussions, conference meetings, proceedings, and published work (Frierson et al., 2002; Hood et al., 2004; Kirkhart, 1995; National Science Foundation, 2000, 2001; Orlandi, 1992; Thompson-Robinson et al., 2004) has given substantial consideration to the centrality of cultural context in evaluation. Although this work signals a paradigm shift in the field, very few opportunities exist to bring together the international/cross-cultural and the domestic/multi-ethnic perspectives. In moving beyond narrow culture-free assumptions, the culmination of this work in the last two decades suggests new ways of thinking about and bringing together these issues for the evaluation field.
This panel will discuss how notions of multiple international and domestic cultural contexts can influence and be integrated into the evaluation field. Selected evaluator scholars and practitioners, both emerging and seasoned, will share their contributions on cultural context in an effort to provide lessons from international and local evaluation theory and practice that advance this emerging discussion.
|
|
An Umbrella That Covers International and Domestic Culturally Complex Contexts
|
| Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu
|
|
Evaluators have multiple opportunities to learn about and struggle with the meaning of culture in their work. A synergistic relationship exists in the evaluation community in that evaluators who work internationally can learn from those situated in domestic settings and vice versa. Some theoretical perspectives focus on single dimensions of culture (such as race or gender); some theoretical perspectives emanated from international contexts (such as indigenous or post-colonial theories). In this presentation, Mertens will critically examine the common strands that unite those who struggle to understand how to improve both the theory and practice of evaluation in the context of cultural complexity.
|
|
|
International Perspectives in Evaluation Practice: Points of View on the Importance of Cultural Context
|
| Liliana Rodriguez-Campos, University of South Florida, lrodriguez@coedu.usf.edu
|
|
This paper presentation discusses how notions of multiple cultural contexts can influence and be integrated into the evaluation field. Rodriguez draws upon her work in Asia, North America, and South America to ask questions about cultural context and to illuminate it within international evaluation practice. Specifically, she shares the perceptions of selected evaluation practitioners on the contextual elements that might be considered when conducting an evaluation in specific international settings. The goal of this presentation is to further increase awareness of the need for appropriate evaluations for each unique group.
| |
|
The Value-added Dimension of Culturally Responsive Evaluation
|
| Melvin Hall, Northern Arizona University, melvin.hall@nau.edu
|
|
The AEA Guiding Principles for Evaluators posit a general but meaningful obligation of evaluators to "seek a comprehensive understanding of the important contextual elements of the evaluation." For evaluators whose stance or disposition is to be culturally responsive, context includes important matters of cultural orientation, self-image, and community values as they affect the way people make meaning out of their daily lives. For culturally or contextually responsive evaluators, great effort is devoted to understanding the program experience through the eyes of participants as well as program planners and implementers. But how do we know when cultural, community, or self-image issues are salient and must be included in the evaluation? How can we understand the contribution to participant experiences resulting from their relative vantage points and world views? This presentation will engage these questions, drawing upon the experiences gained in a multiyear NSF-funded culturally responsive evaluation project.
| |
|
Evaluative Attention to Culture
|
| Maurice Samuels, University of Illinois at Urbana-Champaign, msamuels@uiuc.edu
|
| Katherine Ryan, University of Illinois at Urbana-Champaign, k-ryan6@illinois.edu
|
|
A failure by evaluators to understand and recognize the value of culture in evaluations can have significant ramifications (a) for groups that have been traditionally underrepresented in evaluations, (b) for social programs achieving desired improvements and outcomes, and (c) for advancing the evaluation field in the 21st century. The authors of this paper argue for the importance of attending to culture, especially in educational evaluation. They begin by situating evaluative attention to culture within the evaluation field, elaborating on Culturally Responsive Evaluation, its roots in responsive evaluation, and its congruence with democratic principles and traditions in evaluation, including school-based internal evaluation and deliberative democratic evaluation. The paper will conclude with an illustration of one case example, School-Based Reflection, an approach to educational evaluation that attends to culture and also incorporates democratic principles.
| |
|
Session Title: Evaluation Practice in Curriculum and Instruction
|
|
Multipaper Session 545 to be held in Panzacola Section H3 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Assessment in Higher Education TIG
|
| Chair(s): |
| Thomas Horwood, ICF International, thorwood@icfi.com
|
| Discussant(s): |
| Thomas Horwood, ICF International, thorwood@icfi.com
|
|
Understanding the Context of Teaching English Learners: Evaluating a National Professional Development Project in a Teacher Education Program
|
| Presenter(s):
|
| Cindy Shuman, Kansas State University, cshuman@ksu.edu
|
| Martha Foote, Texas A&M University, martha_foote@tamu-commerce.edu
|
| Chris Green, Texas A&M University Commerce, chris_green@tamu-commerce.edu
|
| Abstract:
This paper will focus on the external evaluation activities of a National Professional Development grant at a Midwestern university. The project was designed to help bilingual, ESL, and general education teachers meet the needs of English Learners (ELs). To do this, one of the project's main goals is to infuse content about the teaching of ELs into courses taken by all undergraduates seeking K-12 teaching certificates in the university's teacher education curriculum. Participants in the project include university faculty, pre-service and in-service teachers in the teacher education program, and teachers and administrators in partner school districts. The paper will focus on the mixed-method design of the evaluation, used to collect baseline information as well as to document progress and measure impact as the project moves into its third year of implementation.
|
|
The Use of In-test Mnemonic Aids (a.k.a. Cheat-sheets) in Higher Education To Improve Student Learning and Performance
|
| Presenter(s):
|
| David Larwin, Kent State University Salem, dlarwin@kent.edu
|
| Karen Larwin, University of Akron, drklarwin@yahoo.com
|
| Abstract:
Few things are dreaded more than college-level exams. The present evaluation is the first attempt to investigate the impact of In-test Mnemonic Aids (IMAs), or what some call 'cheat sheets,' on students' learning and performance. The investigation explores several hypotheses that have been proposed to explain the potential benefit of using IMAs during examinations. The findings presented here suggest that IMAs can improve student performance and learning. Specifically, the creation of IMAs in preparation for an exam, rather than simply their use during an exam, seemed to be responsible for the beneficial effects of IMAs on the dependent measure. Consistent with the student engagement hypothesis, the preparation of IMAs provided students with an additional opportunity to explore and master course materials. A follow-up survey of student participants suggests that the use of IMAs gave students a greater sense of comfort and preparedness for their exam.
|
|
Studying Teacher Education Reconsidered: Contributions From Evaluation
|
| Presenter(s):
|
| Xiaoxia Newton, University of California Berkeley, xnewton@berkeley.edu
|
| Heeju Jang, University of California Berkeley, heejujang@berkeley.edu
|
| Nicci Nunes, University of California Berkeley, nunesn@berkeley.edu
|
| Elisa Stone, University of California Berkeley, emstone@berkeley.edu
|
| Rick Ayers, University of California Berkeley, rick-ayers@earthlink.net
|
| Abstract:
The No Child Left Behind (NCLB) legislation has created an unprecedented interest in using high-stakes testing to monitor the performance and accountability of teachers and schools. The ripple effect of this singular focus on using pupil test scores as measures of teacher effectiveness has also reached higher education institutions, which face increasing pressure to demonstrate their effectiveness through pupils' learning gains in the classrooms where their graduates teach. The link between the two (i.e., teacher candidates' learning in education programs and pupil learning in classrooms) implicit in the policy discourse suggests a straightforward one-to-one correspondence. In reality, the logical steps leading from what teacher candidates learned in their programs to what they do in classrooms that may contribute to their pupils' learning are anything but straightforward. In this paper, we illustrate how critical concepts from the scholarship of program evaluation have guided a team of evaluators and program staff in collaboratively designing a longitudinal evaluation approach to studying the process and impact of an undergraduate math and science teacher education program.
|
|
Program Assessment in Nonprofit Management Academic Programs: The State of the Field
|
| Presenter(s):
|
| Ann Breihan, College of Notre Dame of Maryland, abreihan@ndm.edu
|
| Marvin Mandell, University of Maryland Baltimore County, mandell@umbc.edu
|
| Abstract:
This empirical study will ascertain which approaches to program assessment are currently in use in nonprofit management programs in institutions of higher education in the US. The findings will be derived from surveys of randomly selected programs, stratified by program type (full graduate degree programs, graduate programs with concentrations in nonprofit management, and undergraduate majors), region, funding pattern (public and private institutions), and size. The data will be used to address two questions: what approaches to program assessment are currently in use, and what are the predictors of program assessment approaches?
|
|
Making Educational Engagement Visible: Toward Practical Means for Evaluating Engagement in Higher Education
|
| Presenter(s):
|
| Rick Axelson, University of Iowa, rick-axelson@uiowa.edu
|
| Arend Flick, Riverside Community College, arend.flick@rcc.edu
|
| Abstract:
Engagement has been widely regarded as an essential condition for student learning. There appears to be near unanimity in the teaching and learning literature that many of our educational system's ills could be cured by making learning more 'engaging' for students. It is far less clear, however, precisely what that means and how it could be accomplished. The multiple meanings of the term engagement and its rather sketchy linkages to learning theory make it difficult for practitioners to design and effect engaging learning environments. In this session, we propose conceptual frameworks for studying engagement in different teaching-learning contexts (e.g., adult education, communities of practice) and show how they can be represented in logic models. By incorporating theory-based engagement mechanisms in logic models, we aim to enhance discussions about engagement with stakeholders and facilitate efforts to accumulate findings about how engagement can be promoted across various educational contexts.
|
|
Session Title: Evaluation to Inform Learning and Adaptation in Grant Making Initiatives
|
|
Panel Session 546 to be held in Panzacola Section H4 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| Marilyn Darling, Signet Consulting & Research LLC, mdarling@signetconsulting.com
|
| Abstract:
Frequently, the initial hypotheses underlying a strategy that is intended to bring about change in a complex system are imperfect. The faster a grant maker can learn what is and is not working and improve its strategy, the greater its impact is likely to be. Evaluation can be one vital source of information to support this learning. This session will examine challenges, as well as promising practices, in using evaluation to inform learning and adaptation. Participants will hear and discuss stories about the experiences of Lumina Foundation for Education, the Ontario Trillium Foundation, and the David and Lucile Packard Foundation, all of which are part of a peer-learning group convened by Grantmakers for Effective Organizations to strengthen learning practices. The session will also present results of research by Signet Research & Consulting examining grant maker learning practices and identifying practices and tools that participants can test in their own organizations.
|
|
Making Learning Intentional at the Packard Foundation
|
| Gale Berkowitz, David and Lucile Packard Foundation, gberkowitz@packard.org
|
| Liane Wong, David and Lucile Packard Foundation, lwong@packard.org
|
|
This past year the Packard Foundation began a cross-foundation learning group to identify and share learning practices across the foundation's programmatic areas in order to accelerate our effectiveness. As one example, the Children's Health Insurance subprogram uses learning to inform and improve strategy effectiveness by continually tracking a set of indicators that are grounded in a theory of change and logic model and well validated in this field. The evaluation team works in partnership with grantees to learn from the front line what is and is not working, and shares this with grantees and other stakeholders through a variety of modalities. This presentation will examine how program staff, evaluators, and grantees use a variety of information, from indicators to anecdotal experience both within the Foundation and in the field, to improve strategy effectiveness.
|
|
|
Evaluation and Learning With Grantees at The Ontario Trillium Foundation
|
| Blair Dimock, The Ontario Trillium Foundation, bdimock@trilliumfoundation.org
|
| Patricia Else, The Ontario Trillium Foundation, pelse@trilliumfoundation.org
|
| Maja Saletto Jankovic, The Ontario Trillium Foundation, msjankov@trilliumfoundation.org
|
| Samantha Burdette, The Ontario Trillium Foundation, sburdette@trilliumfoundation.org
|
|
When it launched the Future Fund two years ago, the Ontario Trillium Foundation departed from its historic grant making approach. Aimed at building the capacity of Ontario's nonprofit sector, this initiative makes larger, longer-term grants to collaboratives, and entails more intensive engagement with grantees. Learning how to make this new approach to grant making work required a shift from the foundation's typical research and evaluation approach. Historically, the foundation has conducted impact research periodically, focusing on segments of its portfolio. Although this approach has enabled the foundation to discover ways to increase its impact, including how to increase grant making in aboriginal communities, it is not sufficient for the rapid learning necessary to succeed with a radically different approach. This presentation will explore how program and research staff work in partnership, with each other and with grantees, to assess whether and how this new way of doing business is working.
| |
|
Integrating Evaluation and Learning at Lumina Foundation for Education
|
| Mary Williams, Lumina Foundation for Education, mwilliams@luminafoundation.org
|
|
Achieving the Dream, a five-year-old initiative of Lumina Foundation for Education, is built on an elegant theory of change. A central hypothesis was: if we can create a culture of evidence within community colleges, then the colleges can identify achievement gaps, create and evaluate interventions to reduce those gaps, and institutionalize the strategies that prove effective. Five metrics were derived from the theory of change and tracked from the outset, providing baseline data and feedback on the implementation strategies colleges selected and experimented with. Although participating colleges agreed to track data and make them public, there have been challenges in building colleges' capacity to provide the data. The evaluation was narrower in scope than the initiative's goals and objectives, and was complemented by other ways of assessing strategy effectiveness. This presentation will explore how evaluation can be integrated with other methods of gauging results to inform learning and adaptation.
| |
|
Emergent Learning to Increase Strategy Effectiveness and Impact
|
| Marilyn Darling, Signet Consulting & Research LLC, mdarling@signetconsulting.com
|
|
Emergent Learning was created in the mid-1990s to address the obstacles organizations experience when trying to learn in the midst of complex challenges. It is designed to build the capacity of an organization or network to achieve intended results by continually testing and improving its strategies and the hypotheses that underpin them. With support from the David and Lucile Packard Foundation, Signet Research & Consulting, LLC conducted a research project framed by the question: "How can Emergent Learning practices and tools be adapted and instituted to enable a grant maker to increase the impact of its investments over the course of a grant program, avert failures through in-course correction, and increase the transparency of portfolio management?" Based on case studies of North American foundations, this presentation will explore insights concerning common challenges grant makers confront in using learning to increase their impact and practical approaches to addressing these challenges.
| |
|
Session Title: Advances in Measurement
|
|
Multipaper Session 548 to be held in Sebastian Section I2 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Quantitative Methods: Theory and Design TIG
|
| Chair(s): |
| Raymond Hart, Georgia State University, rhart@gsu.edu
|
|
Implicit Attitude Measures: Avoiding Social Desirability in Evaluations
|
| Presenter(s):
|
| Joel Nadler, Southern Illinois University at Carbondale, jnadler@siu.edu
|
| Abstract:
Implicit methods allow the assessment of automatic attitudes using indirect measures. Implicit measures of attitudes, or automatic reactions, have been researched extensively outside of evaluation. Implicit measures include word completion, response time measures, non-verbal behavior, and most recently the Implicit Association Test (IAT). Implicit results are often only weakly related to explicit (self-report) measures when social desirability concerning the attitude is involved. However, when there is no socially expected 'right' answer, there is a stronger relationship between the two methodologies. The advantage of implicit measures is that they are more resistant to deception and the effects of social desirability than self-report measures. Disadvantages include issues of construct validity and interpretation. Types of implicit methodologies will be reviewed with a specific focus on how implicit measures can add to traditional attitudinal measures used in evaluations. Theoretical application, practical concerns, and possible appropriate uses will be discussed.
|
|
The Validity of Self-Report Measures: Comparisons From Four Designs Incorporating the Retrospective Pretest
|
| Presenter(s):
|
| Kim Nimon, University of North Texas, kim.nimon@gmail.com
|
| Drea Zigarmi, Ken Blanchard Companies, drea.zigarmi@mindspring.com
|
| Abstract:
This study compared data resulting from four evaluation designs incorporating the retrospective pretest, analyzing the interaction effect of pretest sensitization and post-intervention survey format on a set of self-report measures. The validity of the self-report data was assessed by correlating results with performance measures. The study detected differences in measurement outcomes across the four designs: designs in which the posttest and retrospective pretest were administered as two separate questionnaires produced the most valid results, whereas designs in which the posttest and retrospective pretest were administered in a single questionnaire produced the least valid results.
|
|
Quantifying Impact and Outcome: The Importance of Measurement in Evaluation
|
| Presenter(s):
|
| Ann Doucette, George Washington University, doucette@gwu.edu
|
| Abstract:
To ignore the implications of measurement is tantamount to conceptualizing outcomes research as a house of cards, subject to the vagaries of measurement artifacts. This paper examines the measurement properties of the University of Rhode Island Change Assessment Scale (URICA), a scale addressing readiness to change behavior, characterized by four discrete stages ranging from resistance and ambivalence about engaging in treatment to behavioral changes and strategies to maintain change behavior and treatment goals. The URICA assumes a unidimensional construct in which individuals move back and forth across the four stages in an ordered fashion. IRT models will be used to examine the measurement properties of the URICA, using sample data from Project Match, a multi-site clinical trial examining patient/client-treatment interactions. The dimensionality of the URICA and its precision in assessing 'readiness to change' will be examined using both conventional factor analysis and bi-factor models. The assumptions of the measurement model will be tested, and the implications of model degradation will be discussed.
|
|
(Re)Defining Validity in Effectiveness Evaluation
|
| Presenter(s):
|
| Tanner LeBaron Wallace, University of Pittsburgh, twallace@pitt.edu
|
| Abstract:
This paper argues for a redefinition of validity in effectiveness evaluation. Integrating traditional validity typologies associated with experimentally designed evaluations, validity theory derived from the discipline of psychometrics, and validity theory emerging from within the community of evaluation scholars, this paper advances a definition in which validity is conceptualized as an argument about research procedures in terms of their ability to create 'ideal epistemic conditions' for investigating effectiveness, within a framework that considers the social consequences of effectiveness evaluation. First, the evolution of validity theory within psychometrics and evaluation is discussed. Next, three facets of a comprehensive validity argument are detailed: (1) the utility and relevance of the focus, (2) how diverse values are incorporated into and represented throughout the evaluation process, and (3) the ability of the evaluation to support the move from low-level to high-level generalizations. The paper ends with a discussion of the design and methods implications of this redefinition of validity.
|
|
Session Title: The Communities in Schools National Evaluation: Using a Comprehensive Evaluation Strategy to Understand the Value-Added of Integrated Student Supports
|
|
Multipaper Session 549 to be held in Sebastian Section I3 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Yvette Lamb, ICF International, ylamb@icfi.com
|
| Discussant(s): |
| Allan Porowski, ICF International, aporowski@icfi.com
|
| Abstract:
The Communities In Schools National Evaluation is a multi-level, multi-method study designed to identify the most successful strategies for preventing students from dropping out of school. This five-year study includes secondary data analyses, a quasi-experimental study, eight case studies, a "natural variation" study, an external comparison study, and three randomized controlled trials. At the conclusion of the evaluation, all of the findings will be compiled so that the overall impact of the CIS model of integrated student services can be analyzed and replicated. In this presentation, we present both methods and findings from the first four years of the evaluation and demonstrate how multiple components from a comprehensive evaluation design can be brought together to inform both policy and practice.
|
|
Communities In Schools National Evaluation: Year 2 Results for Student-Level Randomized Controlled Trials
|
| Christine Leicht, ICF International, cleicht@icfi.com
|
| Felix Fernandez, ICF International, ffernandez@icfi.com
|
| Heather Clawson, ICF International, hclawson@icfi.com
|
| Susan Siegel, Communities In Schools, siegels@cisnet.org
|
|
Communities In Schools, Inc. (CIS) is a nationwide initiative to connect community resources with schools to help at-risk students successfully learn, stay in school, and prepare for life. CIS is currently in the midst of a comprehensive, rigorous five-year national evaluation, culminating in a multi-site randomized controlled trial (RCT) to ascertain program effectiveness. In this presentation, we will draw from our experience working with public schools in Austin, TX, Jacksonville, FL, and Wichita, KS, and present our overall study design and the process involved in conducting a student-level RCT. Preliminary results from Year Two will also be discussed.
|
|
Communities In Schools: Elementary, Middle, and High School Models and Their Implications for Integrated Student Supports
|
| Kelle Basta, ICF International, kbasta@icfi.com
|
| Sarah Decker, ICF International, sdecker@icfi.com
|
| Jessica DeFeyter, ICF International, jjohnson@icfi.com
|
| Dan Linton, Communities In Schools, lintond@cisnet.org
|
|
Communities In Schools (CIS) is a nationwide initiative to provide community-based integrated student supports, interventions that improve achievement by connecting community resources with the academic and social needs of students. As part of a comprehensive national evaluation, we completed a quasi-experimental study that compared 602 CIS schools with 602 matched comparison schools on a number of outcomes, including academic achievement, attendance, dropout, and graduation rates. Through propensity score matching, we were able to obtain a precisely matched comparison group and achieve a highly rigorous study design. This presentation will report findings from the evaluation that specifically address the effect of integrated student supports at various school levels. It will include an assessment of how CIS models differ at the elementary, middle, and high school levels, and whether these disparities in service delivery contribute to differential outcomes for students.
|
|
The Communities In Schools Natural Variation Study: Providing Context for Performance
|
| Jing Sun, ICF International, jsun@icfi.com
|
| Julie Gdula, ICF International, jgdula@icfi.com
|
| Aikaterini Passa, ICF International, apassa@icfi.com
|
| Dan Linton, Communities In Schools, lintond@cisnet.org
|
|
Communities In Schools, Inc. (CIS) is a nationwide initiative to connect community resources with schools to help young people successfully learn, stay in school, and prepare for life. To measure the services delivered and coordinated by CIS and the variation in local program operations, ICF International, the National Evaluation Team, has developed a comprehensive, multi-level, multi-phased evaluation model. One evaluation component, the Natural Variation Study, examines the degree to which program models differ between high performing sites and lower performing sites (as defined by outcomes such as dropout and graduation). The Natural Variation analyses allow the evaluation team to exploit the natural variation between sites to determine specific contexts that are particularly conducive to the achievement of positive outcomes. Findings indicate that the degree and intensity of CIS services varies across different subgroups and outcomes.
|
|
The CIS External Comparison Study: Organizational Benchmarking to Improve Operations
|
| Yvette Lamb, ICF International, ylamb@icfi.com
|
| Susan Siegel, Communities In Schools, siegels@cisnet.org
|
|
In July 2008, a two-year study of five national youth-serving federations was completed as part of the CIS National Evaluation. The goal of the study was to improve and strengthen the CIS federation, as well as use the information gathered to contribute to the field of youth development. The organizations interviewed for this study include America's Promise Alliance, Boys & Girls Clubs of America, The Children's Aid Society, City Year, and YouthBuild USA. Five elements considered to be integral to successful implementation of a high-impact federation were included: (1) branding, (2) public policy, (3) quality management of service delivery, (4) innovation, and (5) brokering of services. The information resulting from this study provides insight into the complexity of federation organizational operations and practical advice regarding activities that can be utilized to create impact in the five areas explored in this study.
|
|
Session Title: Using Emerging Technologies to Improve Evaluation Practice Across Contexts
|
|
Panel Session 550 to be held in Sebastian Section I4 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Research on Evaluation TIG
|
| Chair(s): |
| Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
|
| Discussant(s):
|
| Susan Kistler, American Evaluation Association, susan@eval.org
|
| Abstract:
These panel presentations look beyond familiar off-the-shelf web-based applications to a variety of emerging technologies that are changing the way evaluators work. Findings from investigations and applications of specialized software tools will be described and demonstrated, including (1) software that supports the development of complex conceptual frameworks, theories of change, and program theories to guide evaluation practice; (2) software that enables evaluation communication, reporting, and teaching at a distance across dispersed contexts; (3) applications of user-friendly (and free) GIS tools for improving evaluation; (4) research on a web-based visual survey tool that collects program theories that are used to characterize and better understand diverse perspectives; and (5) findings from the application of technologically-oriented tools in alignment with the principles of empowerment evaluation, fostering improvement, capacity building, and accountability.
|
|
Using New Technologies to Better Address Context, Complexity, and To Build Evaluation Capacity
|
| Shabnam Ozlati, Claremont Graduate University, shabnam.ozlati@cgu.edu
|
| Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu
|
|
As evaluation has taken on more complex interventions and evaluands, parsimonious logic models, program theories, and conceptual frameworks have become more difficult to develop. In this presentation, we will illustrate some of the latest breakthroughs in capturing context and multiple levels of complexity in evaluation practice. Insights and findings from investigating the use of complex, interactive conceptual frameworks enhanced by software to improve complex evaluations will be discussed. In addition, new software to improve the communication and reporting of evaluation findings, as well as the teaching of evaluation from a distance across diverse contexts, will be explored. A special emphasis will be placed on demonstrating how these new technologies can be used to build evaluation capacity.
|
|
|
How to Utilize User-Friendly and Free Geographic Information System (GIS) Tools for Evaluation Practice
|
| Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
|
|
This presentation will focus on the application of user-friendly (and free) GIS tools for evaluation practice. Many of the existing GIS software packages require large commitments of time and money to fully utilize, but there are accessible web-based programs such as MSN Live(tm) Maps and Google(tm) Maps that can provide evaluators with powerful mapping abilities to represent complex contexts. Although this type of software is constantly evolving with new features being added, many evaluators have yet to tap into its full potential. These tools can be used to geographically represent program impact, program evolution, and the stories that emerge from different locations. Examples will be offered to demonstrate the various applications of these tools, along with information on how to access and utilize each of them.
| |
|
A Visual Program Theory Survey Tool
|
| John Gargani, Gargani + Company, john@gcoinc.com
|
|
Each of us has our own explanation for why a program or policy will work (or fail). In many respects, it is the degree of difference in those explanations that determines the degree of controversy that program staff and policymakers face. This presentation describes an investigation of a web-based visual survey tool that collects program theory diagrams from a large number of stakeholders. The resulting diagrams are then used as data to characterize the diverse perspectives of individuals, the degree of consensus within and across groups, and other cognitive characteristics of respondents. An example from its application in the field will be provided.
| |
|
Empowerment Evaluation: Technological Tools of the Trade
|
| David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
|
|
Empowerment evaluation is an approach to evaluation that is designed to facilitate participation, collaboration, and empowerment. There are many tools used to conduct an empowerment evaluation. A few technological tools of the trade are discussed in this presentation. They include: collaborative web sites, videoconferencing on the Internet, blogging, online surveys, Google Docs and Spreadsheets, and sharing digital photographs on the Internet. The overriding principle guiding the use of technologically-oriented tools is that they are in alignment with the principles of empowerment evaluation, fostering improvement, capacity building, and accountability.
| |
|
Session Title: Appropriate Methods in International Evaluation in Agriculture, Media, Child Protection, and Across Sectors
|
|
Multipaper Session 551 to be held in Sebastian Section L1 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Mary Crave, University of Wisconsin-Extension, crave@conted.uwex.edu
|
|
Context Matters: Longitudinal and Comparative Analysis of Monitoring Data on Organizational Strengthening Generated With the Agriterra Profiling Tool
|
| Presenter(s):
|
| Giel Ton, Agricultural Economics Research Institute, giel.ton@wur.nl
|
| Abstract:
The organizational performance of 58 farmers' organizations across 30 countries was measured with quantitative indicators largely derived from panels that assessed organizational capabilities. The paper analyses the usefulness of the data for two information needs of the commissioning donor organization: 1) longitudinal assessment of capacity building; and 2) comparative analysis to discover room for cross-organizational learning. The longitudinal analysis of the data points to a bias related to the evaluation context. Data collected by panels at different points in time highlight different strengths and weaknesses than scores from panels that assessed both current and past performance at the same evaluation moment. The comparative analysis indicates that panel scores are correlated with the organizations' geographical location, type of service provisioning, and level of aggregation. Differences in panel scores seem to reflect context-specific functions and not necessarily differences in organizational performance.
|
|
Evaluation and Survey Design in Developmental Contexts
|
| Presenter(s):
|
| Marc Shapiro, Millennium Challenge Corporation, shapiromd@mcc.gov
|
| Abstract:
Since the Paris Declaration, many donors have pledged to improve the use of monitoring and evaluation in their programs. The contexts for these evaluations and the use of surveys in international projects differ in several respects from those in most developed contexts and often lag in quality, but the underlying rationales for good evaluations remain the same. These differences affect how evaluations should be designed to measure the outcomes and impacts of these types of interventions, as well as the procedures used to conduct the evaluations. This presentation is intended to spark discussion of lessons learned across countries, sectors, and donors. It uses the sectors of education, knowledge management, and information technology for development as examples but focuses more broadly on key differences that affect the design and implementation of surveys specifically and evaluations more generally.
|
|
Evaluation Practice in Developing Countries: The Case of the 'Protection and Justice for Children Project' in Ethiopia
|
| Presenter(s):
|
| Simon Molla Gelaye, Ministry of Finance and Economic Development, simonmolla@yahoo.com
|
| Abstract:
Background
"Protection and Justice for Children Project" is a development project funded by UNICEF and implemented by the Federal Supreme Court, Ministry of Labor and Social Affairs, Ministry of Information, Human Rights Commission, and Ministry of Finance and Economic Development. Looking at the evaluation process in the project, this paper intends to shed light on evaluation experience in developing countries.
Objectives
The objectives of the study are:
-To analyze the methodologies the implementing partners adopted to monitor and evaluate the project and the problem-solving techniques they use;
-To look at the major achievements of the project;
-To assess if and how the political context shapes the methods of evaluation;
-To identify the challenges met in the monitoring and evaluation process.
Methodology
-Desktop review of all the relevant secondary resources;
-In-depth interviews with key informants/decision makers at various levels in the implementing partners;
-Participant observation
|
|
Contexts for Evaluating Media Work on Women and Agriculture: A Multi-sited Project in Africa
|
| Presenter(s):
|
| Patty Hill, EnCompass LLC, phill@encompassworld.com
|
| James Stansbury, EnCompass LLC, jstansbury@encompassworld.com
|
| Abstract:
Varying contexts matter for the evaluation efforts we can bring to bear on projects, as well as for the implementation of project work itself. This presentation focuses on an evaluation of a media project aimed at enhancing coverage of agriculture and women in rural development in three African countries: an arid Francophone country, an equatorial nation notable for its productivity and high agricultural engagement, and a southern African country challenged with expanding its economy away from mining. The paper will explore how the evaluation addresses an implementation model that is consistent across sites, while capturing the variations in national and local contexts related to news and feature content, the sources and angles journalists engage, and the overall ways in which the project proceeds. It will also explore the wider context of the challenges of the global economic downturn and spreading food crises.
|
| | | |
|
Session Title: Evaluation in Contested Spaces
|
|
Panel Session 552 to be held in Sebastian Section L2 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the International and Cross-cultural Evaluation TIG
|
| Chair(s): |
| Ross VeLure Roholt, University of Minnesota, rossvr@umn.edu
|
| Abstract:
International, humanitarian, and other aid agencies require evaluation for accountability and program improvement. Increasingly, evaluation has to be undertaken in communities under conditions of violent division. There is practice wisdom about how to conceptualize and implement this work, but it is not easily accessible, nor is social research conducted under such conditions. This panel will offer a public, professional space for describing, clarifying and understanding this work, suggesting practical strategies, tactics and tools. Research on evaluation practice under these conditions will also be covered. A relevant bibliography will be distributed.
|
|
Conducting Evaluation in Contested Spaces: Describing and Understanding Evaluation Under These Conditions
|
| Ross VeLure Roholt, University of Minnesota, rossvr@umn.edu
|
|
Ross VeLure Roholt lived and worked in Belfast, Northern Ireland, and in Ramallah and Gaza, Palestine. During this time, he designed and worked on several evaluation studies of youth programs, youth services, museum exhibitions, and quality assurance in those settings. His evaluation experience under violence and post-violence conditions will be described and joined to other evaluation studies conducted under similar conditions, gathered from practitioners and researchers for a completed special issue of Evaluation and Program Planning co-edited by Ross VeLure Roholt and Michael Baizerman. Its focus is describing the challenges and strategies for evaluation work under these conditions, using case studies and analytic essays.
|
|
|
Crafting High Quality Evaluation in Contested Spaces: Lessons From Practice
|
| Barry Cohen, Rainbow Research Inc, bcohen@rainbowresearch.org
|
|
Barry Cohen has been Executive Director of Rainbow Research, Inc. since 1998. He has 35 years of experience in research, evaluation, planning and training in fields such as public health and eliminating health disparities; alcohol, tobacco and other drugs; violence prevention; after-school enrichment; school desegregation; systems advocacy; mentoring; social services; and welfare reform. His case study of evaluating programs in a contested space in the United States provides insights into how evaluation is shaped by local conditions and what evaluators must do to craft high quality evaluation studies under these conditions.
| |
|
Analyzing Evaluation Practice From the Perspective of Contested Spaces
|
| Michael Baizerman, University of Minnesota, mbaizerm@umn.edu
|
|
Michael Baizerman has over 35 years of local, national, and international evaluation experience. Over the last seven years he has worked with governmental and non-governmental organizations in Northern Ireland, South Africa, Israel, Palestine, and the Balkan region to document and describe youth work in contested spaces and to develop effective evaluation strategies to document, describe, and determine outcomes of this work. Responding to and expanding on the descriptions provided earlier, this presentation will interrogate evaluation practice as typically described in the North and West. It will be shown that the very nature of such spaces makes the use of normative best practices difficult, if not impossible. We use this finding to suggest practical strategies, tactics, and tools for designing and conducting program evaluation in violent and post-violent contexts.
| |
|
Session Title: No Evaluator Left Behind: Organizational Learning in K12 Settings
|
|
Multipaper Session 553 to be held in Sebastian Section L3 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
and the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Mark Zito,
University of Massachusetts, markzito14@gmail.com
|
| Discussant(s): |
| Rebecca Gajda,
University of Massachusetts, rebecca.gajda@educ.umass.edu
|
|
Building Evaluation Capacity Among School Leaders: The Development and Formative Evaluation of 'A School Leader's Guide to Evaluating Research'
|
| Presenter(s):
|
| Tamara M Walser, University of North Carolina at Wilmington, walsert@uncw.edu
|
| Abstract:
Given requirements for the implementation of research-based educational programs, it is more important than ever that school leaders be able to evaluate the research that textbook publishers, program developers, and research organizations conduct to provide the 'research-based' stamp of approval needed for program adoption. Schools and school districts must build the capacity to evaluate research appropriately. The purpose of this presentation is to describe the development and formative evaluation of 'A School Leader's Guide to Evaluating Research,' including: the established need for the guide, particularly given No Child Left Behind legislation; the research basis for development of the guide; the process and results of a formative expert review of the guide; future development and evaluation plans; and implications for practice. The presentation will focus on the contents of the guide and the expert review process and results as formative evaluation of the guide.
|
|
The Development of a Professional Learning Community Building Benchmark for Schools
|
| Presenter(s):
|
| Narongrith Intanam, Chulalongkorn University, narongrith.int@gmail.com
|
| Suwimon Wongwanich, Chulalongkorn University, wsuwimon@chula.ac.th
|
| Nuttaporn Lawthong, Chulalongkorn University, lawthong_n@hotmail.com
|
| Abstract:
The purposes of this study were (1) to develop the factors and indicators of the professional learning community (PLC), (2) to develop the criteria and indicators of the PLC building benchmark, and (3) to examine the qualities of the PLC building benchmark. The study employed a research and development methodology undertaken in two phases: (1) documentary research and a survey of 180 schools selected by two-stage random sampling, and (2) a case study using a multi-site evaluation approach at five best-practice schools in Thailand. The instruments were a coding record, a semi-structured interview, and a questionnaire. A multilevel factor mixture model of the PLC was estimated using the Mplus program, and the qualitative data were analyzed using content analysis. The results of the model conveyed the key information needed to evaluate the PLC and to develop the PLC building benchmark for schools.
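For readers unfamiliar with the two-stage sampling mentioned above, the following minimal sketch illustrates the general idea in Python: districts are sampled first, then schools within the selected districts. The sampling frame, counts, and names here are entirely hypothetical; this is not the authors' procedure or code.

    import random

    random.seed(4)
    # Hypothetical frame (illustration only): 30 districts, each with 20 schools
    frame = {f"district_{d}": [f"district_{d}_school_{s}" for s in range(20)]
             for d in range(30)}

    # Stage 1: randomly select districts; Stage 2: randomly select schools
    # within each selected district, yielding 180 schools in total.
    stage_one = random.sample(list(frame), 18)
    stage_two = [school
                 for district in stage_one
                 for school in random.sample(frame[district], 10)]
    print(len(stage_two))  # 180 sampled schools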
|
|
Organizational Learning Propelled by the Developing Stages of Evaluation: An Oriental Context of Investigation
|
| Presenter(s):
|
| Hui-Ling Pan, National Taiwan Normal University, joypanling@yahoo.com.tw
|
| Abstract:
Organizational learning is a significant topic in the literature on process use. More empirical studies in the school context are needed, since the findings would have positive implications for helping school personnel lessen resistance toward evaluation. Since 2006, the Taiwanese government has been piloting a program of teacher evaluation to assist teacher professional development. However, the program is not widely welcomed by teachers. In order to give schools a different angle from which to think about evaluation, a case study linking organizational learning and evaluation was conducted. The study aims to inform schools that a pilot program of teacher evaluation may enhance organizational learning and thereby bring about school improvement. A high school was selected for investigation and a constructive perspective was used to analyze the data. It was found that three stages of the teacher evaluation program were implemented in the case school according to its developing needs, and that organizational learning, moving through the impulsive, egocentric, interpersonal, institutional, and critical stages, was observed in an oriental school context.
|
| | |
|
Session Title: Evaluation Theories in Action
|
|
Multipaper Session 554 to be held in Sebastian Section L4 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Theories of Evaluation TIG
|
| Chair(s): |
| Bianca Montrosse,
University of North Carolina at Greensboro, bmontros@serve.org
|
|
European Union Rural Development Program 2007-13 in Italy: Meta Evaluative Narratives of Rural Development Programs Ex Ante Evaluations- Focus on Methodologies
|
| Presenter(s):
|
| Vincenzo Fucilli, University of Bari, v.fucilli@agr.uniba.it
|
| Abstract:
Planning and implementation of the European Union policy for rural areas are based on rural development programs. In current regulations, a primary role is also given to evaluation, the aim of which is to improve the quality, efficiency and effectiveness of programs. The paper stems from the observation that a multitude of evaluations of programs are available, whereas there do not seem to be comparable numbers of studies and analyses of 'the evaluation' itself, in particular analyses of evaluations as complex as those of rural development programs. The paper is a meta-evaluative exercise that tries to reconstruct the methodological path adopted by evaluators. Through analyses of ex-ante evaluation reports of the 21 Rural Development Programs in Italy, narratives of these evaluations are reconstructed with specific reference to the methodological aspects, models and procedures adopted by evaluators. The central interest is in how judgements have been constructed.
|
|
Measuring Empowerment: A Model for Use in Evaluation of Organizations That Aim to Increase Their Clients' Power
|
| Presenter(s):
|
| Lauren Cattaneo, George Mason University, lcattane@gmu.edu
|
| Aliya Chapman, George Mason University, arazvi@gmu.edu
|
| Abstract:
The compelling concept of empowerment is employed across many fields, including program evaluation. While the meaning of this concept within empowerment evaluation is clear, its use among a wide array of organizations and programs is far less precise. This murkiness in definition means that when an organization includes the empowerment of clients in its mission, the evaluator of this organization is left without a clear roadmap for conceptualization and measurement. This paper describes a model of the process of empowerment, and explains how the evaluator might employ it. The model identifies core components of the empowerment process: personally meaningful and power-oriented goals, self-efficacy, knowledge, competence, action, and impact. Individuals move through the process with respect to particular goals, doubling back repeatedly as experience promotes reflection. When an organization aims to empower its clients, its work related to each component of this process should be the evaluator's focus.
|
|
The Nexus Between Measurement and Evaluation: A Process Evaluation of a Standard Setting Procedure Using Rasch Measurement Theory and Evaluation Theory
|
| Presenter(s):
|
| Jade Caines, Emory University, jcaines@emory.edu
|
| George Engelhard Jr, Emory University, gengelh@emory.edu
|
| Abstract:
Standard setting is an area within applied measurement that involves the specification of a minimal level of performance on a particular task (Cizek, 2001; Crocker and Algina, 1986; Glass, 1978). Within diverse professional fields, panelists consider various levels of examinee performance on an assessment and then determine a cut score that represents minimum proficiency. It is, however, a highly evaluative and policy-driven process that needs further examination within a measurement and evaluation context. Since human judgment is at the heart of the standard setting process, it follows that evaluation theories should shed light on this oftentimes high-stakes, black-box procedure. Therefore, the purpose of this study is to explore various evaluation theories as a context for conceptualizing and improving evaluative judgments obtained from a particular standard-setting method called Body of Work. Data are obtained from a large Midwestern state, and cut score judgments are made for two elementary writing assessments.
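For readers new to Rasch measurement, the minimal sketch below shows the dichotomous Rasch model on which such analyses rest and how a cut score on the latent scale translates into an expected raw score. The item difficulties and cut value are invented for illustration and are unrelated to the study's data or to the Body of Work method itself.

    # Minimal Rasch-model sketch (illustrative only; not the study's analysis).
    # Under the dichotomous Rasch model, the probability of a positive response
    # depends on person ability (theta) minus item difficulty (b):
    #   P(X = 1) = exp(theta - b) / (1 + exp(theta - b))
    import numpy as np

    def rasch_probability(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    item_difficulties = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # hypothetical items
    cut_theta = 0.5  # hypothetical cut score on the latent scale

    expected_score = rasch_probability(cut_theta, item_difficulties).sum()
    print(f"Expected raw score at the cut: {expected_score:.2f} of {item_difficulties.size}")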
|
| | |
|
Session Title: Stakeholder Engagement in Extension Education
|
|
Multipaper Session 555 to be held in Suwannee 11 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Extension Education Evaluation TIG
|
| Chair(s): |
| Karen Ballard,
University of Arkansas, kballard@uaex.edu
|
|
Approaching Extension Evaluation From the Beneficiaries' Point of View
|
| Presenter(s):
|
| Virginia Gravina, Universidad de la republica, virginia@fagro.edu.uy
|
| Virginia Rossi, Universidad de la republica, cairossi@adinet.com.uy
|
| Pedro De Hegedus, Universidad de la republica, phegedus@adinet.com.uy
|
| Abstract:
This research study took place in Uruguay, South America, and evaluated the effects of an extension program oriented towards rural development in a small community of beef cattle family farmers. The program was implemented over six years by a multidisciplinary extension team from the Universidad de la Republica. The program goal was generating social capital among beneficiaries.
This paper's purpose is to enrich extension evaluation practice by introducing Q methodology as a powerful tool. In the extension context, Q methodology allowed us to bring out not only the cognitive domain but also the affective one, which is, in the end, the one that determines what beneficiaries are really going to do.
Three different perceptions of the program results emerged. These can be considered the ways beneficiaries understood their world, and can also be regarded as their own evaluation of the extension program.
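For readers unfamiliar with Q methodology, its core computational step is a by-person factor analysis: participants' Q-sorts are correlated with one another and the correlation matrix is factored so that each retained factor represents a shared viewpoint. The sketch below illustrates that step on simulated data; it is not the study's analysis, and all names and counts are hypothetical.

    # Minimal Q-methodology sketch (illustration only): correlate participants'
    # Q-sorts and extract shared viewpoints from the by-person correlation matrix.
    import numpy as np

    rng = np.random.default_rng(3)
    n_statements, n_participants = 30, 12
    # Each column is one participant's Q-sort (rank ordering of the 30 statements)
    q_sorts = np.argsort(rng.random((n_statements, n_participants)), axis=0)

    person_corr = np.corrcoef(q_sorts, rowvar=False)   # participant x participant
    eigenvalues, eigenvectors = np.linalg.eigh(person_corr)
    order = np.argsort(eigenvalues)[::-1]               # largest eigenvalues first
    viewpoints = eigenvectors[:, order][:, eigenvalues[order] > 1]
    print("Shared viewpoints retained:", viewpoints.shape[1])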
|
|
Engaging Youth in Program Evaluation: Using Clickers to Make Evaluation Fun!
|
| Presenter(s):
|
| Lynne Borden, University of Arizona, bordenl@ag.arizona.edu
|
| Joyce Serido, University of Arizona, jserido@email.arizona.edu
|
| Christine Bracamonte-Wiggs, University of Arizona, cbmonte@email.arizona.edu
|
| Abstract:
There are many challenges in collecting program evaluation data from high-risk and disenfranchised youth. Lack of trust and concerns about how the information will be used deter many youth from participating in focus groups or interviews. Many young people find filling in surveys, whether online or in paper-and-pencil format, to be boring and meaningless. Moreover, a growing number of youth lack the reading skills needed to understand and accurately respond to the questions. Technology, specifically wireless student response systems, or 'clickers,' may provide a promising alternative for collecting data with young people. In this session, we present our findings using clicker technology to evaluate community programs serving at-risk and disadvantaged youth populations in both urban and rural settings. We will administer an interactive survey during the session to provide participants a hands-on demonstration of the approach.
|
|
Evaluation Planning for 4-H Science, Engineering and Technology Programs: A Portfolio From Cornell Cooperative Extension
|
| Presenter(s):
|
| Monica Hargraves, Cornell University, mjh51@cornell.edu
|
| Abstract:
Science, Technology, Engineering and Mathematics education has become a national priority in many arenas, including the 4-H system. The Cornell Office for Research on Evaluation (CORE) is conducting an NSF-funded research project that established 'Evaluation Partnerships' with twenty Cornell Cooperative Extension Offices to develop Evaluation Plans for 4-H Science, Engineering and Technology (SET) programs. The Evaluation Partnerships follow a Systems Evaluation Protocol developed by CORE that includes stakeholder and lifecycle analyses, and logic and pathway modeling. The selected 4-H SET programs cover a range of science topic areas and program delivery modes. This paper describes and analyzes the evaluation plans developed by this cohort, and examines the types of evaluation questions, measures, and designs identified in these plans. Of particular interest are questions of commonality of needs and transferability of solutions. The paper considers the implications of this research for evaluation theory and practice.
|
| | |
|
Session Title: How Practitioners and Evaluators Can Use Binary Logistic Regression and Other Methods in a Realist Evaluation of What Interventions Work and in What Circumstances
|
|
Demonstration Session 556 to be held in Suwannee 12 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Social Work TIG
|
| Presenter(s): |
| Mansoor Kazi, University at Buffalo - State University of New York, mkazi@buffalo.edu
|
| Rachel Ludwig, Chautauqua County Department of Mental Health, mesmerr@co.chautauqua.ny.us
|
| Melody Morris, Chautauqua County Department of Mental Health, morrisml@co.chautauqua.ny.us
|
| Connie Maples, ICF Macro International, connie.j.maples@orcmacro.com
|
| Patricia Brinkman, Chautauqua County Department of Mental Health, brinkmap@co.chautauqua.ny.us
|
| Mary McIntosh, University at Buffalo - State University of New York, mary_mcintosh@msn.com
|
| Doyle Pruitt, University at Buffalo - State University of New York, dpruitt14@hotmail.com
|
| Ya-Ling Chen, University at Buffalo - State University of New York, yc96@buffalo.edu
|
| Abstract:
This demonstration will illustrate new data analysis tools drawn from both the efficacy and epidemiology traditions to investigate patterns in relation to outcomes, interventions, and the contexts of practice. The demonstration will include the use of binary logistic regression models and regression discontinuity designs, with real practice examples drawn from the SAMHSA-funded System of Care community mental health services for children with serious emotional disturbance and their families in Chautauqua County, New York State. The presenters and facilitators include a combined team of evaluators and system of care project workers who will use datasets from their completed evaluations and discuss real-world applications of the analyses and their contribution to learning from evaluation. The didactic approach will be interactive, guiding workshop participants through the requirements and limitations of each method and demonstrating its use with practice examples, e.g., from the Department of Mental Health.
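As orientation to the first method named above, the sketch below fits a binary logistic regression of a dichotomous service outcome on an intervention flag and one context variable. The data are simulated and every variable name is hypothetical; this is a minimal illustration, not the presenters' actual analysis or dataset.

    # Minimal binary logistic regression sketch (illustration only).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    received_service = rng.integers(0, 2, n)            # 1 = received intervention
    age_at_intake = rng.integers(11, 18, n)
    # Simulated outcome: log-odds depend on service receipt and age (for the sketch only)
    logit_p = -1.0 + 1.2 * received_service + 0.05 * (age_at_intake - 14)
    improved = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    data = pd.DataFrame({"improved": improved,
                         "received_service": received_service,
                         "age_at_intake": age_at_intake})

    model = smf.logit("improved ~ received_service + age_at_intake", data=data).fit()
    print(model.summary())
    print(np.exp(model.params))   # exponentiated coefficients read as odds ratios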
|
|
Session Title: Proven and Creative Strategies and Techniques for Teaching Evaluation
|
|
Demonstration Session 557 to be held in Suwannee 13 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Presenter(s): |
| Katye Perry, Oklahoma State University, katye.perry@okstate.edu
|
| Rama Radhakrishna, Pennsylvania State University, brr100@psu.edu
|
| Gary Skolits, University of Tennessee at Knoxville, gskolites@utk.edu
|
| Katye Perry, Oklahoma State University, katye.perry@okstate.edu
|
| Mwarumba Mwavita, Oklahoma State University, mikanjuni@yahoo.com
|
| Kristi Lekies, The Ohio State University, lekies.1@cfaes.osu.edu
|
| Cheryl Meyer, Wright State University, cheryl.meyer@wright.edu
|
| Jennifer Jewiss, University of Vermont, jennifer.jewiss@uvm.edu
|
| Theresa Murphrey, Texas A&M University, t-murphrey@tamu.edu
|
| John LaValle, Claremont Graduate University, john.lavelle@cgu.edu
|
| Theresa Murphrey, Texas A&M University, t-murphrey@tamu.edu
|
| José Maria Diaz Puente, Universidad Politécnica de Madrid, jm.diazpuente@upm.es
|
| Michael Newman, Mississippi State University, mnewman@humansci.mmstate.edu
|
| Kylie Hutchinson, Community Solutions Planning & Evaluation, kylieh@communitysolutions.ca
|
| Zandra Gratz, Kean University, zgratz@aol.com
|
| Discussant(s): |
| Randall Davies, Brigham Young University, randy.davies@byu.edu
|
| Abstract:
This Demonstration Session has as its target those who teach program evaluation or have an interest in doing so. It is unique in that it will allow for multiple demonstrations from those who teach program evaluation courses. Specifically, it will provide an opportunity and time to share strategies/techniques that have worked well for those who teach program evaluation to students who represent the same or varied disciplines. Handouts will be shared with those in attendance.
|
|
Session Title: Contexts Matter: Contextual Factors Shaping Educational Program Evaluation
|
|
Multipaper Session 558 to be held in Suwannee 14 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Chair(s): |
| Xuejin Lu,
Children's Services Council of Palm Beach County, kim.lu@cscpbc.org
|
| Discussant(s): |
| Jacqueline Stillisano,
Texas A&M University, jstillisano@tamu.edu
|
|
Rural Schools and Uncle Sam: Context Matters
|
| Presenter(s):
|
| Zoe Barley, Mid-continent Research for Education and Learning, zbarley@mcrel.org
|
| Sandra Wegner, Wegner Consulting, sandrawegner611@hotmail.com
|
| Abstract:
This presentation reports on nine case studies of rural schools with low student participation in Supplemental Educational Services (SES) and describes the context that complicates rural school implementation. NCLB mandates that Title I schools in their second year of school improvement provide SES (tutoring) for low-income students. Funding at local and state levels comes out of current Title I funds (20% for districts). The state role is to establish an approved service provider list (eligible schools may not be providers), monitor, and evaluate the program. Parent choice is the driver of student participation. In these nine schools, rural parents distrust outsiders and question the value of their programs. Parental poverty limits use of online program delivery and/or transportation to services. Providers struggle to find cost-effective solutions. One more successful school suggests that more nuanced adaptations can make implementation work. Funding for a stronger state assistance role is recommended.
|
|
Context Matters: How Context Impacts Implementation of the Federal Smaller Learning Communities Program Across Sites
|
| Presenter(s):
|
| Beth-Ann Tek, Brown University, beth-ann_tek@brown.edu
|
| Abstract:
The U.S. Department of Education's Smaller Learning Communities (SLC) program provides funding to assist districts as they plan, implement, or expand SLC activities and components in schools. It is believed that SLC components (i.e., block scheduling, Advisory, and cohorts of teachers with shared students) will improve school climate, create personalized relationships between teachers and students, and foster collaboration among teachers. However, the federal program's flexible guidelines, along with varying needs from district to district, contribute to the varied implementation of the SLC program across sites. Context matters, as the implementation of components ranges from simply changing the scheduling of the academic day into a block schedule, to adding an Advisory, to grouping students into academies, to a combination of all three and more. Three case studies are presented highlighting the role of context and the methods used to capture this important element and its impact on implementation.
|
|
Adding Value to the Value-added Evaluation Approach: Linking to the Context of Educational Programs
|
| Presenter(s):
|
| Chi-Keung Chan, Minneapolis Public Schools, alex.chan@mpls.k12.mn.us
|
| David Heistad, Minneapolis Public Schools, david.heistad@mpls.k12.mn.us
|
| Mary Pickart, Minneapolis Public Schools, mary.pickart@mpls.k12.mn.us
|
| Colleen Sanders, Minneapolis Public Schools, colleen.sanders@mpls.k12.mn.us
|
| Jon Peterson, Minneapolis Public Schools, robert.peterson@mpls.k12.mn.us
|
| Abstract:
This paper illustrates how to link value-added evaluation findings on student achievement to the context of three educational programs implemented in Minneapolis Public Schools (MPS). The first example is the evaluation of the instructional practices of kindergarten literacy teachers; the presenters will illustrate how to combine value-added findings with video-taped observations to validate best practices of early literacy instruction. The second example is the evaluation of one of the NCLB compliance programs, Supplemental Educational Services (SES); the presenters will illustrate how to use value-added results with on-site observations to identify the strengths and weaknesses of various SES providers. The last example is the evaluation of a federally funded after-school enrichment program, the 21st Century Community Learning Centers; the presenters will illustrate how to integrate value-added data with staff and participant survey data and program documentation to modify and improve the program components.
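As background for readers unfamiliar with value-added modeling, one common formulation is a mixed-effects model that conditions current achievement on prior achievement and reads the school- or provider-level random intercepts as estimates of value added. The sketch below illustrates that idea on simulated data; the variable names (post_score, pre_score, provider) are hypothetical, and this is not the MPS model.

    # Minimal value-added sketch (illustration only; not the MPS analysis).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    providers = np.repeat([f"provider_{i}" for i in range(10)], 30)
    provider_effect = np.repeat(rng.normal(0, 3, 10), 30)      # simulated value added
    pre_score = rng.normal(50, 10, providers.size)
    post_score = 5 + 0.9 * pre_score + provider_effect + rng.normal(0, 5, providers.size)

    data = pd.DataFrame({"post_score": post_score,
                         "pre_score": pre_score,
                         "provider": providers})

    model = smf.mixedlm("post_score ~ pre_score", data, groups=data["provider"]).fit()
    print(model.summary())
    print(model.random_effects)   # estimated intercept per provider ~ value added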
|
|
Critical Issues When Starting a Center for Assessment in Education: Taking Advantage of a Challenging Context
|
| Presenter(s):
|
| Alexis Lopez, University of Los Andes, allopez@uniandes.edu.co
|
| Juny Montoya, University of Los Andes, jmontoya@uniandes.edu.co
|
| Diogenes Carvajal, University of Los Andes, dio-carv@uniandes.edu.co
|
| Gary Cifuentes, University of Los Andes, gcifuent@uniandes.edu.co
|
| Abstract:
Educational evaluation in the Colombian context is usually used as a tool for selection, exclusion, classification and/or control, and it often has a negative impact on students as well as on educational institutions. The Center for Research and Development in Education, at the University of Los Andes, created the Center for Assessment in Education. Its basic principles include promoting formative, authentic, participatory, and improvement-focused evaluation. In this paper, we present the way in which this Center will take advantage of some critical context issues in an effort to make an impact in the Colombian educational evaluation culture, and change the negative perception that stakeholders usually have about assessment and evaluation. Previous evaluation processes, both successful and unsuccessful, are used as reference to understand the challenges that the Center faces.
|
| | | |
|
Session Title: Benchmarking to Improve Student Evaluation Standards and Practices
|
|
Demonstration Session 559 to be held in Suwannee 15 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Presenter(s): |
| Katharine Cummings, Western Michigan University, katharine.cummings@wmich.edu
|
| Lindsay Noakes, Western Michigan University, lindsay.noakes@wmich.edu
|
| Emily Rusiecki, Western Michigan University, emily.rusiecki@wmich.edu
|
| Hazel L Symonette, University of Wisconsin Madison, hsymonette@odos.wisc.edu
|
| Abstract:
The familiarity and use of the Student Evaluation Standards are surprisingly limited considering the influence that the Joint Committee has had on the field of evaluation. In an attempt to increase awareness of the Standards, link the Standards with Assessment for Learning, and improve the overall quality of student evaluation practices, the Joint Committee on Standards for Educational Evaluation and the National Science Foundation have sponsored the development of a benchmarking tool kit for educators. This tool kit, which is being piloted in a variety of contexts across the nation, should also prove useful in the upcoming revision of the Student Evaluation Standards. This session will provide a step-by-step guide to benchmarking student evaluation practices, discuss examples of current and future research studies, and engage participants in a discussion of ways to use benchmarking research to improve the quality and use of the Student Evaluation Standards.
|
|
Session Title: Connections: People, Data and Technology
|
|
Multipaper Session 560 to be held in Suwannee 16 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Non-profit and Foundations Evaluation TIG
|
| Chair(s): |
| JC Smith,
University of South Florida, jcsmith6@mail.usf.edu
|
|
Moving From Here to Where? Using Evaluation to Support Nonprofit Network Development
|
| Presenter(s):
|
| Barbara Acosta, George Washington University, bacosta@ceee.gwu.edu
|
| Janet Orr, Teal Services, jkorr@tealservices.net
|
| Nicole Campbell, Deutsche Bank Americas Foundation, nicole.campbell@db.com
|
| Karen Kortecamp, George Washington University, karenkor@gwu.edu
|
| Abstract:
This presentation reports on an evaluation conducted for a foundation and its emerging network of grantees in a large Eastern city. The challenge for evaluators was to support the emergence of a shared vision for ten grantees with somewhat related yet highly specific goals that differed widely in terms of target populations and strategies. It will focus on lessons learned from the collaborative development of a theory of change through use of a logic model, and the ways in which this kind of evaluation process can support the development of nonprofit networks.
|
|
Multi-Method Participatory Evaluation of a Community Beautification Program
|
| Presenter(s):
|
| Thomas Reischl, University of Michigan, reischl@umich.edu
|
| Susan Franzen, University of Michigan, sfranzen@umich.edu
|
| Susan Morrel-Samuels, University of Michigan, sumosa@umich.edu
|
| Julie Allen, University of Michigan, joallen@umich.edu
|
| Alison Grodzinski, University of Michigan, alisonrg@umich.edu
|
| Marc Zimmerman, University of Michigan, marcz@umich.edu
|
| Abstract:
This presentation will describe the development, implementation, and preliminary results of a collaborative, multi-method evaluation of the Ruth Mott Foundation's beautification program in Flint, MI. We will describe how we engage stakeholders in a participatory process to guide the evaluation, facilitate critical deliberations about effective strategies, document and disseminate the results of the multi-method evaluation study, and increase the capacity of the Ruth Mott Foundation and its grantees to plan and complete beautification programs that achieve measurable outcomes. The multiple methods include GIS mapping of beautification project sites, internet-based project reporting for beautification grantees, community-wide outcome assessments, and case studies of successful projects.
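For readers unfamiliar with the GIS mapping step, the sketch below shows one common way to turn a table of project sites with coordinates into a geospatial layer that can be mapped or joined to neighborhood boundaries, using the geopandas library. The coordinates, column names, and projects are invented for illustration; this is not the evaluation team's actual workflow.

    # Minimal GIS sketch (illustration only): map hypothetical project sites.
    import pandas as pd
    import geopandas as gpd

    sites = pd.DataFrame({
        "project": ["community garden", "mural", "vacant lot cleanup"],
        "lon": [-83.69, -83.70, -83.68],   # rough Flint, MI area longitudes
        "lat": [43.01, 43.02, 43.00],
    })

    gdf = gpd.GeoDataFrame(sites,
                           geometry=gpd.points_from_xy(sites["lon"], sites["lat"]),
                           crs="EPSG:4326")
    print(gdf.head())
    # gdf.plot() would render the sites; a spatial join (gpd.sjoin) to neighborhood
    # polygons could link sites to community-level outcome data.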
|
|
Connective Tissue: The Learning Opportunities of Online Communities
|
| Presenter(s):
|
| Gayle Peterson, Headwaters Group Philanthropic Services, gpeterson@headwatersgroup.com
|
| Emily Warn, SocialQuarry, emily_warn@yahoo.com
|
| Abstract:
Online technologies and social networks are helping run and win political and public advocacy campaigns, and helping implement and evaluate efforts to achieve an organization's mission. They have fundamentally changed how organizations, grantees, and the communities they serve communicate, collaborate, and learn together. Increasingly, organizations are exploring how to build effective social networks that help them generate grantmaking strategies, coordinate activities among grantees working on a common goal, and transform evaluation into an ongoing process. However, online technologies and social networks can also strain an organization's capacity and budget and hamper their use as learning tools by amplifying conflicts and negative perceptions. This paper outlines the steps to fostering an online community that becomes a place where an organization and its wider community can learn collaboratively. It also offers an overview of how to transform the shared digital histories that such communities create into key tools for identifying and analyzing which strategies and programs work or don't work.
|
|
Collecting Sensitive Data From an Unstable Population
|
| Presenter(s):
|
| Courtney Brown, Indiana University, coubrown@indiana.edu
|
| Alisha Higginbotham, Indiana University,
|
| Abstract:
This paper focuses on the challenges and solutions of collecting sensitive longitudinal data from an unstable population. Examples will be provided from an evaluation of a multi-site, five-year U.S. Department of Health and Human Services grant to promote and encourage responsible fatherhood. The biggest challenge for this evaluation was collecting impact and longer-term outcome data from the father participants. Two main contextual factors were at play in this evaluation: the sensitivity of the data needed and a transient, unreliable population. The methods for working through these challenges will be discussed, including creating standard intake and outtake instruments, hiring data collectors at each of the sites, and creating a system of follow-up with each of the participating nonprofits.
|
| | | |
|
Session Title: The Adolescent Family Life Care Demonstration Project: How Context Matters When Evaluating Programs for Adolescent Parents and Their Children
|
|
Multipaper Session 561 to be held in Suwannee 17 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Human Services Evaluation TIG
|
| Chair(s): |
| Dana Keener, ICF Macro, dana.c.keener@macrointernational.com
|
| Abstract:
The Office of Adolescent Pregnancy Programs (OAPP) in the Department of Health and Human Services currently funds 31 Adolescent Family Life Care Demonstration Projects (Care Projects) across the nation in a variety of settings. Care Projects serve pregnant and parenting adolescents, their children, male partners, and family members to prevent repeat pregnancy, increase immunization adherence, and increase educational achievement among adolescent mothers. Every Care Project is required to conduct a rigorous independent evaluation of its program to contribute to the growing evidence base of what works in serving young mothers and their children. Likewise, each Care Project, and its evaluation, faces unique challenges and opportunities that are influenced by the setting in which it operates and the population it serves. This session will describe a sample of Care Projects operating in a range of settings and the implications of those settings for the evaluation process.
|
|
Overview of the Adolescent Family Life Care Demonstration Program
|
| Dana Keener, ICF Macro, dana.c.keener@macrointernational.com
|
| Alicia Richmond, United States Department of Health and Human Services, alicia.richmond@hhs.gov
|
|
Adolescent Family Life Care Demonstration Projects (Care Projects) are funded by the Office of Adolescent Pregnancy Programs to serve pregnant and parenting adolescents, their children, male partners, and family members. All Care Projects share common goals to reduce the number of repeat teen pregnancies, improve infant immunization, and increase educational attainment of adolescent parents. Every program that receives grant funds to implement a Care Project is required to include an external evaluation component. Rigorous evaluation is highly valued and supported by the funder via grant funds, training and technical assistance. The intent of the evaluation component is to reveal findings that can serve as a basis for designing future programs and to share lessons learned across projects for program improvement. This session will provide an overview of the Care program, including the range of services provided across projects and the types of settings in which the projects are implemented.
|
|
Challenges and Advantages of Evaluating Medically-Based Social Services for Pregnant and Parenting Adolescents
|
| Carol Lewis, University of Texas at Austin, carolmlewis@gmail.com
|
| Megan Scarborough, University of Texas at Austin, megan@mail.utexas.edu
|
|
Context and setting provide challenges and advantages in evaluating a medically-based, social services collaborative program that provides intensive case management for pregnant and parenting adolescents. The People's Community Clinic, a medical home for limited income, uninsured families in Travis County, Texas, has been an ideal setting for recruiting pregnant adolescents into case management, especially once clinic personnel understood how the program complemented clients' primary care. The medical home also provides an avenue for retaining clients in services. However, recruiting participants for the evaluation is more challenging due to time constraints among medical personnel and lack of understanding about evaluation among young, non-English speaking clients. Another asset of the program is collaboration between four agencies that meet weekly and work to coordinate case management, counseling, and medical services for these clients. The downside to the multi-agency collaboration is lack of name recognition for the program and added complexity for the evaluation.
|
|
Evaluating Services for Pregnant and Parenting Adolescents in a School Setting
|
| Diane Mitschke, University of Texas at Arlington, dianemitschke@uta.edu
|
| Holli Slater, University of Texas at Arlington, holli.slater@mavs.uta.edu
|
| Tori Sisk, Arlington Independent School District, tsisk@aisd.net
|
|
Healthy Families: Tomorrow's Future is a school-based comprehensive support program for adolescent pregnant and parenting students in Arlington, Texas. A mixed method evaluation compares participants who are randomly assigned to the intervention or comparison groups and complete pre and post assessments comprised of the AFL Core Evaluation as well as a Developmental Assets Inventory and Happiness Scale. Participants also complete a follow-up assessment when their child is six months of age. Because the program is school-based, both the delivery of the program itself and the implementation of the evaluation face numerous challenges as well as several strengths. This presentation will address challenges and strengths associated with recruitment, obtaining informed consent and assent, retention, and longitudinal data collection in a school-based setting.
|
|
The Choctaw Nation's Adolescent Family Life Care Program: Unique Challenges and Opportunities
|
| Judith McDaniel, McDaniel Bonham Consulting, dkrelations@aol.com
|
| Zoe E Leimgruebler, Choctaw Nation of Oklahoma, drzoe1@cox.net
|
| Angela Dancer, Choctaw Nation of Oklahoma, angelad@choctawnation.com
|
|
The Choctaw Nation's Adolescent Family Life (AFL) Care staff conducts semi-monthly home visits year-round for pregnant/parenting teenagers who live throughout the Tribe's 11,000 square-mile southeastern Oklahoma service area. The purpose of the program is to ensure that teen clients: (1) access needed health care services, (2) succeed in writing and achieving their personal goals, and (3) receive home-based instruction in prenatal, labor and delivery, and postpartum care; positive Indian parenting skills; and relationship skills. Annual quasi-experimental studies compare client progress with a comparison group matched on all variables except AFL participation. The 2008 analysis revealed statistically significant differences in clients' completion of their written goals, learning progress in all curriculum areas, participation of their babies' fathers in home-based visits, receipt of information on how to access needed health and human services, and the Choctaw Nation's success in helping clients improve relationships with their family members and babies' fathers.
|
|
Evaluating a Pregnant and Parenting Program for Homeless Youth
|
| Elizabeth Calhoun, University of Illinois at Chicago, ecalhoun@uic.edu
|
|
The Night Ministry's program serves youth who are homeless or precariously housed and pregnant and/or parenting. This care demonstration project holistically serves pregnant and/or parenting adolescents ages 13-18 within Chicago's city limits with interventions designed to address housing, strengthening support/family systems, and life skills including education, employment, parenting, and subsequent pregnancies. The evaluation is a two-arm, quasi-experimental trial using repeated measures, comparing the groups on rates of safe and stable housing, repeat pregnancy, social support, parenting skills, job readiness/attainment, and school retention/graduation. The evaluation lessons learned will address the strengths and challenges associated with recruitment, consent, and longitudinal data collection with homeless youth.
|
|
Serving Pregnant and Parenting Adolescents in a Community-Based Setting: Implications for Evaluation
|
| Dana Keener, ICF Macro, dana.c.keener@macrointernational.com
|
| Tina Gaudiano, Middle Tyger Community Center, tina.gaudiano@spart5.net
|
| Carolyn Turner, Middle Tyger Community Center, carolyn.turner@spart5.net
|
|
The Middle Tyger Community Center (MTCC) serves a rural population of adolescent parents and their children in Lyman, South Carolina. The program provides a comprehensive array of services to participants including weekly parenting classes, monthly home visits, monthly case management sessions, daily early childhood education, and semi-annual family forums. Although the program is community-based, it relies heavily on partnerships with the school district and other community agencies in order to reach its target population. This presentation will focus on the challenges and opportunities with implementing and evaluating a community-based program for adolescent mothers and their children. Specifically, strategies for recruiting and retaining participants, collecting evaluation data, and tracking services will be discussed.
|
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Culture and Context in Educational Assessment |
|
Roundtable Presentation 563 to be held in Suwannee 19 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Indigenous Peoples in Evaluation TIG
|
| Presenter(s):
|
| Ormond Hammond, Pacific Resources for Education and Learning, hammondo@prel.org
|
| Abstract:
This session will address the issues and dilemmas underlying the assessment of educational outcomes in culturally diverse classrooms. It will be based upon observations from two different projects. In the first, the goal was to develop a meaningful set of outcome indicators for a system of Native Hawaiian educational programs. The second was a project to collect and make available online a set of indigenous assessment instruments.
Issues addressed in both of these efforts included important questions related to the impact of cultural context in education. For example:
- Can classroom assessment make valid use of culturally meaningful qualitative outcome measures?
- Should an education system based on indigenous culture accept and make use of non-indigenous outcome indicators?
The goal of the session is to identify and discuss the critical issues, potential approaches to resolving them, and directions for future work in this area.
|
| Roundtable Rotation II:
Guided Action Research: A Culturally-Responsive Evaluation Methodology |
|
Roundtable Presentation 563 to be held in Suwannee 19 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Indigenous Peoples in Evaluation TIG
|
| Presenter(s):
|
| Katherine Tibbetts, Kamehameha Schools, katibbet@ksbe.edu
|
| Wendy Kekahio, Kamehameha Schools, wekekahi@ksbe.edu
|
| Abstract:
Recent research and program evaluation literature (Hood, Hopson, & Frierson, 2005; Smith, 1999; Thompson-Robinson, Hopson, & SenGupta, 2004) strongly documents the need for evaluation methods that are responsive to the values and perspectives of minority and indigenous communities. The subject of this paper is an evaluation approach that is responsive to the cultural context of the Native Hawaiian education community. In addition to being culturally respectful and responsive, the approach extends the conventional purpose of evaluation to prove or improve, by employing a meta-action-research strategy to support the transfer of skills learned at the training and assess their impact on teaching and learning through synthesis of findings across multiple action research projects. The paper reports on the successes and challenges encountered as the approach was implemented and concludes with recommendations for future practice.
|
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Character Development in Middle School Students: Culture Matters! |
|
Roundtable Presentation 564 to be held in Suwannee 20 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Presenter(s):
|
| Stephanie Schneider, Orange County Department of Education, sschneider@ocde.us
|
| Abstract:
In an ongoing program to support character development in middle schools, students are asked to complete the Character in Action Survey measuring perspectives on school climate, character development, and relevant faculty behaviors. Approximately 2200 students from 8 middle schools complete the instrument each year. Using ANOVA, statistically significant differences were found both overall and for specific subscales for racial/ethnic groups, by school site, and for the interaction of ethnicity by school. In particular, responses of students identifying themselves as 'Hispanic' varied significantly from responses of students who self-identified as 'white' or 'Asian'. In this session, the data will serve as a springboard to discuss the following:
1. What factors at school sites create different (or similar) perspectives of school climate for various cultural groups?
2. What are the cultural expectations of students that are not being met?
3. How do teacher and staff behaviors and perspectives impact student perspectives?
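For readers unfamiliar with the analysis named in the abstract above, the sketch below shows a generic two-way ANOVA testing main effects of ethnicity and school and their interaction on a survey subscale score. The data are simulated placeholders and the variable names are hypothetical; this is not the actual Character in Action Survey analysis.

    # Minimal two-way ANOVA sketch (illustration only).
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 400
    ethnicity = rng.choice(["Hispanic", "White", "Asian", "Other"], n)
    school = rng.choice([f"school_{i}" for i in range(8)], n)
    climate_score = rng.normal(3.5, 0.6, n)   # placeholder outcome on a 1-5 scale

    data = pd.DataFrame({"ethnicity": ethnicity, "school": school,
                         "climate_score": climate_score})

    model = smf.ols("climate_score ~ C(ethnicity) * C(school)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))    # F tests for main effects and interaction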
|
| Roundtable Rotation II:
Evaluating Short-Term Impacts on Student Achievement: What Does Motivation Tell Us? |
|
Roundtable Presentation 564 to be held in Suwannee 20 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Pre-K - 12 Educational Evaluation TIG
|
| Presenter(s):
|
| Elise Arruda, Brown University, elise_arruda@brown.edu
|
| Stephanie Feger, Brown University, stephanie_feger@brown.edu
|
| Abstract:
This roundtable proposal seeks to contribute to the discussion of the context of academic achievement outcomes in evaluation. We propose to highlight two evaluation studies conducted within Rhode Island where student motivation was used as an outcome measure. The roundtable discussion will focus on the reception of motivation as an indicator of impact among stakeholders and the usefulness of this measure in terms of the evaluation context. With evidence and data from two evaluation studies we will share: (1) the challenges of informing stakeholders of the research on student motivation and the importance of this outcome as a means to student achievement, (2) the statistical success of adapting student motivation surveys from the literature to a particular content area (i.e., science), (3) the process by which teachers easily integrated the student survey into the school day, and (4) the interpretation and dissemination of evaluation findings on students' motivation.
|
| In a 90 minute Roundtable session, the first
rotation uses the first 45 minutes and the second rotation uses the last 45 minutes. |
| Roundtable Rotation I:
Exploring Collaborative Planning and Evaluation With ConnectFamilias-Little Havana Community Partnership |
|
Roundtable Presentation 565 to be held in Suwannee 21 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s):
|
| Teresa Nesman, University of South Florida, nesman@fmhi.usf.edu
|
| Betty Alonso, Dade Community Foundation, betty.alonso@dadecommunityfoundation.org
|
| Myriam Monsalve-Serna, Center for Community Learning Inc, mlmonsalveserna@mac.com
|
| Antonina Khaloma, Center for Community Learning Inc, mlmonsalveserna@mac.com
|
| Maria Elena Villar, University of Miami, mvillar@miami.edu
|
| Abstract:
This roundtable discussion will be led by members of ConnectFamilias-Little Havana Community Partnership, and evaluators from the University of South Florida. The presentation will be based on experiences in collaboratively developing a theory of change and evaluation plan for ConnectFamilias, a partnership of natural helpers, providers, and community residents that addresses family and community safety within the unique context of Little Havana, Miami. Challenges and opportunities for partnership evaluation within the context of urban immigrant communities will be discussed, focusing on aspects such as capacity building, community outreach and engagement, and cultural competence.
|
| Roundtable Rotation II:
Moving Community Organizations Towards Adoption and Evaluation of Evidence-based Practices |
|
Roundtable Presentation 565 to be held in Suwannee 21 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Presenter(s):
|
| Marizaida Sanchez Cesareo, University of Puerto Rico, marisanchez@rcm.upr.edu
|
| Betzaida Santiago Rodriguez, University of Puerto Rico, santiagob@rcm.upr.edu
|
| Abstract:
Decisions to implement new programs or treatments are embedded within financial, political, and organizational contexts. Recent mandates related to grant accountability have re-shaped the selection and evaluation of community intervention programs. Organizations are urged to adopt evidence-based practices (EBPs) that have demonstrated efficacy and effectiveness, as well as to utilize rigorous outcome evaluations. However, this level of sophistication is not easily employed by many community-based organizations. This roundtable will present three examples of projects that supported adoption and/or rigorous evaluation of evidence-based practices, such as dissemination of EBPs through web-based applications, with diverse partners (e.g., the Puerto Rico Department of Children and Family Services and a Chicago community-based counseling center). Discussion will be encouraged regarding: 1) the process of collaborating with organizations to establish EBPs; 2) the ethics of collaborating with programs with limited or inconsistent evidence; 3) the meaning of culture and context within EBPs; and 4) funding EBP evaluation research.
|
|
Session Title: The Context of Evaluation: Culture as a Defining Factor
|
|
Panel Session 566 to be held in Wekiwa 3 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Multiethnic Issues in Evaluation TIG
|
| Chair(s): |
| Elmima Johnson, National Science Foundation, ejohnson@nsf.gov
|
| Discussant(s):
|
| Katrina Bledsoe, Walter R McDonald and Associates Inc, kbledsoe@wrma.com
|
| Abstract:
The context of evaluation includes attention to a range of personal, environmental and situational factors. Two of the most important are program setting and participant characteristics, including those of stakeholder groups and the evaluator. The theoretical and methodological literature details the impact of these factors on the evaluation process, from design through outcomes and impact. Because multiculturalism is the norm in this global society, some propose that the validity of the evaluation, especially of people of color or in poverty, is in question if cultural context is not addressed (Hood, Hopson and Frierson, 2005).
This panel will explore the use of evaluation strategies that are culturally relevant within a gender context and for educational evaluations. The discussion will begin with a definition of cultural context, followed by an explanation of how two dimensions (gender and educational context) influence evaluation questions, methods, analysis, dissemination, and the application of results. Examples of the role of context in actual evaluations will be provided, along with guidelines on how to effectively apply this concept.
|
|
Cultural Context in Evaluation: A Consideration of Ethnic Group
|
| Ruth Greene, Johnson C Smith University, rgreene@jcsu.edu
|
|
Culturally sensitive evaluation researchers recognize the importance of conducting culture-centered research among persons from ethnic, racial, and underrepresented minority groups. This presentation will explore relevant theoretical frameworks and models that inform culture in theory driven inquiry. Further, the presentation will review the current standards and principles which guide culturally competent evaluation and the ways in which racial and ethnic group membership has implications for the methodologies we use. Finally, this presentation will explore how culture within context affects evaluation practice.
|
|
|
The Contexts of Educational Evaluations: The Case of Urban Settings
|
| Veronica Thomas, Howard University, vthomas@howard.edu
|
|
Urban settings represent the venue where a large proportion of our nation's children are educated, particularly economically disadvantaged, minority, and immigrant students. Educational programs in these settings are intimately linked to and affected by a myriad of complex contextual forces. Evaluations of educational interventions in urban areas must be informed by the social, behavioral, political, and economic factors affecting the urban context. This presentation will argue that critical theory (CT) is a useful paradigm for guiding educational evaluators' practice in urban contexts. Further, this presentation will examine contextual issues in urban settings and offer a set of conceptual, methodological, and strategic recommendations for better ensuring that educational evaluations address these issues in ways that are meaningful, relevant, and promote social justice agendas.
| |
|
Gender as a Context in Evaluation
|
| Denice Hood, University of Illinois at Urbana-Champaign, dwhood@illinois.edu
|
|
The role of cultural context in educational evaluation methods has been addressed in the literature (Frierson, H.T., Hood, S., & Hughes, G.B., 2002). However, apart from the literature on feminist methodologies or GLBT issues, it is uncommon for gender to be specifically addressed within the discussion of cultural context. The various markers of identity (gender, race, etc.) are interconnected, and these intersections must be considered as significant social locations that impact both the evaluator and the program participants. Culture impacts all aspects of the evaluation process, and as part of each individual's culture, gender should also impact what questions are asked, what data sources are accepted as evidence, the data analysis, and how the results are communicated to various stakeholders. As Kirkhart (2005) noted, evaluative understandings and the judgments we make are culturally contextualized. Using an example of the evaluation of a technology program for girls, this paper will address ways in which the evaluation community might respond to these challenges through the use of evaluation strategies that are culturally relevant within a gender context.
| |
|
Session Title: Conceptual, Practical, and Evaluation Issues in Implementing Programs With Quality
|
|
Multipaper Session 568 to be held in Wekiwa 5 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
|
| Chair(s): |
| Pamela Imm, Lexington Richland Alcohol and Drug Abuse Council, pamimm@windstream.net
|
| Discussant(s): |
| Abraham Wandersman, University of South Carolina, wanderah@gwm.sc.edu
|
| Abstract:
Promising programs generated in research settings achieve their intended impact when they are implemented with quality and with appropriate attention to contextual features of the setting in which the program is placed. Quality, context-sensitive implementation is therefore an important bridge between science and practice. This presentation will discuss both theoretical models of implementation and practical, applied examples of field-based work. Three applied examples of ongoing implementation will be provided, each by a different panelist: an underage drinking prevention initiative driven by the Getting To Outcomes(tm) model, a school-based transitions program, and the statewide utilization of a screening tool for child and adolescent treatment services. The theoretical and practical elements of the presentation will be tied together by discussion of an Implementation Checklist, a theory-driven checklist of practical implementation steps developed for use by practitioners and researchers alike. Evaluation issues will be addressed.
|
|
A Checklist for Practical Implementation Steps and Key Considerations for Evaluating and Supporting Implementation Within Organizations
|
| Duncan Meyers, University of South Carolina, meyersd@mailbox.sc.edu
|
| Abraham Wandersman, University of South Carolina, wanderah@gwm.sc.edu
|
| Jason Katz, University of South Carolina, katzj@mailbox.sc.edu
|
| Victoria Chien, University of South Carolina, victoria.chien@gmail.com
|
|
Current evidence links implementation to positive outcomes, underscoring its importance (Durlak & DuPre, 2008). Implementation has also received heightened attention as a possible mechanism for lessening the well-publicized gap between research and practice. A systematic literature review sought to answer the question: what specific strategies do diverse implementation frameworks suggest practitioners use when integrating new practices, processes, and/or technologies into communities and/or organizations? Strategies suggested in the frameworks were categorized into one or more of five broad categories of practical implementation steps (Assessment, Inclusion, Capacity Building, Implementation Preparation, and Implementation Support) and will be presented as a checklist. This theory-driven checklist provides practical strategies to improve implementation; to help program designers, evaluators, researchers, and funders proactively plan systematic, quality implementation; and to suggest future directions for research. This presentation will discuss a) the methods and results of the literature review and b) implications for research and evaluation practice.
|
|
Implementing the GTO(tm) Model to Reduce Underage Drinking in a Statewide Initiative
|
| Pamela Imm, Lexington Richland Alcohol and Drug Abuse Council, pamimm@windstream.net
|
| Annie Wright, University of South Carolina, patriciaannwright@yahoo.com
|
|
Researchers in South Carolina, in collaboration with the RAND Corporation, have received a grant from the Centers for Disease Control and Prevention to evaluate the implementation of the Getting To Outcomes(tm) (GTO) system to reduce underage alcohol use. Three intervention counties are receiving the GTO system (e.g., tools, training, technical assistance) and are being compared with three comparison counties on a variety of outcome variables. Implementation includes training and ongoing technical assistance in all 10 steps of the GTO system, along with ongoing tracking of the technical assistance provided. The implementation issues to be discussed will focus on challenges in having the sites conduct environmental strategies (e.g., compliance checks), monitor and improve their activities, and track media opportunities to foster community awareness, concern, and action. Suggestions for how high-quality implementation of activities can contribute to sustainability will also be discussed.
|
|
Lessons Learned in the Implementation of a School-based High School Transitions Program
|
| Annie Wright, University of South Carolina, patriciaannwright@yahoo.com
|
|
The transition from middle school into high school can be a time of vulnerability, when students' academic and social well-being can decline sharply. It can, however, also be a time of important change and growth for an adolescent. Recognizing the potential pitfalls and benefits of this transition period, many high schools have begun implementing transition programs during the ninth-grade year. This presentation will describe the first two years of implementation of a comprehensive high school transition and drop-out prevention program, including plans for higher-quality implementation in the upcoming school year. Lessons learned include identifying program champions, generating faculty and administrative collaboration on the implementation and evaluation of the program over time, and developing strategic and creative measures of implementation fidelity and effectiveness. Experiences from this school setting will be integrated with the overall theoretical presentation of emerging promising and best practices in implementation.
|
|
Implementation Challenges in Utilizing a Statewide Screening Tool in Child and Adolescent Treatment Services
|
| Jason Katz, University of South Carolina, katzj@mailbox.sc.edu
|
| Pamela Imm, Lexington Richland Alcohol and Drug Abuse Council, pamimm@windstream.net
|
| Ritchie Tidwell, Tidwell & Associates Inc, tidwell@grantmaster.org
|
| Kennard DuBose, South Carolina Department of Mental Health, ked25@scdmh.org
|
| Susie Williams-Manning, South Carolina Department of Alcohol and Other Drug Abuse Services, swilliamsmanning@daodas.state.sc.us
|
|
As part of a SAMHSA-funded Child and Adolescent State Infrastructure grant, mental health, alcohol and drug, juvenile justice, and child welfare agencies in four South Carolina pilot communities have been involved in the implementation of a universal screening tool. These efforts, initiated in June 2008, are part of a larger statewide strategy of integrating services and enhancing access to care for adolescents in need. While the initiative has had benefits, including the identification of problems in adolescents that might otherwise have gone undetected, there have been several implementation challenges. These include fewer clients than expected walking through the door in small rural communities, clients refusing to share their health information with other agencies, and problems with computer access. These and other challenges to implementation will be discussed within the context of a framework for promising and best practices in implementation.
|
|
Session Title: Evaluating High-Risk, High-Reward Research Programs: Challenges and Approaches
|
|
Panel Session 569 to be held in Wekiwa 6 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| Stephanie Philogene, National Institutes of Health, philoges@od.nih.gov
|
| Discussant(s): |
| Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
|
| Abstract:
Experts believe that targeted programs and policies to support high-risk, high-reward research are needed to preserve U.S. leadership in science and technology. Programs that fund such research take many forms. Some support carefully crafted, technology-driven activities with hard milestones, while others select highly creative individuals and fund them to pursue unfettered blue-sky research. In most cases, such research has a long incubation period, and in any case it should not be conflated with "high quality" research that receives mainstream exposure and citations. Given these and other complications, how can such programs properly be evaluated? In this panel, we showcase different high-risk, high-reward research programs supported by the U.S. government, explain how each is being evaluated, and examine whether the evaluation is particularly suited to the program and its underlying philosophy. We look at four programs: the NIH Director's Pioneer Award, NSF's Small Grants for Exploratory Research, DARPA, and NIST's Technology Innovation Program.
|
|
Evaluating the National Institute of Standards and Technology's Technology Innovation Program
|
| Stephen Campbell, National Institute of Standards and Technology, stephen.campbell@nist.gov
|
|
The America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (COMPETES) Act was enacted on August 9, 2007, to invest in innovation through research and development and to improve the competitiveness of the United States. The COMPETES Act established the Technology Innovation Program (TIP) to assist U.S. businesses, institutions of higher education, and other organizations in supporting, promoting, and accelerating innovation in the United States through high-risk, high-reward research in areas of critical national need. In this talk, Steve will discuss how the process of establishing areas of critical national need shaped and was shaped by evaluation of funding efforts in these areas, as well as the fielding of a customer survey for TIP's initial competition and the data collection efforts of the new program. He will also highlight the importance of making data available to the larger research community.
|
|
|
Evaluating the National Institutes of Health's Director's Pioneer Award Program
|
| Mary Beth Hughes, Science and Technology Policy Institute, mhughes@ida.org
|
|
The National Institutes of Health Director's Pioneer Award (NDPA) was initiated in Fiscal Year 2004 to support individual investigators who display the creativity and talent to pursue high-risk, potentially high-payoff ideas in the biomedical and behavioral sciences and to fund new research directions that are not supported by other NIH mechanisms. The Science and Technology Policy Institute (STPI) has been asked by NIH to perform an outcome evaluation of this program. The outcome evaluation study design is based on two questions: (1) Are NDPA awardees conducting pioneering research with NDPA funds? and (2) What are the "spill-over" effects of the NDPA program on the Pioneers, their labs and universities, NIH, and the biomedical community? In this presentation, we discuss the NDPA program, the design of the outcome evaluation (based on in-depth interviews with the awardees and an expert panel review), the challenges associated with such an evaluation, and some preliminary results.
| |
|
Evaluating the Department of Defense's Defense Advanced Research Projects Agency
|
| Richard Van Atta, Science and Technology Policy Institute, rvanatta@ida.org
|
|
Dr. Van Atta, Core Research Staff at the Science and Technology Policy Institute (STPI), has been an official in the Department of Defense, where he served as Assistant Deputy Under Secretary for Dual Use and Commercial Programs, and a researcher at the Institute for Defense Analyses (IDA) conducting technology policy research centered on the development and implementation of emerging technologies to meet national security needs. He has conducted studies for DARPA on its past research programs and their implementation, including Transformation and Transition: DARPA's Role in Fostering an Emerging Revolution in Military Affairs (IDA, 2003), and was invited to write the introductory chapter, "Fifty Years of Innovation and Discovery," for DARPA's 50th anniversary publication. Thus, Dr. Van Atta brings both a background in assessing the technical accomplishments and research management practices of DARPA, and the DoD more broadly, and experience in conceiving and assessing high-risk, high-payoff concepts for security applications.
| |
|
Evaluating the National Science Foundation's Small Grants for Exploratory Research Program
|
| Caroline Wagner, SRI International, caroline.wagner@sri.com
|
|
This talk will highlight findings from a recently completed evaluation of NSF's 15-year Small Grants for Exploratory Research (SGER) program. The process involved a survey and interviews with NSF staff and Principal Investigators. The results revealed that the grant process has been overwhelmingly successful in meeting the goals set out for the program. A small number of grants have led to spectacular results, while the majority have not led to transformations, and some have "failed" in the sense of not producing publishable results. NSF staff valued the SGER mechanism highly as an alternative to the peer review system. In addition to enabling the funding of iconoclastic ideas that might be denied or overlooked by reviewers, the mechanism also gave junior faculty, minorities, and women, many of whom have used the boost to good effect, opportunities to establish a track record and become competitive later on.
| |
|
Session Title: New Direction of Government Research and Development Program Evaluation in Korea
|
|
Multipaper Session 570 to be held in Wekiwa 7 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Research, Technology, and Development Evaluation TIG
|
| Chair(s): |
| June Seung Lee, Korea Institute of Science and Technology Evaluation and Planning, jslee@kistep.re.kr
|
| Abstract:
The goal of this session is to provide participants with insight into current best practices and challenges in evaluating national R&D programs, government-funded research institutes, and related entities in Korea, insight that can be used for comparative, management, and policy purposes. Panelists will share Korea's expertise and experience in R&D performance evaluation, in-depth evaluation of R&D programs, and the effect of the institutional evaluation system on the performance of government-funded research institutes, and will discuss current approaches used in Korea to measure and compare the performance of R&D programs and policies.
|
|
Reformation of National Research and Development (R&D) Program Evaluation System
|
| Dong-Jae Lee, Ministry of Strategy and Finance Korea, lee4326@mosf.go.kr
|
|
Apart from the evaluation of general finance programs, a distinct performance evaluation of R&D programs has been conducted since 1999. In February 2008, responsibility for the performance evaluation of R&D programs was transferred from the National Science and Technology Council (NSTC) to the Ministry of Strategy and Finance (MOSF) following amendments to the government organization law and related laws. Meanwhile, with continued increases in government R&D investment, a performance perspective on government spending is increasingly emphasized. In response, MOSF is promoting a scheme to reorganize the R&D program performance evaluation system around 'practical R&D program performance evaluation' through 'choice and concentration'. The system has been reformed to support R&D investment efficiency through simplification of evaluation, reinforcement of departmental self-evaluation, and in-depth analysis.
|
|
In-depth Evaluation of Research and Development (R&D) Program in Korea
|
| Seung Jun Yoo, Korea Institute of Science and Technology Evaluation and Planning, biojun@kistep.re.kr
|
| Boo-jong Kil, Korea Institute of Science and Technology Evaluation and Planning, kbjok@kistep.re.kr
|
| Woo Chul Chai, Korea Institute of Science and Technology Evaluation and Planning, wcchai@kistep.re.kr
|
|
The purpose of R&D program evaluation is to improve the efficiency and effectiveness of the program. In-depth evaluation is an approach to program evaluation that addresses evaluation questions using a logic model and corresponding methods.
In in-depth evaluation, appropriate methods should be applied depending on the type of R&D program and the performance information available. Each year, about 10 R&D programs are selected for in-depth evaluation after reviewing and surveying issues raised by the evaluation group, the National Assembly, the NSTC, the Board of Audit and Inspection (BAI), and others.
Evaluation is more valuable when its results are used appropriately according to the type of correction indicated. Currently there are four types of correction: 1) improving the program delivery system, 2) coordinating budget allocation, 3) improving the research environment, and 4) consulting on program planning. In-depth evaluation reinforces qualitative analysis to identify inefficient or ineffective elements and to correct problems so that the corresponding program becomes efficient and effective.
|
|
Evaluation of National Science and Technology Innovation Capacity
|
| Seung Ryong Lee, Korea Institute of Science and Technology Evaluation and Planning, leesr7376@kistep.re.kr
|
| Chi Yong Kim, Korea Institute of Science and Technology Evaluation and Planning, cykim@kistep.re.kr
|
| Dong Hoon Oh, Korea Institute of Science and Technology Evaluation and Planning, smile@kistep.re.kr
|
|
As science and technology (S&T) has become a source of global competitiveness in the knowledge-based economy, the level of S&T capacity determines a nation's competitive power. Countries, therefore, have been increasing investment and policy support to strengthen S&T capacity.
Above all, accurate analysis and assessment of the level of a nation's S&T capability are needed to craft effective policy measures.
On the basis of the National Innovation System (NIS) framework, this paper suggests indexes covering the entire cycle of S&T innovation, creates models to measure S&T capacity comprehensively, and appraises 30 OECD member countries.
Although the IMD and WEF competitiveness reports include S&T domains, they treat S&T as just one component of a nation's competitiveness, and these surveys are insufficient to measure a nation's S&T capability synthetically and systematically.
|
|
The Effect of Institutional Evaluation System of GRIs on the Receptivity of Evaluation Result and the Performance of GRIs: Focusing on Economic and Human Society Research Council
|
| Byung Yong Hwang, Korea Institute of Science and Technology Evaluation and Planning, byhwang@kistep.re.kr
|
| Soon Cheon Byeon, Korea Institute of Science and Technology Evaluation and Planning, sbyeon@kistep.re.kr
|
| Byung Tae Yoo, Hanyang University, btyoo@hanyang.ac.kr
|
|
The purpose of the study is to propose plans for improving the evaluation systems that research councils apply to government-funded research institutes (GRIs) in order to strengthen GRI performance, focusing on the National Research Council.
An analysis of 23 research institutes and 551 of their employees confirms that the Council's institutional evaluation system for GRIs has a positive influence on institute performance, and that receptivity to evaluation results is related to institute performance. In addition, institutes with high performance generally show a high level of receptivity to evaluation results along with a positive view of the institutional evaluation system.
Based on these findings, the study proposes several ways to improve the institutional evaluation system: shifting the current evaluation system toward in-depth performance evaluation, motivating GRIs, and focusing on optimizing performance. It also suggests ways to enhance receptivity to evaluation results and improve performance.
|
|
Session Title: Using Qualitative Inquiry to Address Contextual Issues in Evaluation
|
|
Multipaper Session 571 to be held in Wekiwa 8 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Qualitative Methods TIG
|
| Chair(s): |
| Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
|
| Discussant(s): |
| Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
|
|
Evaluation Context and Valuing Focus Group Evidence
|
| Presenter(s):
|
| Katherine Ryan, University of Illinois at Urbana-Champaign, k-ryan6@illinois.edu
|
| Abstract:
The notion of context and its varied meanings in evaluation are central to evaluation designs, methods, and data collection strategies, influencing what is 'possible, appropriate, and likely to produce actionable evidence' (AEA Call for Papers, 2009). In this paper, I examine three focus group approaches to determine how particular evaluation contexts influence the quality and credibility of the evidence gathered from these approaches. My examination includes a brief discussion of their respective theoretical foundations (epistemology, etc.) and implementation (structure, setting, etc.). I present a case vignette for each, illustrating how these focus group approaches are utilized in evaluation.
Notably, the value and quality of evidence differ depending on such factors as the nature of the evaluation questions, the characteristics of the studied phenomena, and evaluation constraints (Julnes & Rog, 2008). To study the relationship between these contextual factors and the soundness of evidence from these focus group approaches, I draw on Guba and Lincoln's standards for judging the quality of qualitative data, including the criteria of credibility, transferability, dependability, and confirmability.
|
|
Teaching/Learning Naturalistic Evaluation: Lived Experiences in an Authentic Learning Project
|
| Presenter(s):
|
| Melissa Freeman, University of Georgia, freeman9@uga.edu
|
| Deborah Teitelbaum, University of Georgia, deb.teitelbaum@yahoo.com
|
| Soria Colomer, University of Georgia, scolomer@uga.edu
|
| Sharon Clark, University of Georgia, jbmb@uga.edu
|
| Ann Duffy, University of Georgia, ann.duffy@glisi.org
|
| San Joon Lee, University of Georgia, lsj0312@uga.edu
|
| Dionne Poulton, University of Georgia, dpoulton@uga.edu
|
| Abstract:
This paper describes the inherent complexities of teaching and learning naturalistic evaluation using authentic learning projects. Using our lived experiences as teacher and student as we make sense of the lived experiences of stakeholders and, thus, the context and effect of the evaluand, we explore three points of juncture between doing and learning that characterize the essential features and challenges of naturalistic evaluation: 1) personal experiences as intentional relationships to programs, 2) letting go of prejudgments in order to grasp what is and what might be, and 3) working inductively as a group while avoiding the blind person's elephant. We draw on Saville Kushner's Personalizing Evaluation to contrast naturalistic evaluation approaches that emphasize stakeholders' emic perspectives with one that favors a critical interpretation of meaning as residing in experience notwithstanding stakeholders' perspectives. We conclude with an overview of the parallels between authentic learning and naturalistic evaluation.
|
|
To Mix or Not to Mix: The Role of Contextual Factors in Deciding Whether, When and How to Mix Qualitative and Quantitative Methods in an Evaluation Design
|
| Presenter(s):
|
| Susan Berkowitz, Westat, susanberkowitz@westat.com
|
| Abstract:
Drawing on 20+ years of experience in a contract research setting, this paper will extrapolate lessons learned about contextual factors most propitious to mixing qualitative and quantitative methods in a given evaluation design. It will discuss the role of different levels and types of contextual factors in informing the decision as to whether, when, and how to combine methods. These factors include: a) the setting of and audiences for the research; b) the funder's goals, objectives, expectations, and knowledge and understanding of methods; c) the fit between the qualitative and quantitative design components as reflected in the framing of the evaluation questions and underlying conceptual model for the study; and, d) the evaluators' skills, sensibilities, expertise and mutual tolerance, including shared 'ownership' of the design and the ability to explain the rationale for mixing methods to diverse, sometimes skeptical, audiences.
|
| | |
|
Session Title: Learning Evaluation Through Apprenticeship: A Continuing Conversation of Evaluation Practice and Theory From the Trenches
|
|
Panel Session 573 to be held in Wekiwa 10 on Friday, Nov 13, 1:40 PM to 3:10 PM
|
|
Sponsored by the Teaching of Evaluation TIG
|
| Chair(s): |
| Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
|
| Abstract:
The Collaborative for Evaluation and Assessment Capacity (CEAC) at the University of Pittsburgh's School of Education provides a collaborative workspace where faculty and graduate students conduct evaluation work in education (preK-16), child advocacy and support, and related initiatives. Graduate students come to CEAC with little or no evaluation experience, and faculty often serve as content-specialist consultants rather than bringing any particular evaluation expertise. This panel will explore the ways in which apprenticeship (specifically for graduate students) can provide field-based training in evaluation that augments their primary studies in other areas of education, such as comparative studies, international development, educational leadership, and research methodology, while simultaneously providing high-quality evaluation services for CEAC clients. The panel will consist of a faculty member who serves as the Director of CEAC and three graduate student evaluators: one who serves as the full-time Project Manager for CEAC and two others who serve as graduate assistants. Lessons learned and dilemmas from practice will be explored from each perspective. This session continues a conversation begun at the 2008 conference, looking at lessons learned by each participant as the Center matures another year and takes on more evaluation projects.
|
|
From the Evaluation Director Perspective
|
| Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
|
|
As the director of the evaluation collaborative, I am ultimately responsible for contracts and grants, evaluation planning and development, hiring and supervision of staff and graduate students and all management and leadership aspects of the Collaborative. My role on the panel is both to manage the conversation (to assure ample coverage from each of the presenters' perspectives and the audience) and to present the issues and dilemmas of managing a large evaluation group with staff and students who have no formal evaluation training or experience.
|
|
|
From the Perspective of Full-time Staff: The Project Manager
|
| Cara Ciminillo, University of Pittsburgh, ciminillo@gmail.com
|
|
As the Project Manager for CEAC and doctoral student in educational leadership (with the Director as my academic advisor), I wear numerous hats. I am the first-line supervisor for the other graduate students who work for CEAC while also being a peer student and evaluator with them on projects. I serve as a peer evaluator with my academic advisor and employer/CEAC Director. I supervise and hire undergraduate students who work as support staff for our unit. While I had no formal evaluation experience prior to working with my advisor and CEAC, I find my continuing academic pursuits and interests being influenced by my work and my work being equally influenced by my academic interests. My perspective is varied and unique among CEAC staff and students.
| |
|
From the Perspective of Graduate Students: Assistantship to Support Graduate Study - Many Roles, Many Pulls
|
| Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
|
| Tracy Pelkowski, University of Pittsburgh, tlp26@pitt.edu
|
| Yuanyuan Wang, University of Pittsburgh, yywang.crystal@gmail.com
|
| Rebecca Price , University of Pittsburgh, rhp10@pitt.edu
|
|
As graduate assistants at CEAC and full-time students (three doctoral students and one master's student, all in social and comparative analysis in education), we serve as evaluators (three) and a communication specialist (one) for CEAC. We explore and discuss how the work of CEAC has influenced our activities, interests, experiences, and progress as graduate students and, conversely, how our education and experiences as scholar-practitioners influence our work with CEAC.
| |