| Roundtable Rotation I: Building National Evaluation Capacity in Senegal: Lessons Learned and Current Challenges |
| Roundtable Presentation 451 to be held in the Boardroom on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG |
| Presenter(s): |
| Ian Hopwood, Independent Consultant, ihopwood@orange.sn |
| Abstract: There has been increasing activity to build evaluation capacity in both the public and private sectors in Senegal. The presentation will analyze the results obtained and identify lessons learned, drawing upon recent studies and additional insights from the author's experience as former UNICEF HQ Evaluation Chief, UNICEF's Senegal Representative, and now evaluation advocate and member of the Senegalese Evaluation Network. Capacity building has included strategies to promote demand for evaluation as well as measures to meet that demand, including institutional arrangements and incentives, improving evaluation methods and practice, and training and networking. Among the key capacity challenges are the need to evaluate public policies (e.g., the national poverty strategy and implementation of the Paris Declaration on Aid Effectiveness) and how to ensure that evaluation contributes to government performance and accountability in the context of results-based management. Questions for discussion include appropriate institutional arrangements, including decentralization; how to ensure independence and quality in the context of partnerships; and the role of the international community. |
| Roundtable Rotation II: Strengthening Evaluator Competencies and Institutional Evaluation Capacity Through Professional Development Programs and Services |
| Roundtable Presentation 451 to be held in the Boardroom on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG |
| Presenter(s): |
| Sandiran Premakanthan, Symbiotic International Consulting Services (SICS), sandiran_premakanthan@phac-aspc.gc.ca |
| Abstract: The Canadian Evaluation Society has led the way in proposing competencies for Canadian evaluation practice, along with the Guidelines for Ethical Conduct and the Program Evaluation Standards, as part of an effort to develop a Professional Designation for Evaluators. The paper presents a review of the professional development programs and services (academic and private sector service providers) and institutional arrangements that cater to the needs of the community of evaluators in strengthening their competencies and professional practice. It examines the adequacy of the current professional development programs and services available in the National Capital Region (Ottawa) in meeting the future demand for evaluator training and credentialing. The author also shares his experience of a model from the American Society for Quality (ASQ), which certifies its members and awards professional designations in Quality Management Systems Auditing, and describes what is involved in maintaining the designation (recertification) through continuing education, training, and professional development activities. |
| Session Title: Body of Evidence or Firsthand Experience? Evaluation of Two Concurrent and Overlapping Advocacy Initiatives |
| Panel Session 452 to be held in Panzacola Section F1 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Advocacy and Policy Change TIG |
| Chair(s): |
| Carlisle Levine, CARE, clevine@care.org |
| Abstract: If alleviating global poverty depends on successful pro-poor policies, then CARE, like other international humanitarian organizations, can promote these policies by presenting evidence based on decades of working in more than 60 countries. With Gates Foundation support, CARE is testing this hypothesis via two initiatives. CARE's LIFT UP grant aims to build organizational capacity to more systematically use country-level evidence to influence U.S. policymakers. CARE's Learning Tours grant provides Members of Congress with firsthand experiences aimed at increasing their support for improving maternal health and child nutrition globally. Working with external evaluators Innovation Network and Continuous Progress Strategic Services/Asibey Consulting, CARE is assessing the effectiveness of these approaches. This panel will address the challenges in defining and assessing meaningful interim outcomes, and in determining the degree to which these two specific investments have indeed increased CARE's ability to influence policy change, from the perspectives of the two evaluators and CARE. |
| Session Title: Enhancing Evaluation and Evaluation Practice Using Web 2.0 |
| Panel Session 453 to be held in Panzacola Section F2 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Independent Consulting TIG |
| Chair(s): |
| Amy Germuth, EvalWorks LLC, agermuth@mindspring.com |
| Abstract: As technology advances, more tools are becoming available that have the potential to positively change the way evaluators work. In this panel, evaluators will share their professional insights on using Web 2.0 tools to enhance the evaluations they conduct and their own evaluation practices and skills. Panelists will describe ways in which they are using Web 2.0 tools to conduct virtual world evaluations and improve data collection, evaluation management, and client communication. Lessons learned will be shared, and discussion will focus on the pros and cons of these various methods and tools in the context of evaluation. The impact of Web 2.0 tools and resources on evaluation as a discipline and practice will be explored with the intent of identifying further ways that technology can advance evaluation. |
| Session Title: Illustration of Assessment Methodologies and Evaluation Approaches |
| Multipaper Session 454 to be held in Panzacola Section F3 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Health Evaluation TIG |
| Chair(s): |
| Sue Hamann, National Institutes of Health, sue.hamann@nih.gov |
| Session Title: Systems Thinking and Logic Models: Two Sides to the Same Coin? |
| Think Tank Session 455 to be held in Panzacola Section F4 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Systems in Evaluation TIG and the Program Theory and Theory-driven Evaluation TIG |
| Presenter(s): |
| Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@stanfordalumni.org |
| Discussant(s): |
| Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@stanfordalumni.org |
| Margaret Hargreaves, Mathematica Policy Research Inc, mhargreaves@mathematica-mpr.com |
| Abstract: "A picture is worth a thousand words" - how often do we hear that? Approaches to modeling a system, program, or intervention abound as do the inevitable debates over which is best. This think tank challenges participants to think of systems and logic models as complementary tools for evaluation. Examples of both types of models, as well as an example of an integrated model, will be provided and used as the basis for group discussion. Questions that will be considered include: What are the strengths and weaknesses of each? How can the two approaches to modeling be integrated to provide a framework for understanding both the program and the system in which it functions? Ultimately, it is hoped that participants will come away with new ideas about how to capture critical elements of process, context, and program theory in their own work through the use of both types of models. |
| Session Title: Consequences of Evaluation: The Power of Process Use |
| Multipaper Session 456 to be held in Panzacola Section G1 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Evaluation Use TIG, the Collaborative, Participatory & Empowerment Evaluation TIG, and the Government Evaluation TIG |
| Chair(s): |
| Jacqueline Stillisano, Texas A&M University, jstillisano@tamu.edu |
| Discussant(s): |
| Helene Jennings, ICF Macro, helene.p.jennings@macrointernational.com |
| Session Title: Needs Assessment TIG Business Meeting and Presentation: Pesky Issues in Needs Assessment and a Generic Model for Conducting Assessments |
| Business Meeting Session 457 to be held in Panzacola Section G2 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Needs Assessment TIG |
| TIG Leader(s): |
| Ann Del Vecchio, Alpha Assessment Associates, delvecchio.nm@comcast.net |
| Hsin-Ling Hung, University of Cincinnati, hunghg@ucmail.uc.edu |
| Jeffry White, Ashland University, jwsrc1997@aol.com |
| Yi-Fang Lee, National Chi Nan University, ivanalee@ncnu.edu.tw |
| Janet Matulis, University of Cincinnati, janet.matulis@uc.edu |
| Presenter(s): |
| James Altschuld, The Ohio State University, altschuld.1@osu.edu |
| Abstract: Many issues in needs assessment are often glossed over in practice as well as in the literature. Examples include: defining standards or 'what should be's'; separating needs from wants; dealing with missing data when double or triple scaling is used; combining data from mixed methods into a coherent picture of needs; how to meaningfully involve multiple constituencies in the assessment; implementing assessments that are neither too broad nor too narrow; getting needs assessment results utilized for the good of organizations; and so forth. A sampling of these problems will be described, followed by a brief discussion of a generic model for assessing needs. Attendees will have opportunities to cite problems they have encountered and possible solution strategies for overcoming them. |
| Session Title: Independent Consulting: Selected Issues and Strategies |
| Multipaper Session 458 to be held in Panzacola Section H1 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Independent Consulting TIG |
| Chair(s): |
| Pat Yee, Vital Research, patyee@vitalresearch.com |
| Discussant(s): |
| Susan Wolfe, Fort Worth Independent School District, susan.wolfe@fwisd.org |
| Session Title: Quantifying the Evidence for Psychotherapy: The German Way to Quality Control |
| Panel Session 459 to be held in Panzacola Section H2 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Lee Sechrest, University of Arizona, sechrest@u.arizona.edu |
| Discussant(s): |
| Fred L Newman, Florida International University, newmanf@fiu.edu |
| Abstract: Psychotherapy in Germany, whether ambulatory or stationary (inpatient), has come under the scrutiny of quality control. Our Mannheim research group was directly involved in delivering the evaluation plans and reporting on the outcomes for three different large-scale projects. The first was a meta-analysis of stationary psychotherapy for psychosomatic patients; the second, a place-randomized trial evaluating a computerized feedback tool mapping the progress of individual patients, financed by a health care insurance company; and the third, a similar system sponsored by the Bavarian Association of Compulsory Health Insurance Physicians (KVB). The panel intends to demonstrate the quantitative methodological tools our group implemented in these different projects and the benefits that result from applying them. |
| Session Title: International Perspectives in Evaluation in Higher Education |
| Multipaper Session 460 to be held in Panzacola Section H3 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Assessment in Higher Education TIG and the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Courtney Brown, Indiana University, coubrown@indiana.edu |
| Discussant(s): |
| George Reinhart, University of Maryland, greinhart@casl.umd.edu |
| Session Title: The Internal and External Context of Evaluation in the Non-profit Sector: Can Evaluation Capacity Building Help Non-profits Move From Accountability to Organizational Learning? |
| Panel Session 461 to be held in Panzacola Section H4 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Chair(s): |
| Joanne Carman, University of North Carolina at Charlotte, jgcarman@uncc.edu |
| Abstract: The purpose of this panel is to highlight the unique context of evaluation in the nonprofit sector and explore the extent to which specific stakeholder groups have an effect on the evaluation expectations and requirements of nonprofit organizations. In this panel, we bring together the authors of the most recent empirical research about the evaluation practices of nonprofit organizations. Our first panelist, Deena Murphy, will discuss the specific organizational characteristics that are associated with using evaluation as an organizational learning tool, as opposed to an external accountability tool. Our second panelist, Sal Alaimo, will discuss the important role of the nonprofit's board of directors. Our third panelist, Laura Pejsa, will present case study research which highlights the extent to which the evaluation requirements of funders tend to drive evaluation capacity building efforts. Our final panelist, Joanne Carman, will explore the role of nonprofit accrediting bodies. |
| Session Title: Federal Evaluation Policy and Performance Management |
| Multipaper Session 462 to be held in Sebastian Section I1 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Government Evaluation TIG |
| Chair(s): |
| David J Bernstein, Westat, davidbernstein@westat.com |
| Session Title: Toward Universal Design for Evaluation: Successes and Lessons Learned in Varied Contexts |
| Panel Session 463 to be held in Sebastian Section I2 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Special Needs Populations TIG |
| Chair(s): |
| Jennifer Sulewski, University of Massachusetts Boston, jennifer.sulewski@umb.edu |
| Discussant(s): |
| Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu |
| Abstract: Universal design refers to designing products or programs so that they are accessible to everyone. Originally conceived in the context of architecture and physical accessibility for people with disabilities, the concept of Universal Design has been adapted to a variety of contexts, including technology, education, and the design of programs and services. This panel will address the idea of Universal Design for Evaluation, drawing on the panelists' individual experiences conducting research with people with and without disabilities. Each panelist will briefly present on what he or she has learned, in the course of his or her own research, about how best to design evaluations to be inclusive of everyone, followed by a discussion of the cross-cutting issues and lessons and what they mean for the evaluation field. |
| Session Title: Context-Sensitive Evaluation: Lessons Learned From Large-scale Education Initiatives |
| Panel Session 464 to be held in Sebastian Section I3 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG |
| Chair(s): |
| Ann House, SRI International, ann.house@sri.com |
| Discussant(s): |
| Jon Price, Intel, jon.k.price@intel.com |
| Abstract: Thomas "Tip" O'Neill, a longtime Speaker of the House in the U.S. Congress, famously declared, "All politics is local." Although many policies and funding decisions are made at the larger state and federal levels, ultimately the impact of legislation is felt in the form of pothole repair, snowplowing, and a vast range of government functions. In the end, government must work for the people at home. The question we will address in this panel is a corollary to O'Neill's statement: when considering the challenges of implementing large-scale education programs in vastly different contexts, is all education local? How can a state-wide or global program work at the local level of a single school or a single classroom? What challenges do evaluators of these programs face? And what strategies do these programs use to make their programs work at the local level? |
| Session Title: Assessing Evaluation Needs: Multiple Methods and Implications for Practice |
| Panel Session 465 to be held in Sebastian Section I4 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Research on Evaluation TIG |
| Chair(s): |
| Arlen Gullickson, Western Michigan University, arlen.gullickson@wmich.edu |
| Discussant(s): |
| Frances Lawrenz, University of Minnesota, lawrenz@umn.edu |
| Abstract: The Evaluation Center was recently funded by the National Science Foundation to provide evaluation-related technical assistance for one of its programs. In this project's first year, a comprehensive needs assessment was conducted to determine the types of evaluation support that were most needed by the program's grantees. In this panel session, each presenter will describe ways in which evaluation-specific needs were assessed. The first paper describes three distinct metaevaluation efforts. The second discusses how the questions posed for technical assistance were used as means for identifying needs. The last paper discusses how existing data on evaluation practices among the target audience were mined and combined with results from document reviews and interviews to provide another perspective on needs for evaluation support. Together, the panel offers a broad picture of how to assess needs and gives a rich lesson in how evaluators can better assist their clients and improve their practice. |
| Session Title: Community as Context: Evaluation of Comprehensive Community Initiatives |
| Panel Session 466 to be held in Sebastian Section K on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Presidential Strand |
| Chair(s): |
| Teresa Behrens, The Foundation Review, behrenst@foundationreview.org |
| Discussant(s): |
| Teresa Behrens, The Foundation Review, behrenst@foundationreview.org |
| Abstract: Recognizing the complexity of communities, many foundations have adopted a strategy of supporting broad or deep change in targeted geographical areas. Commonly called comprehensive community initiatives, or CCIs, these initiatives present unique evaluation challenges, including multiple interventions; often poorly defined or overly ambitious goals; and an evaluator who may become part of the change work. This panel, which includes authors represented in the first issue of The Foundation Review, will explore the ways in which evaluators met these challenges and the ways in which the community perspective can be represented in the evaluation. |
| Session Title: Contextual Challenges of Evaluating Democracy Assistance |
| Panel Session 467 to be held in Sebastian Section L1 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Rebekah Usatin, National Endowment for Democracy, rebekahu@ned.org |
| Abstract: Democracy assistance presents a particular set of difficulties to the field of evaluation at both the macro and micro levels. Often, the conditions under which democracy assistance projects and programs take place are challenging for political reasons. By their very nature, these types of projects and programs are extremely difficult to evaluate and attributing causality is virtually impossible. Nonetheless, both donors and implementers of democracy assistance make considerable attempts to utilize qualitative and quantitative data to determine what difference their projects and programs are making. This panel will explore the challenges of context faced by evaluation staff at three different organizations working abroad. |
| Session Title: Good Practice Guidelines and Examples for Evaluating Global and Regional Partnership Programs (GRPPs) |
| Panel Session 468 to be held in Sebastian Section L2 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Mark Sundberg, World Bank, msundberg@worldbank.org |
| Abstract: The number of Global and Regional Partnership Programs (GRPPs) addressing cross-country issues such as preserving environmental commons and mitigating communicable diseases has grown exponentially since the early 1990s. While the value of periodic evaluation has been recognized, special challenges in applying standard evaluation criteria include: (a) programs evolve considerably over time; (b) results chains are complex and multi-layered; (c) central functions such as governance also need to be assessed; and (d) global partnership aspects require a tailored approach to assessing sustainability of outcomes. In collaboration with several evaluation networks, the World Bank's Independent Evaluation Group has been developing good-practice guidelines, tools and examples for evaluating GRPPs based on a survey of over 60 such evaluations, to complement the previously published Sourcebook for Evaluating GRPPs. This AEA panel, one of two highlighting the findings of this work, focuses on applying standard evaluation criteria to these challenging partnership evaluations. |
| Session Title: Measuring Outcomes and Building Capacity Within the Informal Science Education Program at National Science Foundation: What Every Evaluator Should Know |
| Panel Session 469 to be held in Sebastian Section L3 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG |
| Chair(s): |
| Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov |
| Abstract: This session will provide participants with an overview of the Informal Science Education (ISE) Program at the National Science Foundation (NSF) and describe ongoing efforts to promote evaluation within the ISE program. NSF Program Directors will present findings from two recent portfolio review efforts: (a) a trend analysis of the ISE portfolio conducted by the Portfolio Inquiry Group [Center for Advancement of Informal Science Education (CAISE) and NSF] and (b) a portfolio review of ISE media projects. Two other major ISE evaluation activities will be presented: 1) A primary author of the Framework for Evaluating Impacts of Informal Science Education Projects (the guide that shapes project evaluation within the ISE program) will discuss what evaluators need to know about using this guide to frame evaluations; and 2) The Senior Study Director leading the development and implementation of the ISE project management system will present the Online Program Monitoring System, used to document the collective impact of the ISE portfolio of funded projects, monitor participants' activities and accomplishments, and obtain information that can inform design and implementation of future ISE projects. The implications of these ISE evaluative efforts for the field and for potential ISE project evaluators will be discussed. |
| Session Title: Managing Program Evaluation: Towards Explicating a Professional Practice |
| Multipaper Session 470 to be held in Sebastian Section L4 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Evaluation Managers and Supervisors TIG |
| Chair(s): |
| Don Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov |
| Discussant(s): |
| Ann Maxwell, United States Department of Health and Human Services, ann.maxwell@oig.hhs.gov |
| Laura Feldman, University of Wyoming, lfeldman@uwyo.edu |
| Abstract: Our recent issue of New Directions for Evaluation, Managing Program Evaluation: Towards Explicating a Professional Practice, was designed to make this practice explicit and to address our profession with three focal questions: Should we recognize managing evaluation as a core professional expertise? Should we promote this by legitimizing the preparation of experts and expertise? And, if so, what should be the curriculum, pedagogy, and learning sites? These three questions are the substance of this multipaper presentation. Baizerman begins with an overview of the purpose, core concepts, and insights in the issue and will be followed by the authors of three of the issue's case studies, who will present the central themes of their work. Then the co-leaders of the Evaluation Managers and Supervisors TIG, which sponsored the issue and the session, will serve as discussants. Discussion with presenters and participants about the focal questions, the case studies, and the issue in general will conclude the session. |
| Overview of the Issue |
| Michael Baizerman, University of Minnesota, mbaizerm@umn.edu |
| Baizerman begins the session with an overview of the purpose, core concepts and insights in the issue. |
| Managing Studies Versus Managing for Evaluation Capacity Building |
| Don Compton, Centers for Disease Control and Prevention, dcompton@cdc.gov |
| Compton provides the key themes from his case study of managing for evaluation capacity building at the American Cancer Society. |
| Managing Evaluation in a Federal Public Health Setting |
| Michael Schooley, Centers for Disease Control and Prevention, mschooley@cdc.gov |
| Michael Schooley, an experienced manager of a heterogeneous evaluation group at CDC, discusses managing in a federal health setting. |
| Slaying Myths, Eliminating Excuses: Managing for Accountability by Putting Kids First |
| Robert Rodosky, Jefferson County Public Schools, robert.rodosky@jefferson.kyschools.us |
| Rodosky and Munoz present their case study of managing an evaluation unit in the Jefferson County public schools. |
| Session Title: The Logic Model and Systems Thinking: Can They Co-Exist? |
| Think Tank Session 471 to be held in Suwannee 11 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Extension Education Evaluation TIG |
| Presenter(s): |
| Robert Richard, Louisiana State University, rrichard@agcenter.lsu.edu |
| Abstract: The Logic Model, with its familiar Inputs-Outputs-Outcomes linearity, has found great use within Cooperative Extension program planning and evaluation. As we recognize the chaotic nature of today's world and the emergence of Chaos Theory, the question arises of how we best utilize the linear Logic Model vis-à-vis a world that seems to change with each sunrise. How do ideas such as those suggested by Peter Senge, Otto Scharmer, Dee Hock and others impact the way Extension should engage stakeholders, and plan and deliver programs? How do program evaluation ideas such as Michael Patton's and Glenda Eoyang's blend with concepts promulgated by the Logic Model? This session will consider these questions and others as we explore the place of the Logic Model, Systems Thinking, and other concepts in keeping Extension program planning and evaluation relevant. |
| Session Title: Using Evaluation Methodologies in Context |
| Multipaper Session 472 to be held in Suwannee 12 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Social Work TIG |
| Chair(s): |
| Brian Pagkos, Community Connections of New York, bpagkos@comconnectionsny.org |
| Session Title: Working With Contexts Affecting Student Test Scores: Spatial Patterns, Reference Grouping Effects, and Timing |
| Multipaper Session 473 to be held in Suwannee 13 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Bonnie Swan, University of Central Florida, bswan@mail.ucf.edu |
| Discussant(s): |
| Aarti Bellara, University of South Florida, abellara@mail.usf.edu |
| Session Title: Emerging Models and Concepts in Educational Program Evaluation |
| Multipaper Session 474 to be held in Suwannee 14 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Julieta Lugo-Gil, Mathematica Policy Research Inc, jlugo-gil@mathematica-mpr.com |
| Session Title: Evaluating Educational Partnership Projects: Four Approaches |
| Multipaper Session 475 to be held in Suwannee 15 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Joy Sotolongo, North Carolina Partnership for Children, jsotolongo@ncsmartstart.org |
| Discussant(s): |
| Cindy Beckett, Independent Consultant, cbevaluate@aol.com |
| Session Title: Lessons Learned From Working With Our Own: Reflections on How Personal Values and Experience Contribute to Working in an Indigenous Context |
| Multipaper Session 476 to be held in Suwannee 16 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Indigenous Peoples in Evaluation TIG |
| Chair(s): |
| Kataraina Pipi, FEM (2006) Limited, kpipi@xtra.co.nz |
| Abstract: Three indigenous evaluators with varying levels of expertise and knowledge in evaluation portray some of their experiences of working within their communities where the context is indigenous, and where evaluation practice aligns with their distinctive cultural values. The evaluators discuss 5-6 cultural values that underpin their practice. They provide examples of where these have been applied in different contexts, and examples of beneficial results for the outcomes of evaluations. |
| Pepeha: My Personal Global Positioning System in Evaluative Action |
| Kirimatao Paipa, Kia Maia Limited, kiripaipa@ihug.co.nz |
| This presentation will explore the way that cultural values and practices influenced this indigenous evaluator in an evaluation about the negative impacts of methamphetamine on indigenous families. Interview techniques used included Maori cultural aspects of rapport-building, showing empathy, encouraging story-telling, acknowledging pain and suffering, and acknowledgement of the journey to well-being. |
| Timely Reporting: Working With and Around Cultural Mores |
| Vivienne Kennedy, VK Associates Limited, vppk@snap.net.nz |
| This presentation will discuss a Kaupapa Maori approach to evaluating a workforce development mentoring program and how cultural norms and practices affect evaluative reporting. Discussion will cover cultural values of time and place versus the project timelines, and appropriate forms of cultural engagement. All this leads to an acknowledgement of how indigenous evaluators work with and around cultural mores. |
| Whanaungatanga: The Cost of Utilizing Relationships and Connections in Evaluation |
| Kataraina Pipi, FEM (2006) Limited, kpipi@xtra.co.nz |
| This presentation will reflect on the importance of relationships that need to be developed and maintained throughout and after the evaluation project. Reflections on lessons learned from a project regarding indigenous approaches to family violence are shared. Discussion will cover the advantages and disadvantages of utilising existing relationships and connections. The cultural and professional costs are considered. |
| Session Title: Considering Developmental Issues as Critical Facets to the Evaluation Context |
| Panel Session 477 to be held in Suwannee 17 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Human Services Evaluation TIG |
| Chair(s): |
| Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu |
| Discussant(s): |
| Katherine Byrd, Claremont Graduate University, katherine.byrd@cgu.edu |
| Abstract: What does training in developmental psychology afford the evaluation community? What do evaluators who work with programs serving children need to know about developmental issues? How do evaluation practices change as a result of incorporating salient developmental issues? The purpose of this panel is three-fold: (1) introduce salient developmental issues relevant to the evaluation of programs serving children and youth; (2) describe the unique developmental issues involved with evaluation methods (e.g., design, measurement, assessment, etc.); and (3) illustrate how these salient developmental issues can be integrated feasibly and efficiently into an existing evaluation framework (i.e., the Centers for Disease Control and Prevention's framework). Ultimately, considering developmental issues as critical facets to the evaluation context may improve evaluation practice, as well as the sensitivity of the evaluation for detecting program effects. |
| Roundtable Rotation I: Forming a Topical Interest Group (TIG) for Internal Evaluation |
| Roundtable Presentation 478 to be held in Suwannee 18 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the AEA Conference Committee |
| Presenter(s): |
| Kathleen Tinworth, Denver Museum of Nature and Science, kathleen.tinworth@dmns.org |
| Wendy DuBow, University of Colorado at Boulder, wendy.dubow@colorado.edu |
| Boris Volkov, University of North Dakota, bvolkov@medicine.nodak.edu |
| Abstract: Recent lively discussion on the AEA listserv and amongst colleagues nationwide has solidified the suspicion that there are indeed distinct characteristics to internal evaluation. Issues of ethics, politics, and practice play out in unique and sometimes challenging ways. Many internal evaluators are 'departments of one,' and have few opportunities to address and explore their unique role. Join internal and external evaluators alike to discuss the role of internal evaluation, its strengths, weaknesses, challenges, and importance. We will discuss whether or not forming an AEA TIG would provide community, support and focus for this subset of evaluators. Because internal evaluators work in a wide variety of settings, we will explore the extent of our commonalities to see if they justify a TIG. One of the co-hosts has experience forming an AEA TIG and will share those insights as well. All interested parties are welcome to attend, no matter your perspective. |
| Roundtable Rotation II: Challenges and Benefits of an Internal Evaluator: Defining Roles and Responsibilities for Optimal Effectiveness |
| Roundtable Presentation 478 to be held in Suwannee 18 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the AEA Conference Committee |
| Presenter(s): |
| Leslie Aldrich, Massachusetts General Hospital, laldrich@partners.org |
| Danelle Marable, Massachusetts General Hospital, dmarable@partners.org |
| Erica Clarke, Massachusetts General Hospital, esclarke@partners.org |
| Adriana Bearse, Massachusetts General Hospital, abearse@partners.org |
| Abstract: This roundtable will focus on the role of an internal evaluator for a hospital-based center that supports roughly 20 community health programs and projects. The benefits and drawbacks of being an internal evaluator will be discussed, as will situations where use of an internal evaluator might be particularly beneficial for programs. Organizational context and politics play an important part in defining the role of an internal evaluator, as organizations often use evaluators as program managers, technical experts, or community liaisons. Issues addressed will include: roles and responsibilities; objectivity; flexibility and structure; relationship building and staff acceptance; funding and cost effectiveness; and capacity building. Participants will be encouraged to discuss the pros and cons of their own experiences as internal evaluators, and will be asked to think critically about the qualities of successful internal evaluators and how to manage and negotiate conflicts that arise. |
| Roundtable Rotation I: Using Evaluation to Enhance Program Implementation and Effectiveness in the National Institutes of Health (NIH) Context |
| Roundtable Presentation 479 to be held in Suwannee 19 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Health Evaluation TIG and the Government Evaluation TIG |
| Presenter(s): |
| Deshiree Belis, National Institutes of Health, belisd@mail.nih.gov |
| Rosanna Ng, National Institutes of Health, ngr@mail.nih.gov |
| James Peterson, National Institutes of Health, petersonjm2@mail.nih.gov |
| Linda Piccinino, National Institutes of Health, piccininol@mail.nih.gov |
| Madeleine Wallace, National Institutes of Health, wallacem2@mail.nih.gov |
| Abstract: The purpose of evaluation at NIH includes assessing progress toward achieving program objectives, and examining a broad range of information on program performance and its context. The diversity of objectives, scientific scope and funding levels at NIH's 27 institutes and centers creates its own set of evaluation challenges. Biomedical research programs take time to produce outcomes, and measuring progress in science can be difficult. Applying evaluation methodologies to some programs therefore requires added planning and strategic thinking. The Evaluation Branch at NIH strives to enhance program implementation and effectiveness by sharing technical expertise and practical experience with the NIH evaluation community. The presentation will cover examples of methodologies used for different types of evaluations, such as needs assessments, process and outcome evaluations, and feasibility studies. The aim is to use context-sensitive evaluation to foster accountability and transparency in program implementation, and to disseminate actionable evidence to policymakers and stakeholders. |
| Roundtable Rotation II: Framing Contextual Issues in an Outcome Monitoring Project: The Role of Process Monitoring |
| Roundtable Presentation 479 to be held in Suwannee 19 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Health Evaluation TIG and the Government Evaluation TIG |
| Presenter(s): |
| Elizabeth Kalayil, MANILA Consulting Group Inc, ehk2@cdc.gov |
| Tobey Sapiano, Centers for Disease Control and Prevention, gvf8@cdc.gov |
| Andrea Moore, MANILA Consulting Group Inc, dii7@cdc.gov |
| Ekaterine Shapatava, Northrop Grumman, fpk7@cdc.gov |
| Tanesha Griffin, Northrop Grumman , tgg5@cdc.gov |
| Gary Uhl, Centers for Disease Control and Prevention, gau4@cdc.gov |
| Abstract: The Centers for Disease Control and Prevention (CDC) is conducting the Community-based Organizations Behavioral Outcome Project (CBOP), a longitudinal outcome monitoring study on three group-level evidence-based HIV prevention interventions (EBIs) being implemented nationally by CDC directly-funded community-based organizations (CBOs). This roundtable will focus on the experiences and challenges with process monitoring in an outcome monitoring project. Process monitoring through CBOP has highlighted contextual issues including variation in the delivery of EBIs in real-world settings, which may or may not influence behavioral outcomes. Participants will discuss the extent to which process monitoring should be incorporated in studies such as CBOP. This dialog will contribute to the knowledge base in the field of evaluation by providing a forum for evaluators to exchange ideas on the optimal ways for collecting process monitoring data and how these data can be used to interpret behavioral outcomes, which are key factors in determining intervention effectiveness. |
| Roundtable Rotation I: Who Sets the Goals for K-12 Instructional Coaching Programs? Evaluating State, District and School Influences on the Implementation and Impact of a School-based Coaching Program |
| Roundtable Presentation 480 to be held in Suwannee 20 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Presenter(s): |
| Mary Jean Taylor, MJT Associates Inc, mjstaylor@aol.com |
| Veronica Gardner, MJT Associates Inc, v@veronicagardner.com |
| Abstract: Instructional coaching and mentoring are strategies that have inherent appeal and would appear to be relatively easy to implement. In fact, many districts across the country have implemented coaching programs under a variety of titles, but with vague goals. The work of coaches is often viewed as hard to quantify and too far removed from students to link coaching and student achievement. This discussion explores some of the ways that state, district, school and coach influences either support and reinforce goals related to student learning or redirect and subvert the potential for change. The discussion is based on data from a three-year evaluation of an instructional coaching program in a large western U.S. school district. The program was eliminated when the economic situation changed, but the vestiges that remain are decentralized and relatively isolated from the influence of the district administrative structure. Even though changes in state funding and policy resulted in program instability and undermined the potential for impact, the evaluation detected positive relationships between coaching and student achievement in the elementary schools. |
| Roundtable Rotation II: Needs-Based Professional Development: Evaluating the Effects of Response to Intervention (RtI) Training Among Coaches and School Psychologists |
| Roundtable Presentation 480 to be held in Suwannee 20 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Presenter(s): |
| Amanda March, University of South Florida, amandamarch8@hotmail.com |
| Kevin Stockslager, University of South Florida, kstocksl@mail.usf.edu |
| Abstract: Evaluation of processes used to enhance student outcomes in schools is essential to determine what activities should be supported, changed, or terminated within the educational system. National policies such as the No Child Left Behind Act (NCLB) of 2001 and the reauthorization of the Individuals with Disabilities Education Act (IDEIA) of 2004 require school staff to use a Response to Intervention (RtI) framework of service delivery to enhance outcomes for all students. However, a significant amount of professional development (PD) for school staff, such as psychologists and RtI coaches, is required to successfully implement RtI processes. This evaluation utilizes fundamentals of both the management-oriented and the participant-oriented approaches to evaluate the PD offered to school psychologists and RtI coaches in one Florida school district. This roundtable will discuss the outcomes of the evaluation, how findings were used to enhance PD activities, and implications for future PD evaluations in schools. |
| Roundtable Rotation I: Evaluating a Longitudinal Group-Randomized Sexual Abstinence Program: Approach and Challenges |
| Roundtable Presentation 481 to be held in Suwannee 21 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Presenter(s): |
| Ann Peisher, University of Georgia, apeisher@uga.edu |
| Virginia Dick, University of Georgia, vdick@cviog.uga.edu |
| Katrina Aaron, Augusta Partnership for Children Inc, kaaron@augustapartnership.org |
| Robetta McKenzie, Augusta Partnership for Children Inc, rmckenzie@augustapartnership.org |
| Abstract: This community-based evaluation research seeks to determine the effects of a comprehensive saturation approach to reducing premarital sexual activity and pregnancy among middle school youth. Research suggests more scope in abstinence programming and more rigorous evaluation can yield significant change (Hauser, 2004). While this evaluation-intensive initiative offers increased scope and a more rigorous evaluation, it is recognized that conducting evaluation research in the real world poses challenges. Securing adequate groups for sampling; monitoring program/initiative fidelity; maintaining timely, accurate data collection for a longitudinal design; and planning and ensuring quality statistical analysis that accounts for varying group comparability, potential attrition, and missing data are all evaluation issues being addressed in this five-year evaluation research effort. Working collaboratively with community partners to design and implement the initiative and the evaluation research can improve the capacity of the evaluation team to handle these challenges. This session will describe the approach and evaluation tools used, discuss the challenges, and entertain questions and suggestions from peer evaluators. |
| Roundtable Rotation II: Is Anyone Listening? Evaluating an At-risk Youth Mentoring Program |
| Roundtable Presentation 481 to be held in Suwannee 21 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Presenter(s): |
| Corina Owens, University of South Florida, cowens@coedu.usf.edu |
| George MacDonald, University of South Florida, macdonal@coedu.usf.edu |
| Abstract: Middle school students, specifically at-risk middle school students, need encouragement and guidance in navigating the treacherous terrain of adolescence. One way of assisting students through this difficult time in their lives is to involve them in mentoring programs with carefully selected and properly trained adults from the surrounding community. This paper describes the methods used to evaluate the effectiveness of a newly constituted mentoring program utilizing the Model for Collaborative Evaluations (MCE; Rodriguez-Campos, 2005). The MCE focuses on a set of six interactive components that encourage and promote collaboration among all evaluation team members. A collaborative approach, specifically the MCE, was chosen to actively engage stakeholders in the evaluation process in order to transform this formative evaluation into a learning and growing experience to better serve at-risk youth in a middle school setting. |
| Session Title: Advancing the Research and Culturally Responsive Evaluation Enterprise in Historically Black Colleges and Universities (HBCU) for Global Justice |
| Think Tank Session 482 to be held in Wekiwa 3 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Multiethnic Issues in Evaluation TIG |
| Presenter(s): |
| Leona Johnson, Hampton University, leona.johnson@hamptonu.edu |
| Discussant(s): |
| Stella Hargett, Morgan State University, evaluations561@aol.com |
| Marie Hammond, Tennessee State University, mshammond@cpsy.com |
| Ruth Greene, Johnson C Smith University, rgreene@jcsu.edu |
| Kevin Favor, Lincoln University, kfavor@lincoln.edu |
| Warren Gooden, Cheyney State University, doctorayo@comcast.net |
| Abstract: This Think Tank allows the capacity building initiative, commenced by the American Evaluation Association and supported substantially by the National Science Foundation, to move forward. The planning grant, through which six HBCUs were able to direct an assessment of their campus communities' available pool of talent, interest, resources, support, needs, and expectations, proved valuable in identifying the degree of preparedness and the challenges faced in actualizing those models deemed most attractive by their campus collaborators. Five contextual questions that encapsulate the concerns tapped by the planning effort are to be posed to five groups of attendees. Each group will be facilitated with the intent of reporting ideas promising for 1) maximizing collaborations intra- and inter-institutionally, 2) obtaining professional development for whom, 3) recruiting/retaining targeted students, 4) infusing cultural knowledge and familiarity into the institutional community, and 5) addressing gate-keeping courses and administrative challenges. |
| Session Title: Context, Culture and Evaluation: Ethics, Politics, and Appropriateness |
| Multipaper Session 483 to be held in Wekiwa 4 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Paula Bilinsky, Independent Consultant, pbilinsky@hotmail.com |
| Discussant(s): |
| Paula Bilinsky, Independent Consultant, pbilinsky@hotmail.com |
| Session Title: Involving Stakeholders in Evaluations: Alternative Views |
| Multipaper Session 484 to be held in Wekiwa 5 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Chair(s): |
| Wes Martz, Kadant Inc, wes.martz@gmail.com |
| Session Title: New Evaluation Techniques For Estimating the Impacts of Science and Technology Innovations |
| Multipaper Session 485 to be held in Wekiwa 6 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Research, Technology, and Development Evaluation TIG and the Costs, Effectiveness, Benefits, and Economics TIG |
| Chair(s): |
| Jerald Hage, University of Maryland, hage@socy.umd.edu |
| Abstract: Government concern about demonstrating the value of investments in science and technology has been heightened by the current economic crisis. This panel presents three novel approaches for evaluating the returns on investment in research and technology development from three distinct government agencies, two in the United States and one in Canada. Together these papers illustrate the importance of developing new measures of the benefits of science and technology (S&T) innovations that move beyond the traditional economic measures of the dollar value of improved productivity and revenue from sales. The methods also address health, environmental, security, and knowledge benefits in quantitative as well as qualitative ways, and get at intermediate impacts as well as global impacts, which helps attribute benefits to specific S&T programs. |
| A Credible Approach to Benefit-Cost Evaluation for Federal Energy Technology Programs |
| Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov |
| Rosalie Ruegg, TIA Consulting, ruegg@ec.rr.com |
| This paper describes a methodology that improves upon an already credible approach developed for a 2001 National Research Council study, "Energy Research at DOE: Was It Worth It?" Three benefit-cost studies using this modified approach will be completed by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy in 2009. The economic performance metrics calculated are net benefits, the benefit-cost ratio, and the internal rate of return. Benefits and costs for selected technology "winners" are calculated and compared against the next best alternative. Additionally, an innovative "cluster approach" is used that compares benefits of larger elements of a program to investment costs of the entire program. Environmental and security benefits are also assessed, as are knowledge benefits. In contrast to the 2001 NRC study, the modified approach requires a case-by-case assessment of an array of ways additionality can occur, that is, the difference that DOE made in the outcome. |
| Techniques for Evaluating Potential Benefits of New Scientific Instruments |
| Jonathan Mote, University of Maryland, jmote@socy.umd.edu |
| Aleia Clark, University of Maryland, alclark@socy.umd.edu |
| Jerald Hage, University of Maryland, hage@socy.umd.edu |
| This paper proposes a technique for evaluating the potential impacts of new scientific instruments in a way that avoids the pitfalls of "economic-only" cost-benefit analysis and meets the needs of the customer organization and Congress. The evaluation should convince Congress to fund a new suite of instruments, the Hyperspectral Sounder (HES), on a new weather satellite to be launched by the National Oceanic and Atmospheric Administration (NOAA). The evaluation will focus on improvements in warning time for severe weather events using more localized forecasting. More warning time will result in saved lives and reduced health consequences from sudden decreases in air quality. The context of the evaluation requires dealing with how the collection of regional weather data can be made compatible with current collection systems, and, of course, the question of what evidence can be marshaled for a system that is not yet operational. |
| A New Evaluation Strategy for Measuring the Returns on Investments in Medical Research: The Meso Level of the Treatment Sector |
| Jerald Hage, University of Maryland, hage@socy.umd.edu |
| Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov |
| Typically, health evaluations are conducted at either the micro level of a particular treatment or the macro level of a series of health benefits. With a small grant from the Canadian Academy of Health Sciences, we developed a new strategy that allows for the synthesis of evaluation research findings from a variety of studies, treatment sector by treatment sector. The specific metrics of the framework are 1) health care impact by stage in the treatment process; 2) research investment by arena within the production of medical knowledge in the specific treatment sector; 3) contributions to scientific knowledge; 4) network gaps in the production of innovative treatment protocols; and 5) economic and social benefits of medical research. Two unusual features are the recognition that the advantages of alternative kinds of research can be estimated and the potential for a "valley of death" in the transfer of medical research into health care products. |
| Session Title: Contextualizing the Evaluand: Planning and Implementing an Evaluation of the Injury Control Research Center (ICRC) Program |
| Multipaper Session 486 to be held in Wekiwa 7 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Research, Technology, and Development Evaluation TIG |
| Chair(s): |
| Sue Lin Yee, Centers for Disease Control and Prevention, sby9@cdc.gov |
| Discussant(s): |
| Thomas Bartenfeld, Centers for Disease Control and Prevention, tbartenfeld@cdc.gov |
| Abstract: The context of a program and the perspectives of key stakeholders introduce a set of assumptions that influence the planning and implementation of an evaluation and the ultimate utility of the evaluation findings. In evaluations of research and technology programs, systematic attention to these assumptions yields an evaluation that thoughtfully addresses the competing realities of different contexts. In 2008, CDC's National Center for Injury Prevention and Control (NCIPC) conducted a portfolio evaluation of 12 Injury Control Research Centers (ICRCs) using the CDC Framework for Program Evaluation, a utilization-focused planning tool. The evaluation team will discuss the dilemmas and resulting solutions that arose from addressing the myriad contexts in stakeholder engagement, clarifying the evaluation focus, data collection and analysis, and communicating findings. In closing, we will offer lessons learned that will be insightful for any evaluator of research and technology seeking to maximize the utility of their evaluation. |
| Negotiating Diverse Contexts and Expectations in Stakeholder Engagement |
| Sue Lin Yee, Centers for Disease Control and Prevention, sby9@cdc.gov |
| In most evaluations, the contexts and perspectives of key stakeholders overlap and often compete with one another. Conducting an evaluation that is meaningful and useful to all stakeholders requires an understanding of each stakeholder's expectations from the beginning and a willingness to revisit them throughout the evaluation. In the ICRC Portfolio Evaluation, the primary stakeholders are university grantees conducting research, training, and coordination of injury activities, and the funder, the CDC National Center for Injury Prevention and Control (NCIPC). Secondary stakeholders also play an important role in assessing and providing program recommendations. To negotiate these diverse contexts and expectations, the ICRC Portfolio Evaluation Workgroup was established to guide the planning, implementation, and use of evaluation findings. This presentation will describe the perspectives of the major stakeholders and the contexts in which they operate and offer strategies for sustained interaction, despite the reality of varied priorities and power differentials. |
| Clarifying the Evaluation Focus in a Complex Program Context |
| Howard Kress, Centers for Disease Control and Prevention, hak6@cdc.gov |
| This presentation describes the iterative and dynamic processes undertaken to focus the purpose of the ICRC Portfolio Evaluation. Specifically, we developed the following tools to guide our understanding of the program context and that of the key stakeholders: (1) a hierarchical tree of the evaluation questions, (2) a program conceptual model, and (3) two logic models that describe the ICRC program. We will describe the iterative process of developing, vetting, and validating the evaluation questions with the logic models, and discuss the manner in which these tools laid the groundwork for subsequent phases of the evaluation. The presentation will close with lessons learned that should be helpful for other research and technology evaluations seeking to clarify their evaluation focus as well as negotiate complex program context. |
| Considering Context in Data Collection and Analysis |
| Jamie Weinstein, MayaTech Corporation, jweinstein@mayatech.com |
| The ICRC Portfolio Evaluation Team addressed the context and perspectives of the stakeholders in identifying the most appropriate data collection and analysis methods. The complex nature of the program context seemed best explored through qualitative data collection methods. The evaluation employed a four-phase data collection approach, in which each phase was designed to meet the specific needs of the evaluation and maximize the utility of the findings. Qualitative data were collected through site visits to two centers, teleconference interviews with each of the twelve participating centers, and teleconference interviews with past and current CDC staff. At every stage of the data collection and analysis, iterative data analyses ensured that the evaluation questions and purpose linked back to the evaluation goals and the needs of the key stakeholders. Challenges faced during the data collection process will be discussed and lessons learned will be shared. |
| Contextual Influences and Constraints on Communicating Findings |
| Kristianna Pettibone, MayaTech Corporation, kpettibone@mayatech.com |
| A critical component of any evaluation is to share findings with stakeholders. The ICRC Portfolio Evaluation involved multiple stakeholders who brought an array of perspectives on how the findings should be shared. As the funder, CDC's National Center for Injury Prevention and Control provided the primary context for determining the utility of the evaluation, which initially was to produce an internal report for documenting accountability and identifying areas for program improvement. Involvement of stakeholders such as the ICRC directors and CDC staff introduced another set of contextual assumptions that influenced decisions related to sharing findings. Finally, in conducting the evaluation and identifying recommendations for program improvement, we discuss other potential uses of the evaluation findings. This presentation examines the influence and constraints that key stakeholders introduced on sharing findings and proposes strategies for evaluators of research and technology on managing these competing contexts. |
| Session Title: Using Qualitative Methods to Evaluate Military Family Support Programs |
| Multipaper Session 487 to be held in Wekiwa 8 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Qualitative Methods TIG |
| Chair(s): |
| Rhoda Risner, United States Army, rhoda.risner@conus.army.mil |
| Session Title: Climate Change Mitigation and the World Bank |
| Panel Session 488 to be held in Wekiwa 9 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Environmental Program Evaluation TIG |
| Chair(s): |
| Cheryl Gray, World Bank, cgray@worldbank.org |
| Abstract: This session assesses the impact of three World Bank Group-supported activities on reducing greenhouse gas (GHG) emissions. Two of these activities - support for industrial energy efficiency and for solar power in China - had explicit GHG reduction goals. The third - support for protected areas in tropical forests - was motivated by biodiversity concerns but offers lessons for the emerging agenda on reducing emissions from deforestation. |
| Session Title: Evaluators: It's all in the Method! |
| Multipaper Session 489 to be held in Wekiwa 10 on Friday, Nov 13, 9:15 AM to 10:45 AM |
| Sponsored by the Graduate Student and New Evaluator TIG |
| Chair(s): |
| Melinda Davis, University of Arizona, mfd@email.arizona.edu |