Session Title: The Role of Metaevaluation in Promoting Evaluation Quality: National and International Cases
Panel Session 282 to be held in Lone Star A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Presidential Strand
Chair(s):
Leslie Cooksy, University of Delaware, ljcooksy@udel.edu
Discussant(s):
Donald Yarbrough, University of Iowa, d-yarbrough@uiowa.edu
Abstract: Evaluation quality is of primary concern to the evaluation profession. However, different organizations and professionals may conceptualize, operationalize, practice, and use "evaluation quality" differently. This panel focuses on metaevaluations and their role in promoting quality in national and international contexts. The first two presentations will emphasize experiences at the United Nations Children's Fund (UNICEF) and CARE International from the perspectives of those who are or have been managing such metaevaluations. The third presentation will reflect on the experience of conducting guided, external, independent appraisals for the International Labour Organisation and on the challenges and opportunities of assessing evaluation quality over multiple years. The fourth presentation discusses pitfalls associated with applying the Joint Committee's Program Evaluation Standards to written reports and identifies ways to improve consistency in metaevaluation. Together, these presentations allow for exploring theoretical and practical dimensions of evaluation quality in national and international metaevaluation contexts.
Session Title: From Agent-Based Modeling to Cynefin: The ABC's of Systems Frameworks for Evaluation
Multipaper Session 283 to be held in Lone Star B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Mary McEathron, University of Minnesota, mceat001@umn.edu
Discussant(s):
Mary McEathron, University of Minnesota, mceat001@umn.edu
Session Title: I See What You Mean: Applications of Visual Methods in Evaluation
Think Tank Session 284 to be held in Lone Star C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Terry Uyeki, Humboldt State University, terry.uyeki@humboldt.edu
Discussant(s):
Jara Dean-Coffey, jdcPartnerships, jara@jdcpartnerships.com
Terry Uyeki, Humboldt State University, terry.uyeki@humboldt.edu
Abstract: This interactive session will explore the use of visual methods to maximize inclusive evaluation approaches for obtaining input from stakeholders, enhancing and deepening participant engagement, and supporting collaborative problem solving and visioning. The two presenters will share the ways in which they use visual methods, ranging from graphic recording and graphic facilitation to graphic adaptations of concept mapping or multiple-cause diagrams of complex systems or situations. Small groups will discuss the benefits and potential applications of visual methods in their work, implications for effective integration into their practice, and evaluation settings in which visual methods may not be appropriate.
Session Title: Contextual Issues in a Randomized Control Group Evaluation of a School-based Intervention: Fielding an Evidence-based Intervention to Reduce Youth Gun Violence in Chicago
Panel Session 285 to be held in Lone Star D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Wendy Fine, Youth Guidance, wfine@youth-guidance.org
Abstract: This panel will explore how a promising youth development program is being evaluated, with the goal of using the results to establish an evidence-based intervention that reduces youth gun violence, a major problem affecting many American cities. Panelists will discuss the different stakeholder contexts that have left their mark on the evaluation process: a) mobilizing a group of civic-minded funders for a large-scale experimental evaluation; b) establishing program stakeholder buy-in for the experimental evaluation design; and c) the evaluation's impact on program model implementation by collaborating nonprofits. Panel members will highlight challenges and key decisions made along the way, such as 1) the selection process used to identify the most promising program for the evaluation; 2) how risk indices were developed to identify the target student population, a politically sensitive issue; and 3) the service provider's approach to maintaining schools' support for participation.
Session Title: Estimating Rater Consistency: Which Method Is Appropriate?
Demonstration Session 286 to be held in Lone Star E on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Robert Johnson, University of South Carolina, rjohnson@mailbox.sc.edu
Min Zhu, University of South Carolina, helen970114@gmail.com
Grant Morgan, University of South Carolina, praxisgm@aol.com
Vasanthi Rao, University of South Carolina, vasanthiji@yahoo.com
Abstract: When essays, portfolios, or other complex performance assessments are used in program evaluations, scoring the assessments requires raters to make judgments about the quality of each examinee's performance. Concerns about the objectivity of raters' assignment of scores have contributed to the development of scoring rubrics, methods of rater training, and statistical methods for examining the consistency of raters' scoring. These statistical methods include percent agreement and interrater reliability estimates (e.g., Spearman correlation, generalizability coefficient). This session describes each method, demonstrates its calculation, and explains when each is appropriate.
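As a rough illustration of two of the indices named above (a minimal sketch with invented scores, not the presenters' materials or data), the following computes exact percent agreement and a Spearman correlation for two hypothetical raters scoring the same ten essays:

```python
# Minimal sketch (not the presenters' code): two hypothetical raters score
# the same ten essays on a 1-4 rubric; we compute exact percent agreement
# and a Spearman rank correlation, two of the indices named in the abstract.
import numpy as np
from scipy.stats import spearmanr

rater_a = np.array([3, 2, 4, 3, 1, 2, 4, 3, 2, 3])  # invented scores
rater_b = np.array([3, 2, 3, 3, 1, 2, 4, 4, 2, 3])

# Exact percent agreement: share of essays given identical scores.
percent_agreement = np.mean(rater_a == rater_b) * 100

# Spearman correlation: consistency of the rank ordering, which can be
# high even when raters differ systematically in severity.
rho, p_value = spearmanr(rater_a, rater_b)

print(f"Percent agreement: {percent_agreement:.1f}%")
print(f"Spearman rho: {rho:.2f} (p = {p_value:.3f})")
```

The two statistics can diverge: percent agreement penalizes any score difference, while the Spearman correlation tracks only rank order, which is part of why the choice of index should match the question being asked about rater consistency.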
Session Title: Influencing Evaluation Policy and Evaluation Practice: A Progress Report From the American Evaluation Association's (AEA) Evaluation Policy Task Force
Panel Session 287 to be held in Lone Star F on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the AEA Conference Committee
Chair(s):
Patrick Grasso, World Bank, pgrasso45@comcast.net
Discussant(s):
Jennifer Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Abstract: The Board of Directors of the American Evaluation Association (AEA) established the Evaluation Policy Task Force (EPTF) in September 2007 to enhance AEA's ability to identify and influence policies that have a broad effect on evaluation practice and to establish a framework and procedures for accomplishing this objective. The EPTF has issued key documents promoting a wider role for evaluation in the federal government, influenced federal legislation and executive policy, and informed AEA members and others about the value of evaluation through public presentations and newsletter articles. This session will provide an update on the task force's work and invite member input on its plans and actions.
Roundtable Rotation I: Simplifying the Complex: Creating Transparent Evaluation in Multi-institutional Education Partnerships
Roundtable Presentation 288 to be held in MISSION A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Dewayne Morgan, University System of Maryland, dmorgan@usmd.edu
Susan Tucker, Evaluation & Development Associates, sutucker1@mac.com
Jennifer Frank, University System of Maryland, jfrank@usmd.edu
Abstract: Transparency in programmatic outcomes is fast becoming an expectation from federal funding agencies. This presentation will engage participants in a discussion about the challenges associated with evaluating large-scale, multi-institutional projects. Presenters will use their diverse set of experiences and qualifications to offer examples for making evaluation findings relevant to broader education policy and practice, while attending to the expectation for transparency.
Roundtable Rotation II: Evaluating Twenty-First Century Community Learning Centers: Reconciling Evaluation Needs and Constraints at Multiple Systemic Levels
Roundtable Presentation 288 to be held in MISSION A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Elizabeth Whipple, Research Works Inc, ewhipple@researchworks.org
Mildred Savidge, Research Works Inc, msavidge@researchworks.org
Abstract: As State Evaluators for the 21st CCLC program in New York State, we are responsible for reporting on the quality of programs across the state. Initially (in 2006), local programs reported only to the federal government, using an online data collection system that provided information for federal reporting but was not sufficient for state or local purposes. No local evaluator was required. Initial evaluation indicated a need for local evaluation targeted to both state and local needs. Based on our recommendation, all programs were required to have a local evaluator beginning in 2008. Subsequent review of local evaluation reports indicated the need for standardization of reporting to respond to state evaluation needs. This session will review and discuss the needs of decision makers at different systemic levels, soliciting feedback from the group on an evaluation template designed to provide standardized information for addressing local and state-level evaluation needs.
Session Title: Constructing Relevant Guidelines for Disability Program Evaluations
Panel Session 289 to be held in MISSION B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Mary Moriarty, Picker Engineering Program, Smith College, mmoriart@smith.edu
Abstract: This panel brings together a team of experts in program evaluation and disability to discuss critical challenges in providing high quality evaluation of disability-based programs. Our intent is to identify and discuss challenges in meeting standards for quality, measurement, and evidence in disability evaluations from three perspectives: a governmental agency, a disability program director, and an evaluator. Dr. Linda Thurston from the National Science Foundation will provide an overview of NSF/Research in Disability Education evaluation expectations and guidelines. Dr. Joan McGuire from the Center on Postsecondary Education and Disability at the University of Connecticut will discuss the postsecondary institutional perspective, and Dr. Mary Moriarty from the Picker Engineering Program at Smith College will talk about strategies from an evaluator's perspective. The presentation will incorporate a discussion of critical factors in disability program evaluation, including utilizing research-based practices, incorporating an understanding of contextual factors, confidentiality, and standards-based frameworks.
Session Title: The Fight for Evaluation Quality: Perspectives From the Trenches
Panel Session 290 to be held in BOWIE A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Carol Haden, Magnolia Consulting LLC, carol@magnoliaconsulting.org
Abstract: Teaching students about evaluation quality in a graduate class is one thing. Finding ways to ensure it when conducting applied evaluation studies is another. Evaluators can enter into evaluations with idealistic notions of how their work will embody quality. What they find instead is a myriad of challenges that can threaten the quality of every aspect of an evaluation. The job of the evaluator is to anticipate, plan for, and respond to these challenges as they present themselves during evaluation studies. This panel comprises independent evaluation consultants who will share their experiences and perspectives related to evaluation quality, using examples from a variety of studies. Presenters will share insights about how local context influences evaluation quality, the role of relationships in promoting evaluation quality, and evaluators' role in ensuring quality through reporting.
Session Title: Advancing Multiethnic Program Evaluation Through Theory and Practice: An Examination of Culture, Cultural Context, and Culturally Responsive Evaluation
Multipaper Session 291 to be held in BOWIE B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Pamela Frazier-Anderson, Lincoln University, pfanderson@lincoln.edu
Session Title: Who Are Champions, What Is Their Impact, and How Do You Know? Considerations for Advocates, Funders, and Evaluators
Think Tank Session 292 to be held in BOWIE C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Steve Mumford, Organizational Research Services, smumford@organizationalresearch.com
Sarah Stachowiak, Organizational Research Services, sarahs@organizationalresearch.com
Lance Potter, Bill & Melinda Gates Foundation, lance.potter@gatesfoundation.org
Abstract: Champion development is an important aspect of advocacy work. Building relationships with key individuals to support a cause requires time and energy on the part of advocacy organizations; however, the impact of this work may go unexamined. In small groups, participants will explore three key questions related to evaluating champion development and impact: How are champions defined? How can advocates and evaluators track and measure champion actions toward various outcomes? And what are potential challenges and opportunities for evaluating champions? Presenters will introduce the questions by sharing lessons learned from work with advocacy projects in several fields, including a working framework for defining "champions" and potential outcomes, methods for tracking and measuring champion actions and impact, and important considerations for data collection. Discussion will lead toward a deeper understanding of champion development as an advocacy technique and how it can be evaluated.
Roundtable Rotation I: Evaluation of the HIV Treatment Adherence Education Program
Roundtable Presentation 293 to be held in GOLIAD on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Robin Kelley, National Minority AIDS Council, rkelley@nmac.org
Melanie Graham, National Minority AIDS Council, mgraham@nmac.org
Kim Johnson, National Minority AIDS Council, kjohnson@nmac.org
Abstract: This is a multilevel evaluation combining internal process evaluation with external evaluation by evaluation staff. Individual evaluation forms and organizational assessments will be completed, and both individual technical assistance and group-level training will be evaluated. The evaluation revealed the effectiveness of an HIV peer-driven behavior change program applied to HIV/AIDS treatment adherence. The concept was that, through an innovative program that included technical assistance, a person living with HIV could become a role model for other HIV-positive individuals. Findings revealed that organizations that implemented our peer program, and the peers who complied with its core components, demonstrated treatment adherence behavior and a greater level of personal development, including life skills such as a sense of self-efficacy, determination, and confidence. Moreover, with capacity building to shore up the program's infrastructure, peers were able to develop further in their jobs.
Roundtable Rotation II: Measuring Communication Campaign Intermediate Outcomes: Tools and Techniques
Roundtable Presentation 293 to be held in GOLIAD on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Health Evaluation TIG
Presenter(s):
Michael Burke, RTI International, mburke@rti.org
Abstract: Communication campaigns often produce a wide range of outputs that may be connected to a variety of process measures and intermediate outcomes that might be examined. There are numerous ways changes in awareness, attitudes, and behavior can be assessed, and a wide range of behavioral domains that can provide useful evaluation information. For example, although HIV testing or condom use may be the desired outcome, information-seeking behaviors such as ordering materials and visiting a website might be important intermediate indicators of likely changes in behavior. In this session we will discuss several ways evaluators can identify and assess intermediate outcomes, especially in a resource-constrained environment. Issues of quality, timeliness, ownership, and cost will be discussed.
Roundtable Rotation I: Reflection on an Instrument for Capturing School Conditions in Developing Countries
Roundtable Presentation 294 to be held in SAN JACINTO on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Helene Jennings, ICF Macro, helene.p.jennings@macrointernational.com
Abstract: A classroom observation instrument was needed for evaluations aimed at improving schools in developing countries, where many schools are visited over the course of a week or two. A method of rapid appraisal of conditions in each school had to be established and documented in a format that would permit comparisons across schools and provide summaries of indicators of educational quality. A streamlined protocol of "Indicators of Education Adequacy" was developed and field tested. It will be presented in a roundtable setting to gain feedback on the elements itemized in this tool (related to assessing Facilities and Environment, Learning Resources, Student Characteristics, and Teacher Characteristics, as well as a guide to an overall assessment of instructional quality) and on the utility of the instrument. A discussion of other means of capturing such data is encouraged.
Roundtable Rotation II: Assessing Principals' Needs for Professional Development
Roundtable Presentation 294 to be held in SAN JACINTO on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Edith J Cisneros-Cohernour, University of Yucatan, cchacon@uady.mx
Roger Patron-Cortes, Universidad Autonoma de Campeche, roger_patron_cortes@hotmail.com
Abstract: This paper presents the findings of a study assessing school principals' needs for professional development. The research is part of an international evaluation study conducted in Australia, Canada, the United Kingdom, Mexico, Scotland, South Africa, and the United States. Data collection involved qualitative case studies (participant observation, in-depth interviews, focus groups, and document analysis) as well as a survey of beginning principals in three states of southern Mexico.
Session Title: Why Settle for Silos? Four Applications of Social Network Analysis for Building More Effective Organizational Networks and Alignment Around Outreach and New Initiatives
Demonstration Session 295 to be held in TRAVIS A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Presenter(s):
Tom Bartholomay, University of Minnesota, barth020@umn.edu
Abstract: Organizations are increasingly challenged to adapt and respond to a changing world. Additionally, funders are increasingly focused on impacts that require integrated, systems-based interventions. The University of Minnesota Extension has been using social network analysis (SNA) as a means to leverage existing "knowledge capital" between programs, increase alignment, and build new collaborative structures to reach new and shifting goals. This presentation will demonstrate how U of M Extension has used SNA to assess (1) large outreach structures, (2) internal structures that support outreach, (3) existing collaboration levels, and (4) potential frontier networks around target problems or audiences. Demonstrations will include examples of SNA concepts in application, SNA data collection instruments used, maps of network results, interpretation of networks, and how network maps can be useful at different levels of the organization.
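As a generic illustration of the kind of measures such an assessment can produce (a minimal sketch with an invented collaboration roster, not Extension's data or tools), a few lines of Python with networkx compute degree and betweenness centrality, two common indicators of which units bridge organizational silos:

```python
# Illustrative sketch only: a toy collaboration network, not U of M
# Extension's data. Each edge is a reported working relationship between
# program teams; degree and betweenness suggest which teams bridge silos.
import networkx as nx

collaborations = [
    ("4-H Youth", "Nutrition"), ("Nutrition", "Community Health"),
    ("Community Health", "Water Quality"), ("4-H Youth", "Ag Production"),
    ("Ag Production", "Water Quality"), ("Community Health", "Family Dev"),
]

G = nx.Graph(collaborations)

# Degree: how many direct collaborators each program reports.
degree = dict(G.degree())

# Betweenness centrality: programs sitting on many shortest paths are
# potential bridges across otherwise separate clusters ("silos").
betweenness = nx.betweenness_centrality(G)

for program in G.nodes:
    print(f"{program:18s} degree={degree[program]} "
          f"betweenness={betweenness[program]:.2f}")
```

In practice the edge list would come from the kind of data collection instruments the presenter describes, such as surveys asking staff which programs they work with.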
Session Title: Contribution of Technology to Evaluation Practice
Multipaper Session 296 to be held in TRAVIS B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Paul Lorton Jr, University of San Francisco, lorton@usfca.edu
Session Title: Introduction to Designing Needs Assessment Surveys
Skill-Building Workshop 297 to be held in TRAVIS C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Needs Assessment TIG
Presenter(s):
James Altschuld, The Ohio State University, altschuld.1@osu.edu
Yi-Fang Lee, National Chi Nan University, ivanalee@ncnu.edu.tw
Hsin-Ling Hung, University of Cincinnati, hunghg@ucmail.uc.edu
Jeffry White, University of Louisiana, Lafayette, jwhite1@louisiana.edu
Abstract: Many evaluators, while familiar with what needs are and with procedures for assessing them, are less knowledgeable about the unique aspects of, and ways to design, surveys for the endeavor. This short workshop will begin with questions asked of participants regarding how they work with organizations on needs assessment (NA) surveys. From that starting point, a brief overview of the NA process will be given, followed by key features of surveys (general guidelines, typical content areas, types of surveys, formats employed, groupings of questions, inclusion of multiple concerned stakeholders, item formats, scaling approaches, and some analysis principles). Participants will then use typical NA scenarios to try their hand at assorted item-writing tasks. The workshop concludes with a group discussion of the nature of NA surveys, how they may be used, problems encountered, and other related issues.
Session Title: Attending to Context and Situation to Improve Evaluation Process and Reporting
Multipaper Session 298 to be held in TRAVIS D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Session Title: Use of Fidelity Scores in Measuring Outcomes for Children Involved in the Child Welfare System
Panel Session 299 to be held in INDEPENDENCE on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Madeleine Kimmich, Human Services Research Institute, mkimmich@hsri.org
Abstract: Human Services Research Institute (HSRI) explored the use of model fidelity at the case level, using child fidelity scores as a covariate in outcomes analyses within the context of a five-year evaluation. This evaluation examined three strategies implemented across multiple Ohio child welfare agencies: Family Team Meetings, Supervised Visitation, and Kinship Supports. Fidelity to the strategies was used to help explain individual child outcomes via two statistical approaches: regression and ANOVA. By using these two techniques, evaluators were able to compare means across fidelity groups as well as use fidelity as an independent variable in predicting child outcomes. In testing whether a very high level of fidelity is actually needed to achieve the desired results, evaluators are able to learn about effective practice even when high model fidelity is not evident across all areas. This becomes increasingly important in complex service delivery environments such as child welfare.
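The analytic pairing described here can be sketched in a few lines; the example below (entirely fabricated data, not HSRI's models or results) treats a case-level fidelity score first as a continuous predictor of a child outcome and then, binned into groups, as the factor in a one-way ANOVA:

```python
# Minimal illustration of using case-level fidelity scores in outcome
# analysis (all numbers fabricated; not HSRI's data or models).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 200
fidelity = rng.uniform(0, 1, n)                         # case-level fidelity score
outcome = 0.5 + 0.8 * fidelity + rng.normal(0, 0.5, n)  # hypothetical child outcome

df = pd.DataFrame({"fidelity": fidelity, "outcome": outcome})
df["fidelity_group"] = pd.cut(df["fidelity"], bins=[0, 0.33, 0.66, 1.0],
                              labels=["low", "medium", "high"],
                              include_lowest=True)

# Regression: fidelity as a continuous covariate predicting the outcome.
reg = ols("outcome ~ fidelity", data=df).fit()
print(reg.summary().tables[1])

# ANOVA: compare mean outcomes across low/medium/high fidelity groups.
anova = sm.stats.anova_lm(ols("outcome ~ C(fidelity_group)", data=df).fit(), typ=2)
print(anova)
```

The two views complement each other: the group comparison shows whether only "high fidelity" cases improve, while the regression shows whether outcomes rise with fidelity across its full range.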
Session Title: Propensity Score Matching: Further Methodological Development
Multipaper Session 300 to be held in PRESIDIO A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Ning Rui, Research for Better Schools, rui@rbs.org
Discussant(s):
Frederick Newman, Florida International University, newmanf@fiu.edu
Session Title: Improving Medical and Prevention Services Through Continuous Evaluation and Organizational Learning
Multipaper Session 301 to be held in PRESIDIO B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Health Evaluation TIG
Chair(s):
John Bosma, WestEd, jbosma@wested.org
Discussant(s):
Gayle Sulik, Texas Woman's University, gsulik@twu.edu
Session Title: Outcome Assessment in Substance Abuse and Mental Health
Multipaper Session 302 to be held in PRESIDIO C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Diana Seybolt, University of Maryland, Baltimore, dseybolt@psych.maryland.edu
Roundtable Rotation I: Small Foundations With Big Learning Agendas: A Case of Using Analysis of Past Grant Making to Support Future Organizational Learning
Roundtable Presentation 303 to be held in BONHAM A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
William Bickel, University of Pittsburgh, bickel@pitt.edu
Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu
Julie Meredith, University of Pittsburgh, julie.meredith@gmail.com
Abstract: Foundations have several core avenues through which they can contribute to social good. Beyond direct grant making, additional possibilities include systematic learning from past work to inform and strengthen future grant making and to support the building of field knowledge. Large foundations make sizable investments in sophisticated knowledge capture and evaluation infrastructure and processes in this regard. But what is feasible in more modestly sized foundations? The authors present a case study of a regional foundation that commissioned a university-based evaluation group to undertake a retrospective review of selected grants in order to better understand grantee evaluability broadly and grantee capacities to document performance, and to identify ways the foundation can better support learning from its grant making going forward. The low-cost review yielded a number of specific recommendations regarding modifications to foundation outcome targets for grantees, redesigns of foundation infrastructure to support learning, and actions relevant to building grantees' long-term capacity to document their results.
Roundtable Rotation II: Challenges in Developing Multi-level Logic Models
Roundtable Presentation 303 to be held in BONHAM A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Barbara Wauchope, University of New Hampshire, barb.wauchope@unh.edu
Curt Grimm, University of New Hampshire, curt.grimm@unh.edu
Abstract: A multi-level logic model describing a foundation's grant-making initiative and the projects of its grantees is a useful tool to guide the evaluation of activities, outputs, and outcomes of both the initiative and its projects. When such models work well, the initiative model and individual project models link together logically to describe the contribution of each grantee's project to the initiative overall. In actual work with foundations, however, we have found that multi-level model development is not always as successful a process as we would like it to be. This paper will describe the challenges faced by the evaluators of a current five-year regional initiative in developing a logic model that works for both the foundation and its grantees. The evaluators will invite a discussion of the factors involved and of strategies that could make the process easier, with better results for all.
Session Title: Insights Into Foundation Evaluation
Multipaper Session 304 to be held in BONHAM B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Ellie Buteau, Center for Effective Philanthropy, ellieb@effectivephilanthropy.org
Session Title: Current Topics in Educational Evaluation: An Eclectic Set of Noteworthy Projects
Multipaper Session 305 to be held in BONHAM C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
James S Sass, Research Support Services, jimsass@earthlink.net
Session Title: Methods and Models for Evaluating Pre-Kindergarten and School Readiness Programs
Multipaper Session 306 to be held in BONHAM D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Katie Dahlke, Learning Point Associates, katie.dahlke@learningpt.org
Discussant(s):
James P Van Haneghan, University of South Alabama, jvanhane@usouthal.edu
Session Title: Grappling With Uncertainty in Innovative and Complex Settings: Weaving Quality in Developmental Evaluation
Panel Session 307 to be held in BONHAM E on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Systems in Evaluation TIG and the Indigenous Peoples in Evaluation TIG
Chair(s):
Syd King, New Zealand Qualifications Authority, syd.king@nzqa.govt.nz
Discussant(s):
Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: They say that in life only three things are certain: birth, death, and taxes. We say that in Developmental Evaluation the only certainty is uncertainty. In this session we present the challenges associated with ensuring evaluation quality in innovative and complex situations. Through a Developmental Evaluation lens, we explore the process of moving from a conceptual vision to on-the-ground evaluation practice in emergent and dynamic contexts. We reflect on what it takes to systematically weave quality into engagement processes, data collection, and evaluative thinking in settings where uncertainty reigns.
Session Title: Tips From the Trenches: The Role of the Evaluator in Designing a Quality Evaluation
Panel Session 308 to be held in Texas A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Stanley Capela, HeartShare Human Services of New York, stan.capela@heartshare.org
Discussant(s):
Amy Germuth, EvalWorks LLC, agermuth@mindspring.com
Abstract: The session will focus on how three program evaluators from the government and non-profit sectors developed an approach to designing a quality evaluation. It will provide participants with a variety of tips and techniques for facilitating the process in a way that engages stakeholders and ensures the evaluation provides meaningful information to assist in strengthening program services. As part of the discussion, presenters will also describe barriers they confronted and how they approached these issues to further ensure the quality of the evaluation.
Session Title: The Future of Knowledge Production and Dissemination in Evaluation
Panel Session 309 to be held in Texas B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Chair(s):
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
Discussant(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Abstract: In this panel, we address a broad set of recent developments and their implications for how we think about the production, organization, dissemination, appraisal, and use of knowledge about evaluation theory and practice. Four panelists have each been asked to address the following questions: (1) What are the most significant challenges to refereed, scholarly journals posed by new modes of organizing knowledge production and dissemination? (2) How do globalism and global communications technologies affect knowledge production and dissemination? (3) What role do truth, trust, and expertise play in the creation and dissemination of knowledge through Internet sources? (4) How is knowledge-based expertise developed in the field, who are one's 'peers', and what constitutes appropriate professional training? (5) What is the future role of the university (and the traditional academic disciplines and professional schools) in the knowledge society?
Session Title: Using Mixed Methods to Expand Frameworks for Program Evaluation
Multipaper Session 310 to be held in Texas C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Chair(s):
Rita Fierro, Independent Consultant, fierro.evaluation@gmail.com
Discussant(s):
Virginia Dick, University of Georgia, vdick@cviog.uga.edu
Session Title: Third Generation Research Knowledge Tracking: Citation Analyses
Demonstration Session 311 to be held in Texas D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Alan Porter, Georgia Institute of Technology, alan.porter@isye.gatech.edu
Stephen Carley, Georgia Institute of Technology, stephen.carley@innovate.gatech.edu
Abstract: Tracking the "citation trails" of research publications provides the strongest empirical evidence of research influence. This workshop demonstrates how desktop bibliometric/text-mining software tools can facilitate analyses of Web of Science (WOS) data across three generations. We start at the "second generation": research publications reflecting a body of research (e.g., papers deriving from a particular program or the work of a given research center). From these data, we process the cited references to extract author information and derive subject category information, providing the "first generation" data. Via new, openly available i-macros, we separately capture the citing papers' abstracts from WOS, the "third generation" data. We then consolidate the data and prepare research profiles for each generation. We identify and visualize the generation-spanning networks. Science overlay and science citation maps further elucidate the transfer of knowledge among researchers and across disciplines, institutions, and countries.
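To make the three-generation structure concrete, here is a minimal, hypothetical sketch (toy records and invented IDs, not Web of Science exports or the presenters' software workflow) that links a program's papers to the references they cite and to the later papers that cite them:

```python
# Loose illustration of three-generation citation tracking (toy records,
# not WOS data or the presenters' tools).
import networkx as nx

program_papers = ["P1", "P2"]                           # second generation
cited_refs = {"P1": ["R1", "R2"], "P2": ["R2", "R3"]}   # first generation
citing_papers = {"P1": ["C1"], "P2": ["C1", "C2"]}      # third generation

G = nx.DiGraph()
for paper in program_papers:
    for ref in cited_refs[paper]:
        G.add_edge(paper, ref)      # program paper -> reference it cites
    for citer in citing_papers[paper]:
        G.add_edge(citer, paper)    # later paper -> program paper it cites

# Simple generation-spanning summary for each program paper.
for paper in program_papers:
    reach = len(list(G.predecessors(paper)))   # citing (third-generation) papers
    roots = len(list(G.successors(paper)))     # cited (first-generation) references
    print(f"{paper}: cited by {reach} later paper(s), builds on {roots} reference(s)")
```

In the workflow described above, the same linkage would be built from WOS cited-reference fields and the separately captured citing-paper records before profiling and mapping each generation.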
Session Title: The Cycle of Evidence-based Policy and Practice: Synthesis, Translation, and Evaluation Capacity Building
Panel Session 312 to be held in Texas E on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Evaluation Use TIG, the Organizational Learning and Evaluation Capacity Building TIG, the Alcohol, Drug Abuse, and Mental Health TIG, and the Health Evaluation TIG
Chair(s):
Susan Labin, Independent Consultant, susan@susanlabin.com
Abstract: This panel will include presentations on the major components in the cycle of developing evidence-based policy and practice. The need for synthesis for policy will be explored in terms of the contributions and roles of evaluation and performance measurement. The synthesis method, the core methodology for aggregating findings for evidence-based practice reviews, will be defined, and various types of syntheses will be compared. Using results from synthesis is the impetus for the Centers for Disease Control and Prevention's system for synthesis and translational research, which will be presented by those involved in developing the system. The translational process is further explored in a presentation of the Service to Science Academies (supported by the Substance Abuse and Mental Health Services Administration and the Center for Substance Abuse Prevention), an example of evaluation capacity building to increase the utilization of evidence-based findings in the field and to bring practice into the evidence base.
Session Title: Change Is a Process, Not an Outcome: Implications for Evolving Federal Evaluation Policy
Think Tank Session 313 to be held in Texas F on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the
Presenter(s):
Dianna L Newman, State University of New York at Albany, dnewman@uamail.albany.edu
Discussant(s):
Dianna L Newman, State University of New York at Albany, dnewman@uamail.albany.edu
Anna Lobosco, New York State Developmental Disabilities Planning Council, alobosco@ddpc.state.ny.us
Abstract: This think tank will explore the impact of outcomes-based project logic models on programs intended to promote systemic change, and will inform federal evaluation policy development and refinement. Typical logic models are clearly service-delivery focused, and federal stewardship's singular focus on attaining defined outputs and outcomes has a deleterious effect on effecting and documenting systemic change. The 3Is Model has been used to evaluate programs with systemic change intents and to re-think and refine their logic models. Since systems change is a complicated and intricate process, adjusting to changes in all aspects of the program logic model (including outputs and outcomes) is a feature of change efforts. Lessons learned from use of the 3Is Model will serve as an impetus for a discussion intended to inform evolving federal evaluation policy so that it supports program improvement and the identification of promising practices.
Session Title: Pandemic Influenza and Evaluation Lessons Learned: The H1N1 Outbreak of 2009-2010
Panel Session 314 to be held in CROCKETT A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Elizabeth Harris, EMT Associates Inc, eharris@emt.org
Abstract: CDC-INFO is the Centers for Disease Control and Prevention's unified, integrated contact center for delivering public health information. It responds to public inquiries by phone and e-mail and by sending CDC publications. CDC-INFO has developed a multi-component performance monitoring, quality improvement, and evaluation system that provides continuous feedback to CDC program managers and policy makers. The H1N1 pandemic posed an emergency response need for CDC-INFO and served as a pilot for testing the surveillance measurement capability to be activated in the event of a public health emergency. The panel presents a) the planned approach and purpose, b) the implementation process and a major shift in purpose that emerged from that process, c) the findings and implications, and d) lessons learned.
Session Title: Strategies for Preparing Quality Evaluation Practitioners
Multipaper Session 315 to be held in CROCKETT B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Session Title: Evaluating Support to Poverty Reduction and Gender in Cross-Country Aid Programs
Panel Session 316 to be held in CROCKETT C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Cheryl Gray, World Bank, cgray@worldbank.org
Abstract: This panel highlights how a set of three recent IEG multi-country evaluations tackles assessing the World Bank's support to poverty reduction and gender. The findings are drawn from IEG's evaluations of World Bank support to Poverty and Social Impact Analysis; Gender; and the use of Poverty Reduction Support Credits as instruments for poverty reduction and social outcomes. The panel focuses on standard evaluation criteria and on some insightful tools and approaches used in these challenging multi-country program evaluations.
Session Title: Towards an Understanding of the Role of the Local Evaluator in Federally Funded Demonstration Projects: The Perspectives of Federal Policymakers, Community-based Organizations, and Evaluators
Multipaper Session 317 to be held in CROCKETT D on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Soundaram Ramaswami, Kean University, sramaswa@kean.edu
Session Title: Gender and Human Rights Evaluation
Panel Session 318 to be held in SEGUIN B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Discussant(s):
Donna Podems, OtherWISE, donna@otherwise.co.za
Divya Bheda, University of Oregon, dbheda@uoregon.edu
Abstract: Since the development of the United Nations Evaluation Group (UNEG) Norms and Standards in 2005, UN entities have made efforts to achieve progress on integrating gender equality and human rights in evaluations, given that these two principles are at the heart of UN work and should guide all of its operational activities. Evaluation frameworks for gender-focused programs have not always been in sync with principles associated with the furtherance of a human rights mission. This panel will examine UNIFEM's mandate for evaluation of gender-focused programs, a transformative framework that shifts the focus directly onto human rights, and strategies associated with grassroots movements in such contexts.
Session Title: Reflections From Applying a Complexity Lens to Monitoring and Evaluation
Panel Session 319 to be held in REPUBLIC A on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Tricia Wind, International Development Research Centre, twind@idrc.ca
Abstract: This panel will share reflections and questions emerging from a collaborative study among four action research projects that are experimenting with applying systems and complexity thinking to their monitoring and evaluation systems. The panel will present both broad themes and some practical experience of the projects in modifying their M&E strategies. The panel will include an overview of the study from the International Development Research Centre. It will highlight reflections from one of the participating projects, whose research seeks to understand the working, environmental, and health conditions of informal-sector solid waste workers and their families in Peru. The panel will conclude with reflections from the evaluation consultant who has been working with the Peruvian project to identify the project's outcomes and evaluate the significance of those outcomes.
Session Title: Improving School-Based Health Through Campus Centers, Nursing, and Effective Interventions
Multipaper Session 320 to be held in REPUBLIC B on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Kim van der Woerd, Reciprocal Consulting, kvanderwoerd@gmail.com
Session Title: Five Partners, One Evaluation: A Cohesive Evaluation of the Action Communities for Health, Innovation, Environment Change (ACHIEVE) Healthy Communities Initiative
Panel Session 321 to be held in REPUBLIC C on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Andrea Lee, YMCA of the USA, andrea.lee@ymca.net
Abstract: Five national organizations are partnering on the ACHIEVE (Action Communities for Health, Innovation, EnVironment changE) initiative, supported by CDC's Healthy Communities Program. Since 2008, ninety-three communities nationwide have received funding to convene local leaders to promote health and well-being through policy, system, and environmental change. ACHIEVE partners include the National Association of County and City Health Officials (NACCHO), National Association of Chronic Disease Directors (NACDD), National Recreation and Parks Association (NRPA), and YMCA of the USA (Y-USA), with the Society for Public Health Education (SOPHE) contributing technical assistance and lesson dissemination. While ACHIEVE is predicated on collaboration at the local level, collaboration is also necessary among the national partners to effectively and comprehensively conduct an evaluation. Evaluators from each organization work closely to balance community and organizational needs while ensuring a cohesive evaluation of ACHIEVE. Representatives of each partner organization will present their perspectives on the evaluation plan and lessons learned.