Roundtable: Context, Ethics, and the Participatory Evaluation Approach Among Services for Persons With Special Needs: An Argument for "Relationship-Based Evaluation"
Roundtable Presentation 493 to be held in the Boardroom on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Special Needs Populations TIG
Presenter(s):
Aimee Sickels, Custom Evaluation Services, aimee@customevaluation.com
Amanda Diaczenko, Programs for Exceptional People, amanda.diaczenko@gmail.com
Abstract: This roundtable offers those working in an evaluative capacity with agencies serving persons who have special needs a forum to discuss the context of their work and the ethical issues and concerns involved both in delivering services and in the evaluation process, and to share ideas and seek insight on effective ways to include persons receiving services in that process. The purpose of these conversations is to offer attendees a set of resources and ideas with which to strengthen their own evaluation processes by creating new ways for service providers to involve clients at varying levels. The ultimate goal of the conversation at large is to create a service environment that encourages persons with special needs to become actively involved in their own service management and evaluation processes and to become self-advocates. The proposed approach is being called 'Relationship-Based Evaluation'.

Session Title: The Need for Mixed Methods in Advocacy Evaluation
Panel Session 494 to be held in Panzacola Section F1 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Brian C Quinn, Robert Wood Johnson Foundation, bquinn@rwjf.org
Abstract: Foundations are increasingly supporting advocacy to expand health insurance coverage and achieve other important social goals. As resources devoted to such efforts grow, the need to evaluate them is also growing. No single evaluation methodology can capture the complexity of the advocacy process, assess advocacy outcomes, and describe the contextual factors that influence policy. To evaluate a foundation-sponsored program funding consumer advocacy networks in 12 states to support expanding health insurance coverage, Mathematica Policy Research, Inc. (MPR) designed an evaluation using capacity assessment surveys, focus groups, and site visits with advocates; structured interviews with policymakers; and a social network analysis (SNA) of consumer advocacy coalitions to describe and measure advocacy capacity, advocacy strategies, and policy changes within and across states and over time. In this panel, MPR will show why multiple methods are critical for advocacy evaluation and discuss the challenges of applying SNA to evaluate advocacy coalitions.
A Multi-Method Approach to Evaluating Consumer Advocacy to Expand Health Insurance Coverage
Debra A Strong, Mathematica Policy Research Inc, dstrong@mathematica-mpr.com
As programs supporting advocacy to achieve social goals grow, the need to evaluate them is also growing. Funders need to ensure their dollars are invested wisely. They also want to use evaluation to maximize opportunities for grantees to succeed in specific advocacy efforts. To meet these needs, advocacy evaluations must provide real-time feedback, emphasize interim outcomes, use meaningful measures, and above all, be flexible. No single evaluation methodology can meet these ambitious goals. The first paper in this panel will describe how and why Mathematica Policy Research designed a multi-method evaluation of a 12-state program to support consumer advocacy coalitions, why specific methods were chosen, and how results have been developed and provided to the foundation, advocates, and other stakeholders. It will discuss whether and to what extent evaluation results provided by different methods appear to be useful to the foundation, to advocates, and to technical assistance providers.
Applying Social Network Analysis to Understand Advocacy Coalitions
Todd C Honeycutt, Mathematica Policy Research Inc, thoneycutt@mathematica-mpr.com
As part of an evaluation of a consumer advocacy project, Mathematica Policy Research (MPR) is using social network analysis (SNA) to assess the structure and operation of 12 advocacy leadership coalitions. SNA is an important evaluation tool for understanding and quantifying relationships among organizations, showing how networks change over time, and considering whether relationships are associated with program outcomes. The insights that SNA brings to advocacy networks can be valuable only if results are adapted and interpreted to meet advocates' needs. This paper will explain how MPR is applying SNA, using data from coalition meeting records and a survey of coalition members, to understand how organizations work together, share values, and make contact with policymakers. We will discuss the challenges of using SNA to provide formative feedback and the adaptations made to keep results meaningful for advocates.
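As a rough illustration only (not part of the session materials or MPR's actual analysis), the sketch below shows the kind of network measures such an analysis might compute from coalition tie data, using the Python networkx library; the organization names and ties are invented.

    # Hypothetical sketch: quantify how densely coalition organizations are
    # connected and which organization is most central, then compare two
    # survey waves to see how the network changed over time.
    import networkx as nx

    # Invented co-working ties reported by coalition members in two waves
    wave_2008 = [("Org A", "Org B"), ("Org B", "Org C"), ("Org C", "Org D")]
    wave_2009 = wave_2008 + [("Org A", "Org C"), ("Org D", "Org E"), ("Org B", "Org E")]

    for year, ties in [(2008, wave_2008), (2009, wave_2009)]:
        g = nx.Graph()
        g.add_edges_from(ties)
        density = nx.density(g)                 # share of possible ties that exist
        centrality = nx.degree_centrality(g)    # relative connectedness of each org
        hub = max(centrality, key=centrality.get)
        print(f"{year}: density={density:.2f}, most central={hub}")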

Session Title: Influencing Evaluation Policy and Evaluation Practice: A Progress Report From The American Evaluation Association's (AEA) Evaluation Policy Task Force
Panel Session 495 to be held in Panzacola Section F2 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the AEA Conference Committee
Chair(s):
William Trochim, Cornell University, wmt1@cornell.edu
Patrick Grasso, World Bank, pgrasso@worldbank.org
Discussant(s):
Eleanor Chelimsky, Independent Consultant, oandecleveland@aol.com
Leslie J Cooksy, University of Delaware, ljcooksy@udel.edu
Katherine Dawes, United States Environmental Protection Agency, dawes.katherine@epamail.epa.gov
Susan Kistler, American Evaluation Association, susan@eval.org
Melvin Mark, Pennsylvania State University, m5m@psu.edu
Stephanie L Shipman, United States Government Accountability Office, shipmans@gao.gov
Abstract: The Board of Directors of the American Evaluation Association (AEA) established the Evaluation Policy Task Force (EPTF) in order to enhance AEA's ability to identify and influence policies that have a broad effect on evaluation practice and to establish a framework and procedures for accomplishing this objective. Since starting operations on September 1, 2007, the EPTF has issued key documents promoting a wider role for evaluation in the Federal Government, influenced both federal legislation and executive policy, and informed AEA members and others about the value of evaluation through public presentations and newsletter articles. In July, the Board extended the charter of the EPTF for two years, with an evaluation at the end of this period. It also appointed Patrick Grasso as the new Chair, replacing Bill Trochim, who had chaired the EPTF since its inception. This session will provide an update on the task force's work and invite member input on its plans and actions.
Introduction to the Evaluation Policy Task Force
William Trochim, Cornell University, wmt1@cornell.edu
This will be an overview of EPTF activities over the last two years and a summary of current plans for the future.
Activities and Plans for the EPTF
Patrick Grasso, World Bank, pgrasso@worldbank.org
George F Grob, Center for Public Program Evaluation, georgefgrob@cs.com
Mr. Grob, Consultant to the EPTF, will facilitate a discussion involving EPTF members and the audience about the activities and plans of the EPTF.

Session Title: Implementing the Getting to Outcomes (GTO) System in a Statewide Initiative to Reduce Underage Drinking
Demonstration Session 497 to be held in Panzacola Section F4 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Presenter(s):
Pamela Imm, Lexington/Richland Alcohol and Drug Abuse Council, pamimm@windstream.net
Annie Wright, University of South Carolina, patriciaannwright@yahoo.com
Matthew Chinman, RAND Corporation, chinman@rand.org
Patricia Ebener, RAND Corporation, patebener@rand.org
Karen Osilla, RAND Corporation, karenc@rand.org
Abstract: Researchers in South Carolina, in collaboration with the RAND Corporation, have received a grant from the Centers for Disease Control and Prevention to evaluate the implementation of the Getting to Outcomes (GTO) system to reduce underage alcohol use. Three intervention counties are receiving the GTO system and will be compared to three comparison counties on a variety of outcome variables. This presentation will focus on issues related to planning, implementing, and evaluating the use of the GTO system, with special emphasis on demonstrating the use of the overall logic model as well as methods for tracking changes in merchants, key leaders, and community partners (e.g., prevention specialists, law enforcement) who implement the strategies for change.

Session Title: Implementation Measurement for Formative and Summative Evaluation
Demonstration Session 498 to be held in Panzacola Section G1 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Evaluation Use TIG
Presenter(s):
Heather Bennett, University of South Carolina, bennethl@mailbox.sc.edu
Tammiee Dickenson, University of South Carolina, tsdicken@mailbox.sc.edu
Abstract: Previous evaluation approaches have been criticized for a lack of connection between process and outcomes. The approach we will describe is a method of addressing this criticism. This presentation discusses the incorporation of implementation measures, rubrics and matrices, as pieces of larger program evaluations in three separate programs in South Carolina: Diverse Pathways in Teacher Preparation (DP), the South Carolina Reading Initiative (SCRI), and South Carolina Reading First (SCRF). Evaluators at the University of South Carolina collaborated with project personnel to develop these program-specific instruments to measure implementation. The instruments, Implementation Rubrics and Implementation Matrices, will be discussed and sample items from each will be provided. The specific project needs that guided instrument development, as well as the instruments' use for formative or summative purposes, will be shared. Additionally, researchers will share information about administration of the instruments, data analysis and reporting, and strengths and limitations of this method.

Session Title: Examples From the Field: Program Theory Applied to Health and Medical Contexts
Multipaper Session 499 to be held in Panzacola Section G2 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
John Gargani,  Gargani + Company, john@gcoinc.com
Use of Theory of Change to Inform the Evaluation of a Geriatric Psychiatry Consultation Service for Primary Care Providers
Presenter(s):
Christine Clements, University of Massachusetts Medical School, christine.clements@umassmed.edu
Linda Cabral, Center for Health Policy and Research, University of Massachusetts Medical School, linda.cabral@umassmed.edu
Brenda King, University of Massachusetts Memorial Healthcare, brenda.king@umassmemorial.org
Abstract: The Center for Health Policy and Research, University of Massachusetts Medical School, is conducting a formative evaluation to inform the development and operation of a Geriatric Psychiatry Consultation Service for primary care providers in the Worcester, Massachusetts area. The evaluation aims to improve understanding of a) how primary care providers manage psychiatric conditions among their patients age 60 and over without a psychiatric consult service, b) the resources they need to better serve this population, and c) effective mechanisms for delivering the program to meet the needs of primary care providers and their patients. This paper will describe how we refined our use of program theory to facilitate study of the program's development, implementation, and operation. The presentation will also describe what we learned in moving from our initial use of a structure, process, and outcomes framework and implementation theory to our recent use of a theory of change.
Organizational Theory: Practical Application in Evaluation Design
Presenter(s):
Debora Goetz Goldberg, Virginia Commonwealth University, goetzdc@vcu.edu
Diane Dodd-McCue, Virginia Commonwealth University, ddoddmccue@vcu.edu
Abstract: Organizational theories provide a framework to conceptualize organizational behavior as it relates to program performance. This session presents an overview of open systems theories and examines specific perspectives from institutional, resource dependency, contingency, organizational ecology, and diffusion of innovation theories. These theories provide a framework for understanding how the organization's external environment and internal characteristics such as structure, culture, leadership, and technology influence program performance. The focus is on incorporating internal and external organizational factors in evaluation design. An evaluation of the Department of Veterans Affairs' prosthetics and rehabilitation programs serves as an example of how to apply organizational theory to evaluation methodology. Organizational theory was used to identify data elements and data collection techniques. Qualitative and quantitative research methods were used to collect data on management activities, customer satisfaction, access to services, and clinical outcomes. The evaluation resulted in information for improvement to structure, processes, training, communication, and technology.

Session Title: The Lifecycle Continues: Do Independent Consultants Follow a Different Trajectory?
Think Tank Session 500 to be held in Panzacola Section H1 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Independent Consulting TIG
Presenter(s):
Gail V Barrington, Barrington Research Group Inc, gbarrington@barringtonresearchgrp.com
Discussant(s):
Larry K Bremner, Proactive Information Services Inc, larry@proactive.mb.ca
Melanie Hwalek, SPEC Associates, mhwalek@specassociates.org
Arnold J Love, Independent Consultant, ajlove1@attglobal.net
Gail V Barrington, Barrington Research Group Inc, gbarrington@barringtonresearchgrp.com
Abstract: In 2006, the question was asked if independent evaluators move through the typical lifecycle of a business entrepreneur or if the nature of evaluation work results in a different evolution. Six senior non-university based evaluation consultants were interviewed and their case studies reported in Issue #111 of New Directions for Evaluation. In the past three years, what has changed? What has been the career impact of the economic downturn and a changing government context? Have they remained at the fifth lifecycle stage, Maturity, or has there been some impetus to move in a new, and possibly uncharted, direction? Several study participants will provide updates on their lifecycle development and will consider the possibility of a Post-maturity stage. Audience members will be asked to contribute to the discussion.

Session Title: Prescriptive Forecasting: Putting Our Programs and Policies Into the Proper Context
Demonstration Session 501 to be held in Panzacola Section H2 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Patrick McKnight, George Mason University, pmcknigh@gmu.edu
Abstract: Prescriptive forecasting allows evaluators to use cost-effectiveness and probability benchmarks to guide program development and evaluation. Program and policy developers frequently begin with little guidance concerning costs, outcomes, or efficiency. If developers and evaluators used informative, quantifiable benchmarks, then our outcomes might be more easily conveyed. Cost-effectiveness studies, Bayesian probability models, and effect size estimates are essential methods for prescriptive forecasting. Cost-effectiveness tools help shape both the cost of a program and the expected outcomes. Bayesian models help refine those parameters so evaluators may see how different program aspects (e.g., costs per unit served, time required for full implementation, and stakeholder support) may affect these relevant outcomes. I plan to present an example of prescriptive forecasting using these quantitative tools. My aim is to show that the process of setting developmental benchmarks requires little skill and can result in easy-to-communicate results.
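Purely as a hypothetical illustration of the kind of arithmetic involved (not the presenter's model), the sketch below combines an assumed cost-effectiveness benchmark with a simple Beta-Binomial update of the expected success rate; every number is invented.

    # Hypothetical numbers throughout: a cost-effectiveness benchmark set up
    # front, then a Beta-Binomial update of the expected success rate as early
    # program data arrive.
    cost_per_participant = 250.0          # assumed cost of serving one participant
    benchmark_cost_per_success = 1000.0   # assumed funder benchmark

    a, b = 3.0, 7.0                       # Beta prior on the success rate (~30%)

    successes, participants = 14, 40      # invented early implementation data
    a_post, b_post = a + successes, b + (participants - successes)
    expected_rate = a_post / (a_post + b_post)

    expected_cost_per_success = cost_per_participant / expected_rate
    print(f"Posterior success rate: {expected_rate:.2%}")
    print(f"Expected cost per success: ${expected_cost_per_success:.0f} "
          f"(benchmark: ${benchmark_cost_per_success:.0f})")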

Session Title: Evaluating Institutional Impact in Science, Technology, Engineering, and Mathematics (STEM) Education: How Do We Assess the Ways in Which We Are Strengthening and Diversifying the STEM Pipeline and Mainline?
Expert Lecture Session 502 to be held in Panzacola Section H3 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Lizanne DeStefano, University of Illinois, destefan@illinois.edu
Presenter(s):
Lizanne DeStefano, University of Illinois, destefan@illinois.edu
Abstract: In this session, we will present a strategy for conceptualizing and evaluating a university's impact on strengthening and diversifying the Science, Technology, Engineering and Mathematics (STEM) pipeline and mainline. Activities being evaluated include: preK-16 Outreach; preK-16 Teacher Education and Professional Development; Undergraduate/Graduate Educational Reform; and Institutional Transformation. Challenges and opportunities associated with the evaluation of STEM education programs will be discussed, including sustainability, integration and coordination, focus, and intensity of intervention. The University of Illinois, with its strength in STEM research and its myriad NSF and NIH grants, will be used as an illustration of the use and application of this evaluation framework.

Session Title: Measuring and Enhancing the Achievement of Performance Results in Service and Government Organizations: An Industrial Engineering Model
Expert Lecture Session 503 to be held in Panzacola Section H4 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Sandiran Premakanthan, Symbiotic International Consulting Services (SICS), sandiran_premakanthan@phac-aspc.gc.ca
Abstract: The Industrial Engineering (IE) approach to defining development performance results leads to the design of the 10th Order Performance-Metric Structure. The structure is an orderly approach to developing quantitative controls for managing government and service organizations. The approach defines the statement of performance and determines the performance metrics that are to be counted. It is both a top-down and a bottom-up approach and lays the foundation for measuring, monitoring, evaluating, controlling, and reporting on organizational and development policies, programs, projects, initiatives, and activities. The hierarchical approach to defining organizational performance metrics links the upper strategic management control system with the lower operational management control systems. It is a framework for developing credible strategic integrated performance information (SIPI) for decision-making and would satisfy the performance results management information needs of an organization, central agencies, parliamentarians, and citizens. Further, rather than competing with other approaches to good management in the public and private sectors, the IE approach is integrative: it is a framework for the application of other management improvement initiatives, tools, and techniques.

Session Title: Not a Poor Cousin to Evaluation: The Critical Role of Performance Monitoring for Program Improvement
Panel Session 504 to be held in Sebastian Section I1 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Elizabeth Harris, Evaluation, Management & Training Associates Inc, eharris@emt.org
Abstract: The proposed panel will explicate the role and value of performance monitoring as a complement to traditional evaluation. More specifically, it will 1) clarify the distinctions between, and inter-relation of, performance monitoring and evaluation, and 2) demonstrate how performance monitoring complements traditional evaluation by linking evaluative data more closely to policy and management decisions. These conclusions are grounded in the design, implementation, information products, and system improvements demonstrated in the comprehensive performance monitoring system developed to support quality improvement, effectiveness assessment, and decision making for the Centers for Disease Control and Prevention's recently consolidated National Consumer Response Services Center (CDC-INFO). Two presentations will be made: one by a member of the CDC-INFO management team and one by a representative of EMT Associates, Inc., the external contractor for the performance monitoring and evaluation system.
Development of the Centers for Disease Control and Prevention's CDC-INFO Performance Monitoring System
Elizabeth Harris, Evaluation, Management & Training Associates Inc, eharris@emt.org
The introductory presentation will provide an overview of the CDC-INFO performance monitoring system and the systematic and collaborative process through which it was developed. EMT was contracted to conduct a seven-year "performance evaluation" in 2005. Influences that shaped the system come from the policy world, CDC's evaluation framework, and the overall goal of meeting CDC's information needs. The development of the system necessitated clarification of both evaluation and performance monitoring components. Explicit examples from the project provide a meaningful grounding of more general distinctions. Examples include a) the process of determining what factors should go into the contact center contractor award fee and therefore into performance monitoring, b) the phasing of development, from initial quality improvement to the subsequent incorporation of different stakeholder products that adapt real-time data to multiple decision and information needs (e.g., training and coaching, consumer need information for CDC topical programs), and c) the incorporation of outcome and effectiveness information to improve system outreach and content. Dr. Harris will present functional definitions for terminology used and a perspective on the differences between performance monitoring and evaluation. The purpose of performance monitoring from the perspective of the CDC evaluation will be discussed, along with lessons learned and implications for the field of evaluation as a new federal fiscal funding year begins under a new administration.
Useful Information for Decision Making From the Centers for Disease Control and Prevention's CDC-INFO Performance Monitoring System
Paul Abamonte, Centers for Disease Control and Prevention, paa6@cdc.gov
Mr. Abamonte is CDC's Evaluation/Audience Research officer for CDC-INFO (CDC's National Consumer Response Services Center) and an end-user of performance monitoring data and information generated by the system established by EMT. The "Performance Evaluation" contract was designed with the intent to "provide ongoing independent, systematic, and continuous evaluation" and "utilize industry best practices to ensure conformance to established performance standards and to assist management in key decision-making efforts" (p. 5 of the original RFQ). Mr. Abamonte will show how the information generated contributed to a feedback loop and resulted in demonstrable, measurable quality improvement. Examples of when evaluation techniques were brought in to answer key questions (and the appropriate context) will be presented by way of illustration. The balance between competing information needs from the government perspective will also be explored in the context of increasing demands and decreasing funding. Implications for how evaluators can assist will be a focus.

Session Title: Looking for Quality? An Interactive Demonstration of How to Assess the Quality of Evaluation Products
Demonstration Session 505 to be held in Sebastian Section I2 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Rashon Lane, Centers for Disease Control and Prevention, rlane@cdc.gov
Lauren Gase, Centers for Disease Control and Prevention, lgase@cdc.gov
Aisha Tucker-Brown, Northrop Grumman, atucker-brown@cdc.gov
Abstract: Evaluators from the Division for Heart Disease and Stroke Prevention (DHDSP) at the Centers for Disease Control and Prevention will facilitate an interactive demonstration of its new Logic Model and Evaluation Plan Criteria Checklists. The checklists were developed to provide a set of criteria for assessing the quality of logic models and evaluation plans. The checklists were piloted in our programs for heart disease and stroke prevention and demonstrated utility in assessing the quality of logic models and evaluation plans of three distinctly different funded programs: those that address policy and systems change, provide direct services, and implement quality improvement. Presenters will guide conference participants through a practical application of the checklist to illustrate its usefulness in determining the quality of evaluation products. Participants will gain a better understanding of the development process of evaluation checklists and learn how they have been incorporated into evaluation practice.

Session Title: Data Management and Acquisition for Large-scale Evaluations of Public Agencies
Multipaper Session 506 to be held in Sebastian Section I3 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Hannah Betesh, Berkeley Policy Associates, hannah@bpacal.com
Abstract: In evaluation practice, participant data is often messy or non-standardized, and acquiring data from public agencies calls for careful negotiations around consent and confidentiality. In this session, we share lessons learned and practical approaches for acquiring and managing data from public agency clients, including school districts, county departments, and community-based organizations contracted to conduct public services. We will draw on our experiences with a multi-year random assignment evaluation of a teacher professional development program in 50 schools and an evaluation of a city-funded youth violence prevention program. We will focus on methods that ensure data integrity and maintain accuracy and quality while consolidating data from multiple sources. Specific strategies to be discussed include the use of a third-party encoder to handle sensitive data matching, development of memoranda of understanding with public agencies, and integration of multiple public data sources.
Data Acquisition and Management Practices for a Teacher Professional Development Evaluation Across Eight School Districts
Lorena Ortiz, Berkeley Policy Associates, lorena@bpacal.com
Assessing student outcomes based on standardized test scores and administrative student records is challenging under the best of circumstances. This presentation will address the challenges of data collection and management in a multi-year, multi-district random assignment evaluation of a teacher professional development program in 50 schools. As with many studies, this research hinges on effectively managing data collection and management processes to get clean, organized student data, yet these important processes are often not discussed in proposals or study designs. This presentation highlights the necessity of incorporating such processes into proposal writing and study designs. Topics to be discussed include developing and maintaining data contacts with each school district; deciding which data elements are necessary and attainable; requesting data elements in ways that require the least amount of 'cleaning'; developing secure protocols for data transfers; normalizing data elements across multiple data sources; and merging data files.
Triangulating Outcomes Across Multiple Public Data Sets: Lessons Learned From an Evaluation of a Violence Prevention Program
Hannah Betesh, Berkeley Policy Associates, hannah@bpacal.com
Measure Y is the Violence Prevention and Public Safety Act of 2004, a City of Oakland voter-approved initiative to fund, among other services, violence prevention programs provided in partnership with schools, community-based organizations, and the county probation department. The evaluation of Measure Y's violence prevention programs necessitated a unique data management process to handle confidential data gathered from multiple sources: program participation data from a centralized city database, arrest records from the Alameda County Probation Department, suspension and attendance data from the Oakland Unified School District, and satisfaction and outcome surveys conducted by partner community-based organizations. Because of the high-risk target population and the confidential nature of the outcome data, a key feature of our approach was the use of a third-party encoder to match participation records with school and probation records. This session will discuss the process and lessons learned from this evaluation.
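To illustrate the general idea behind a third-party encoder (a hypothetical sketch, not the actual Measure Y procedure), the snippet below shows how keyed hashing can turn identifying fields into pseudonymous IDs that let records from different agencies be matched without exposing names; the key, names, and fields are invented.

    # Hypothetical illustration of a third-party encoder: each agency replaces
    # identifiers with a keyed hash before sharing, so the evaluator can match
    # records across sources without ever seeing names. Key and data invented.
    import hashlib
    import hmac

    SECRET_KEY = b"held-only-by-the-encoder"   # never shared with the evaluator

    def pseudonym(name: str, dob: str) -> str:
        """Deterministic pseudonymous ID built from identifying fields."""
        msg = f"{name.strip().lower()}|{dob}".encode()
        return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

    program_records = {pseudonym("Jane Doe", "1992-04-01"): {"sessions": 12}}
    probation_records = {pseudonym("Jane Doe", "1992-04-01"): {"arrests": 0}}

    # The evaluator receives only pseudonyms and merges on them directly.
    matched = {pid: {**program_records[pid], **probation_records[pid]}
               for pid in program_records.keys() & probation_records.keys()}
    print(matched)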

Session Title: Research on Theory Driven Evaluation
Multipaper Session 507 to be held in Sebastian Section I4 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Dreolin Fleischer,  Claremont Graduate University, dreolin.fleischer@cgu.edu
An Empirical Review of Theory-Driven Evaluation Practice from 1990 - 2008
Presenter(s):
Chris L S Coryn, Western Michigan University, chris.coryn@wmich.edu
Lindsay Noakes, Western Michigan University, lindsay.noakes@wmich.edu
Daniela C Schroeter, Western Michigan University, daniela.schroeter@wmich.edu
Carl Westine, Western Michigan University, carl.d.westine@wmich.edu
Abstract: Evaluation theories are models for evaluation practice. They are intended to guide practice rather than explain phenomena, and they are prescriptions for the ideal. Such theories address the focus and role of evaluation, specific questions to be studied, design and implementation, and use of results. Although its origins can be traced to Ralph Tyler in the 1930s, later reappearing in the 1960s and 1970s, and again in the 1980s, it was not until 1990 that theory-driven evaluation resonated more widely in the evaluation community, with the publication of Huey Chen's book Theory-Driven Evaluations. Since then, conceptual and theoretical writings on the approach have been commonplace. Nonetheless, the degree to which theory-driven evaluation practice adheres to and exemplifies the central principles of the approach as described and prescribed by prominent theoretical writers is disputable. In this study, the authors examined whether theoretical prescriptions and real-world practices do or do not align.
Key Factors That Influence Logic Model Use and Benefits: Findings From Evaluation Practitioners
Presenter(s):
Rosalie Torres, Torres Consulting Group, rosalie@torresconsultinggroup.com
Rodney Hopson, Duquesne University, hopson@duq.edu
Jill Casey, Torres Consulting Group, jill@torresconsultinggroup.com
Abstract: Within the field of evaluation, several tools related to explicating program design, theory, activities, and contextual influences are in use. Among these are: logic models, theories of action, theories of change, and systems approaches to evaluation. This paper will review the use of these tools as examined through an NSF-funded survey of logic model practitioners. Using a combined survey-interview methodology, this study will address these research questions: (a) What types of logic model schemes (traditional logic model, theory of change, etc.) are currently being used among AEA logic model practitioners? (b) How are they being used and what factors influence the nature and impact of use? (c) Within what different cultural contexts (i.e., race/ethnicity, age, religion, gender, sexual orientation) is use occurring? The paper will describe key factors influencing the development, design, content, and use of these tools, as well as the benefits and challenges to use. Implications for (a) improving current practice and (b) further research will be highlighted.

Session Title: Evaluation for Dynamically Complex Contexts
Expert Lecture Session 508 to be held in Sebastian Section K on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Presidential Strand
Presenter(s):
Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: Physical scientists distinguish static, dynamic, and dynamical types of change, which occur within different contexts. A static context is stable, predictable, and known; change efforts can be planned and controlled, as can evaluation designs. A dynamic context, in contrast, is one that is changing in an evolutionary and fairly manageable direction, constituting a relatively smooth trajectory. Dynamical contexts and changes, the third type, are volatile, unpredictable, nonlinear, and complex; change emerges as dynamically interdependent factors and variables interact. The global financial crisis has constituted a dynamical context for change interventions of all kinds in the last year, and therefore a dynamical context for evaluations of any such interventions. This session will explore the implications for evaluation of dynamical contexts with specific examples of evaluation cases, decisions, issues, and methods. One methods implication is the need for emergent designs and measures that support quick feedback in the face of turbulence.

Session Title: Methodology and Methodological Challenges of the Evaluation of the Paris Declaration
Panel Session 509 to be held in Sebastian Section L1 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Niels Dabelstein, Evaluation Secretariat of the Paris Declaration, nda@diis.dk
Discussant(s):
Ted Kliest, Netherlands Ministry of Foreign Affairs, tj.kliest@minbuza.nl
Abstract: Evaluating the Paris Declaration: Phase 2 evaluation of aid and development results. The Paris Declaration, endorsed in March 2005, is an international agreement signed by over one hundred ministers, heads of agencies, and other senior officials. The Declaration lays down an action-orientated roadmap intended to improve the quality of aid and its impact on development. An independent, multi-phased, cross-country evaluation of the Paris Declaration, commissioned and overseen by an international Reference Group, was initiated in 2007. The first phase of the evaluation consisted of 20 separate but coordinated evaluations in donor countries and developing countries; a synthesis of these evaluations was completed in June 2008. The second phase of the evaluation includes an even larger number of donors, agencies, and developing countries and will be conducted during 2009-2010. The major focus of phase 2 is on assessing the effects of the Paris Declaration in terms of aid effectiveness and development results. This is one of the largest joint evaluations undertaken to date, applying a unique decentralized approach. As a follow-up to the presentation at AEA 2008, which focused on the organizational and methodological lessons learned by different stakeholders during the first phase of the evaluation, the panel will present and discuss the detailed approach and methodology, and in particular the methodological challenges, of this evaluation.
Methodology and Methodological Challenges of the Evaluation of the Paris Declaration
Elliot Stern, Lancaster University, crofters@clara.net
The approach and methods of the evaluation will be outlined on the basis of the overall terms of reference for the evaluation. Topics dealt with include: 1) the overall methodology and approach; and 2) the methodological challenges to be addressed, including substantive aspects (the different ways in which the Paris Declaration is being implemented; the importance of different political, economic, and institutional contexts for implementation, the 'intervening variables'; the significance of key actors' intentions and priorities; the possibility of multi-directional causality between the main elements in the model implicitly underpinning the Paris Declaration; and the iterative nature of policy implementation associated with the Paris Declaration), how to measure change, and how to attribute change to the Paris Declaration, for which several non-mutually exclusive ways of ascertaining attribution will be discussed.
Approach and Methods of the Country Evaluation Malawi
Naomi Ngwira, Ministry of Finance Malawi, naomingwira@yahoo.com
Based on the terms of reference for the Malawi country evaluation, which forms part of the overall evaluation of the results of the Paris Declaration, the specific methods, approaches, and challenges for this particular country case study will be highlighted.

Session Title: Demonstration of a Multimedia Website Offering Best Practices in Community-Based Program Evaluation
Demonstration Session 510 to be held in Sebastian Section L2 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Presenter(s):
Jane Koppelman, The Lewin Group, jane.koppelman@lewin.com
Frances Gragg, The Lewin Group, frances.gragg@lewin.com
Abstract: Presenters at this session will demonstrate a sophisticated, interactive, multi-media website designed to give evaluators of community-based programs the tools to conduct scientifically sound evaluations and work effectively with their clients. The website features: * An interactive Evaluation Tutorial, designed to highlight the decisions that must be made by program providers and evaluators during each stage of evaluation (e.g., planning, design, data collection, data analysis, data interpretation, and reporting of findings). It also offers best practices in various design, data collection and analysis techniques. Exercises to reinforce learning are included. * Video consultations that simulate the conversations that should occur between program providers and evaluators during various stages of evaluation. * A database containing over 100 written resources that offers information on a range of issues covered throughout the stages of program evaluation. Resources come in the form of reports, tip sheets, and other related instructional materials.

Session Title: Evaluation of the KnowHow2Go Campaign: How Context Influences Implementation and Outcomes
Panel Session 511 to be held in Sebastian Section L3 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the College Access Programs TIG
Chair(s):
Tania Jarosewich, Censeo Group, tania@censeogroup.com
Discussant(s):
Nushina Mir, Lumina Foundation for Education, nmir@luminafoundation.org
Abstract: The purpose of the KnowHow2GO (KH2GO) initiative is to raise awareness among low-income and first-generation students in grades eight through ten about the process of preparing for college and taking the steps necessary for college admission. The Lumina Foundation for Education has funded providers in fifteen states to support the media campaign and provide college access services in the four areas of the initiative. The initiative is also developing and testing a certification to reassure stakeholders that service providers adhere to basic standards for KnowHow2GO programming and operations. This panel will present the results of the ongoing evaluations of the KH2GO campaign and the pilot project of the certification process. The evaluation officer who is overseeing the initiative at the Lumina Foundation will discuss the implications of the evaluation for the project and the college access field. The presentation allows time for audience discussion.
Certification of College Access Providers: How a Pilot Project Can Inform National Implementation
Tania Jarosewich, Censeo Group, tania@censeogroup.com
A variety of organizations in the nonprofit sector (e.g., state associations of nonprofit organizations, United Way) offer agency certifications to demonstrate an organization's attention to the health of its finances and governance, but these certifications often do not address the organizations' mission-related work. The KnowHow2GO initiative is developing a certification process for organizations or partnerships that offer KnowHow2GO college access services to demonstrate adherence to the four components of the campaign and attention to their organization's health. This presentation describes lessons learned, including the extent to which the certification is applicable to various types of college access networks and includes best practices in college access programming and organizational effectiveness. The presentation also discusses the challenges of developing a process that meets the needs of providers with different levels of capacity, including traditional college access providers and youth-serving organizations, while also pursuing the goal of increasing regional and statewide college access networks.
Creating College Access Networks That Work: Lessons Learned From the KnowHow2GO Campaign
Linda Simkin, Academy for Educational Development, lsimkin@aed.org
This presentation provides observations on the development of the KnowHow2GO college access networks. The presentation highlights some of the cross-cutting themes and the progress of the networks toward short-term outcomes identified in the KnowHow2GO campaign's theory of change logic model. Given the diversity of the network sites, cross-site comparisons are not appropriate. Rather, the purpose of this presentation is to describe overall progress made in strengthening and expanding college access and success networks, in widely disseminating the KnowHow2GO messages at the local level, and in connecting students and caring adults to resources; to identify challenges encountered; and to offer recommendations intended to improve practice and outcomes in the coming year.

Session Title: Using Outcomes Theory to Solve Important Conceptual and Practical Problems in Evaluation, Monitoring and Performance Management Systems
Expert Lecture Session 512 to be held in Sebastian Section L4 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Theories of Evaluation TIG
Presenter(s):
Paul Duignan, Massey University Auckland, paul@parkerduignan.com
Abstract: Many monitoring and evaluation systems have common conceptual problems that jeopardize their practical implementation, ultimately undermining the credibility and practical utility of such systems. Outcomes theory is a recently developed theory that covers the areas of evaluation, monitoring and performance management, and which sheds light on these problems. Such problems include the following: 1) purely indicator-based systems in which you cannot identify important, but currently not measured, outcomes; 2) the sometimes futile search for the non-output demonstrably attributable intermediate outcome; and, 3) systems which hold parties to account for non-demonstrably attributable indicators. Outcomes theory enables these problems to be easily identified and provides solutions to them based on ensuring that such systems have the requisite building-blocks required for sound evaluation, monitoring and performance management systems. By providing a common conceptual language across these different systems, outcomes theory can speed up the process of building better systems. See http://www.tinyurl.com/ot232.

Session Title: Multi-component Evaluation of School-Based Health Center Network in a Post-Disaster Context
Demonstration Session 513 to be held in Suwannee 11 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Health Evaluation TIG
Presenter(s):
Sarah Kohler, Louisiana Public Health Institute, skohler@lphi.org
Arina Lekht, Louisiana Public Health Institute, alekht@lphi.org
Lisanne Brown, Louisiana Public Health Institute, lbrown@lphi.org
Abstract: School Health Connection (SHC) is a consortium of school-based health centers (SBHCs) striving to increase access to, standardize, and improve the quality of care in SBHCs in Greater New Orleans (GNO). SHC has developed and implemented a multi-component evaluation approach to measuring its complex clinical component using both qualitative and quantitative methods. Evaluation activities to be discussed include the data collection activities and tools of the Youth Risk Behavior Survey (YRBS), Continual Quality Improvement (CQI) Chart Reviews, utilization data, patient satisfaction surveys, focus group discussions, and interviews from baseline to impact study. Modifications made to evaluation activities in the context of new technological opportunities and funding circumstances will be explored as both relate to CQI chart reviews and the Student Survey. While each activity had unique challenges, they presented many unforeseen opportunities to further efforts to standardize the quality of care in SBHCs in the GNO region.

Session Title: The Speed Dating Concept Applied to Data Collection, Idea and Information Sharing, and Consensus Building Efforts
Demonstration Session 514 to be held in Suwannee 12 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Maurya West Meiers, World Bank, mwestmeiers@worldbank.org
Sara Okada, World Bank, sokada@worldbank.org
Cristina Ling, World Bank, cling@worldbank.org
Peter Lin, World Bank, ylin1@worldbank.org
Abstract: This demonstration features how the concept of speed dating (SD) can be applied as a methodology in work and community settings to facilitate networking and the sharing of ideas and information among participants in an efficient and effective manner. It also provides opportunities for matchmaking of participants interested in shared issues. Applying the concept to M&E processes, SD provides a quick (and often fun) way to facilitate individual and group ideas, information, and opinions. The concept can be used in a variety of ways, ranging from exchanging ideas on developing program goals for logic models, to collecting user opinions about service delivery. Attendees of this demonstration will participate in a simulated session. They will also receive handouts with tips and tools to use in developing their own "speed dating" M&E sessions.

Session Title: Social Network Analysis TIG Business Meeting
Business Meeting Session 515 to be held in Suwannee 13 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Social Network Analysis TIG
TIG Leader(s):
Maryann Durland, Durland Consulting, mdurland@durlandconsulting.com
Stacey Friedman, FAIMER, staceyfmail@gmail.com

Session Title: Dialogues Between Internal and External Evaluators: Evaluating Program Impacts on the Academic Achievement of Homeless/Highly Mobile Children in Minneapolis Public Schools
Panel Session 516 to be held in Suwannee 14 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Leah Goldstein Moses, The Improve Group, leah@theimprovegroup.com
Abstract: This session stems from dialogues between an internal evaluator and an external evaluator on how to examine program impact on the academic achievement of homeless/highly mobile children in Minneapolis Public Schools (MPS). Research and Evaluation staff from MPS will explain their use of a multi-phase longitudinal analysis approach. Using available quantitative data in this approach, MPS examines program impact on average and on individual achievement growth of homeless/highly mobile students before, during, and after an intervention. The Improve Group, an independent consultant hired by a public-private partnership which runs a program for homeless/highly mobile students in MPS, will share how de-identified quantitative data and qualitative data were used to help the partnership understand student challenges and program impact. Presentations will address the kinds of questions which can and cannot be answered by these different approaches, what the evaluators have learned from different perspectives, and the tensions and opportunities in collaboration.
Applying Multiphase Longitudinal Analysis to Evaluate Program Impact on the Achievement Growth of Homeless/Highly Mobile Students
Chi-Keung Chan, Minneapolis Public Schools, alex.chan@mpls.k12.mn.us
Elizabeth Hinz, Minneapolis Public Schools, elizabeth.hinz@mpls.k12.mn.us
Like many urban school districts, Minneapolis Public Schools (MPS) serves thousands of homeless/highly mobile students. MPS has been devoting tremendous effort to identifying these students and providing equal schooling and academic support to them in compliance with the McKinney-Vento Act. Alex Chan is an internal evaluator for the District who is responsible for the homeless/highly mobile student program evaluations. The privilege of his position is that he is able to access substantial individual-level data, and he has advanced knowledge of academic assessment, measurement, and statistics. Alex has used these assets to conduct evaluations and research on homeless student issues. He also works with Elizabeth Hinz, the District's homeless liaison, to present the findings to educators and policy-makers. In his presentation, Chan will illustrate how to apply the multiphase longitudinal matched-sample analysis in homeless/highly mobile student program evaluations. He will also discuss the limitations of this approach.
Kids Collaborative: An Independent Program Evaluation of How Family Support Impacts Homeless Student Achievement
Rebecca Stewart, The Improve Group, beckys@theimprovegroup.com
The Improve Group is an independent evaluation firm hired by the partnership which created the Kids Collaborative program. The program provides housing, case management and other support for the families of homeless/highly mobile students in Minneapolis Public Schools. The presenter has managed this evaluation for two years; the evaluation design includes pre- and post-intervention measures and a mixed-method design. The evaluator has access to summary data for participants from Minneapolis Public Schools, de-identified data from Minneapolis Public Housing and case file data from Lutheran Social Service (program partners). Qualitative data from family and teacher interviews contributes to a deeper understanding of the impact of household dynamics on student achievement and teacher perspectives on factors critical to student achievement. The presentation will discuss the advantages and limitations of such an approach in attempting to evaluate the impact of this program on homeless students' school achievement.

Session Title: Using Meta-evaluation to Understand and Improve a Healthy Marriages Program
Demonstration Session 517 to be held in Suwannee 15 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Wendy Tackett, iEval, wendolyn@mac.com
Yael Levi, Child & Family Resource Council, ylevi@childresource.cc
Megan Mullins, iEval, mullinsmegan@hotmail.com
Abstract: An external evaluation team, iEval, was hired to work with a Healthy Marriages program whose staff had conducted their own internal evaluation for years. At the start of the partnership, the external evaluators conducted a thorough meta-evaluation to determine what had been done, what needed to be done, and where the gaps and strengths of the internal evaluation processes were. This session will focus on the implementation of the meta-evaluation process, the reporting of meta-evaluation findings, the use of data by the client to make program improvements, and how the meta-evaluation directed the subsequent evaluation plan.

Session Title: Visualizing HIV Clinical Training in Context: Using Geographic Information System (GIS) Technology to Integrate Evaluation Into Everyday Operations
Demonstration Session 518 to be held in Suwannee 16 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Integrating Technology Into Evaluation
Presenter(s):
Rebecca Culyba, Emory University, rculyba@emory.edu
Blake Tyler McGee, Emory University, btmcgee@emory.edu
Abstract: The Southeast AIDS Training and Education Center (SEATEC) is the U.S. Public Health Service-designated AIDS Education and Training Center for six southeastern states: Alabama, Georgia, Kentucky, North Carolina, South Carolina, and Tennessee. SEATEC implemented a method for integrating program-level evaluation and publicly available epidemiological data using a straightforward spatial representation to convey the needs and impact of HIV clinical training at 7 training sites. Regional and state maps were developed collaboratively and updated routinely and then presented to training managers and coordinators. Training managers used the maps to monitor performance and for strategic planning while training coordinators used the maps to target their outreach and training efforts. Presented is an overview of GIS mapping technology, the maps developed by SEATEC, and the process by which evaluators and management integrated GIS technology into planning and evaluation resulting in increased capacity to use data to support program planning and performance monitoring.
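As a hypothetical sketch of the general approach (not SEATEC's actual code or data), the snippet below joins invented training counts and public prevalence figures to county geometries with the Python geopandas library and maps them side by side so training reach can be compared with need; all file and column names are assumptions.

    # Hypothetical sketch: join invented training counts and public prevalence
    # data to county geometries, then map them side by side so training reach
    # can be compared with need. File and column names are assumptions.
    import geopandas as gpd
    import pandas as pd
    import matplotlib.pyplot as plt

    counties = gpd.read_file("southeast_counties.shp")         # county boundaries
    trainings = pd.read_csv("trainings_by_county.csv")         # fips, trainings_held
    prevalence = pd.read_csv("hiv_prevalence_by_county.csv")   # fips, rate_per_100k

    gdf = counties.merge(trainings, on="fips", how="left").merge(prevalence, on="fips")

    fig, axes = plt.subplots(1, 2, figsize=(12, 5))
    gdf.plot(column="rate_per_100k", cmap="Reds", legend=True, ax=axes[0])
    axes[0].set_title("HIV prevalence (public data)")
    gdf.plot(column="trainings_held", cmap="Blues", legend=True, ax=axes[1])
    axes[1].set_title("Clinical trainings delivered")
    plt.savefig("training_vs_need.png", dpi=150)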

Session Title: A Sampler of Applications of Multilevel Modeling to the Evaluation of Community Collaboration
Multipaper Session 519 to be held in Suwannee 17 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Adam Darnell, EMSTAR Research Inc, adam_darnell@yahoo.com
Discussant(s):
James Emshoff, EMSTAR Research Inc, jemshoff@gsu.edu
Abstract: Georgia's Family Connection Partnership (GFCP) represents the largest network of community collaboratives in the nation, one serving each county of the state (n=157). GFCP collaboratives are associations of social service providers, civic organizations, community leaders, and private citizens cooperatively addressing challenges to their communities' well-being. In this session we describe results from two different applications of multilevel modeling to the study of Family Connection collaboratives. In the first, multilevel modeling was applied to data from the Collaborative Member Survey, a self-report measure of collaborative functioning completed by numerous respondents from each collaborative. Dimensions of collaborative functioning were examined using multilevel confirmatory factor analysis. In the second paper, multilevel modeling was applied to longitudinal data on child abuse measured at the county-level annually from 1994-2006. Multilevel modeling was used to examine change in county-level child abuse rates as an outcome of various indicators of community context and collaborative structure and function.
A Multilevel Confirmatory Factor Analysis of the Collaborative Member Survey
Jack Barile, EMSTAR Research Inc, jpbarile@hotmail.com
Scott Weaver, Georgia State University, srweaver@gsu.edu
Adam Darnell, EMSTAR Research Inc, adam_darnell@yahoo.com
Steve Erickson, EMSTAR Research Inc, ericksoneval@att.net
James Emshoff, EMSTAR Research Inc, jemshoff@gsu.edu
Gabriel P Kuperminc, Georgia State University, gkuperminc@gsu.edu
The Collaborative Member Survey assesses dimensions of collaborative functioning including communication, planning, leadership, budgeting, and family involvement. It is administered on a voluntary basis to multiple respondents from each Family Connection collaborative. A measurement model of collaborative functioning was tested using multilevel confirmatory factor analysis. This measurement structure was also tested for measurement invariance across three years of Collaborative Member Survey data, from 2006 to 2008. Then, using the most recent data, we discuss effects on collaborative functioning of both within-collaborative predictors (i.e., characteristics of individual respondents) and between-collaborative predictors (e.g., collaborative age, meeting frequency, county SES, and demographic composition). Implications from both measurement and structural modeling will be discussed, including dimensions of collaborative functioning and correlates of high collaborative functioning.
Identifying Effects of Community Collaboration on Child Abuse Using Latent Growth Modeling
Adam Darnell, EMSTAR Research Inc, adam_darnell@yahoo.com
Scott Weaver, Georgia State University, srweaver@gsu.edu
Jack Barile, Georgia State University, jpbarile@hotmail.com
Gabriel P Kuperminc, Georgia State University, gkuperminc@gsu.edu
Steve Erickson, EMSTAR Research Inc, ericksoneval@att.net
James Emshoff, EMSTAR Research Inc, jemshoff@gsu.edu
Child abuse is one of the most commonly targeted outcomes among GFCP collaboratives. Rates of substantiated cases of child abuse for each Georgia county were measured annually from 1994 to 2006. Longitudinal change in abuse rates was modeled using latent growth modeling. Between-county differences in change in abuse rates were examined for association with the introduction into the county of a collaborative directly targeting abuse, controlling for community context (e.g., SES, population size, demographic composition). Several different approaches to testing the effect of collaboration were used, including a sequential process growth model relating change in abuse rates to change in a collaborative's propensity to target abuse in the period prior to measured abuse rates. We also report results from a cross-lagged autoregressive model of the relationship between collaboration and abuse rates. Discussion will address the strengths and weaknesses of each approach in terms of causal inference and practical challenges.
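For readers unfamiliar with the technique, a simple linear latent growth model of county rates can be written, under standard assumptions, as a mixed-effects model with a random intercept and slope for each county; the sketch below shows that equivalent formulation in Python's statsmodels, with invented file and variable names rather than the authors' actual specification.

    # Hypothetical sketch: a linear growth model of county abuse rates fit as a
    # mixed-effects model (random intercept and slope per county), with an
    # indicator for whether a collaborative targeting abuse was present. The
    # data file and column names are invented, not the authors' specification.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Long-format panel: one row per county per year, 1994-2006
    data = pd.read_csv("county_abuse_panel.csv")   # county, year, abuse_rate, collab
    data["year_c"] = data["year"] - 1994           # center time at the first wave

    model = smf.mixedlm(
        "abuse_rate ~ year_c * collab",            # does the trend differ with collaboration?
        data,
        groups=data["county"],
        re_formula="~year_c",                      # random intercept and slope
    )
    result = model.fit()
    print(result.summary())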

Roundtable: Integrating Strategic Evaluation Planning and Evaluation Capacity Building: A Discussion Based on a Public Health Program's Experiences
Roundtable Presentation 520 to be held in Suwannee 18 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Carlyn Orians, Battelle Centers for Public Health Research and Evaluation, orians@battelle.org
Maureen Wilce, Centers for Disease Control and Prevention, muw9@cdc.gov
Michele Mercier, Centers for Disease Control and Prevention, zaf5@cdc.gov
Abstract: Building evaluation capacity in an organization is a complex, multi-faceted process. CDC's National Asthma Control Program has embarked on a process to build evaluation capacity among its funded partners to enable them to conduct useful and feasible program evaluations. While evaluators regularly develop plans for multiple evaluations of various aspects of a program over time, there has been little discussion of how to strategically sequence evaluations to augment evaluation capacity building activities. This roundtable will explore strategies for developing a strategic evaluation plan that will build capacity in the context of a public health program. Presenters will share experiences in developing tools to strategically assess both programmatic evaluation needs and organizational needs for evaluation capacity development. Presenters will also discuss challenges encountered in selecting elements to be included in strategic evaluation plans and potential criteria for assessing plans and measuring capacity.

Roundtable: Cultural and Language Barriers When Conducting Indigenous Evaluations in the Third World Context
Roundtable Presentation 521 to be held in Suwannee 19 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Indigenous Peoples in Evaluation TIG
Presenter(s):
Andrea Velasquez, Brigham Young University, andrea_velasquez@byu.net
Randall Davies, Brigham Young University, randy.davies@byu.edu
Abstract: Implementing evaluations in the Third World presents many added challenges, ranging from the technological to the cultural. Cultural and language barriers are among the most prominent and difficult to overcome in producing a credible indigenous evaluation. This roundtable offers insights from the evaluators' experience of the cultural and language barriers encountered during an evaluation of a non-profit organization, Care for Life. This evaluation was conducted in Mozambique, Africa, with the Sena and Ndou people.

Roundtable: Assessment of Student Outcomes: Using an Evaluation Capacity Building Framework
Roundtable Presentation 522 to be held in Suwannee 20 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Steve Culver, Virginia Tech, sculver@vt.edu
Abstract: Evaluation Capacity Building (ECB) has most typically been used as a system to help organizations develop processes and structures to use evaluation to meet accountability standards. In higher education, these accountability requirements focus on direct and indirect evidence of student learning as demonstrated through the development of student learning outcomes, measuring those outcomes, and making changes based on these measurements. This continuous improvement process provides an example of how ECB principles are developed and carried out in higher education institutions and how subcultures within a particular institution affect and determine the process. This session will focus on program examples from two different universities that demonstrate the complexities of following an overall institutional strategy adapted to individual programs within the institution.

Roundtable: Beyond Monitoring and Oversight: A Case Study of Evaluator as Critical Friend
Roundtable Presentation 523 to be held in Suwannee 21 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Evaluation Use TIG
Presenter(s):
Sharon Rallis, University of Massachusetts Amherst, sharonrallis@earthlink.net
Abstract: The Massachusetts Public Charter School Association (MPCSA) has engaged in a three-year dissemination and replication initiative funded by the United States Department of Education. The project began by identifying high-performing charter schools that serve students from high-need communities who are at risk of educational failure. A research team was contracted to document common elements of success across these schools. The project's goal was to disseminate findings and facilitate replication. Funding required external evaluation to measure progress toward benchmarks and monitor end-products, based on predetermined criteria. We were selected to fill this role, which the contractors saw as one of oversight and compliance. From this perspective, the MPCSA was not anticipating use of evaluation findings for formative purposes. This paper tells how, over the course of the project, we changed the association's understanding of evaluation use. They came to describe us as their 'critical friends' who contributed to program improvement.

Roundtable: Early, Risk Focused Delinquency Prevention: What Works and How We Know
Roundtable Presentation 524 to be held in Wekiwa 3 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Crime and Justice TIG
Presenter(s):
Roger Przybylski, RKC Group, rogerkp@comcast.net
Abstract: While crime prevention takes many forms, early risk-focused prevention programs that target children and families and focus on risk factors for criminal conduct can produce long-lasting public safety benefits. Unfortunately, many of the programs that prevent crime in the most cost-effective manner are not well known. Moreover, some interventions that have considerable political or public appeal turn out, based on a review of the evidence, not to be very effective at all. This presentation, which is based on the presenter's 2008 publication titled What Works: A Compendium of Evidence-Based Options for Preventing New and Persistent Criminal Behavior, identifies effective risk-focused prevention programs and discusses the evaluation findings that demonstrate their efficacy and effectiveness. The discussion will cover evidence-based programs for every stage of a child's development, as well as the methods employed and lessons learned during the presenter's review of the evidence.

Session Title: Ethical Evaluation in Contexts Where Costs, Benefits and Net Value Matter
Multipaper Session 525 to be held in Wekiwa 4 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov
Abstract: Resistance to including costs and benefits in evaluations is common, yet this resistance can be understood as an unethical response to demands for improved accountability, learning, and coordination. The ethics of evaluating impacts in monetary terms and methodologies for doing so effectively are complementary issues addressed in this session. When an evaluation does not include costs and benefits, this can be construed as saying that money does not matter; excluding certain 'impactees' is like saying they do not matter. It is now painfully obvious that financial stewardship and social equity do matter, yet some questions remain: How might evaluators assess costs and benefits in a way that strives for positive effects for all impactees while enabling the greater whole to develop and sustain increased financial viability? Principles and methods are illuminated with examples to show how evaluators can provide formative guidance that helps nurture economic growth while meeting social needs at lower cost.
Multi-evaluand Co-evaluation: The Virtually Possible Imperative
Ron Visscher, Western Michigan University, visscron@aquinas.edu
This presentation shows how evaluators can better assist policymakers in meeting their ethical responsibility for rational decision-making while simultaneously enhancing collective learning and coordination across 'impactee' networks. A generally applicable, systematic approach for collecting, modeling, and analyzing network-wide cost, benefit, and value information is explained. The approach enables evaluators to readily synthesize value impacts between and among interdependent 'evaluands' (and other impactees). Routinely including value impacts from across impactee networks improves evaluations and their conclusions. The resulting ability to compare and align complex interactive effects on net value creation in context strengthens formative guidance and enables better optimization. Early results are positive but suggest that a virtual system is necessary for the approach to be cost-feasible. Transparent foresight and verification of impacts can improve harmonization of cooperation and attribution of complex effects, and may be what is needed to repair ailing socioeconomic systems in a productive and equitable way.
Evaluating Costs and Benefits Ethically: Avoiding the Special Pitfalls of Using Monetary Units to Measure Resources "In" and Outcomes "Out"
Brian Yates, American University, brian.yates@mac.com
Ethical problem areas in cost-inclusive evaluation include unacknowledged biases in funding, hypotheses, methodology, analyses conducted, and utilization of findings in policy formulation and decision-making. As in other evaluations that eschew costs, but much more so in evaluations that include the costs and monetary outcomes (benefits) of offering programs, funding strategies can prevent detection of anything but large effects and can preclude examination of whether "sacred cows" are cost-beneficial. Interest groups such as consumers may be excluded from the evaluation, and evaluation designs may prevent examination of less costly or more cost-beneficial alternatives. Valuing program outcomes as lifetime earnings can perpetuate social and political prejudices in favor of the genders, ethnicities, economic classes, and age groups that are paid at higher rates for the same work. Excluding volunteered and donated resources from cost evaluation can prevent programs from being replicated in areas that cannot afford to provide similar amounts of resources without pay.

Session Title: Learning About Context Through Appreciative Inquiry
Demonstration Session 526 to be held in Wekiwa 5 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Tessie Catsambas, EnCompass LLC, tcatsambas@encompassworld.com
Abstract: The life of evaluators and evaluation clients can be deeply enriched through the application of Appreciative Inquiry (AI) to evaluation. This highly interactive session will include a short exercise that uses AI to explore context in evaluation and to help clarify desirable program outcomes. The session will use an appreciative process and then build on the data that emerges from that exercise. It will also provide examples of AI applications in different sectors and contexts, as well as variations and options in applying AI (domestic and international organizations, government and nonprofits, at the community, organizational, national, and international levels). The session will show how applying Appreciative Inquiry to evaluation can help participants learn by clarifying the goals and purpose of evaluation, engaging stakeholders in exciting new ways, broadening participation, deepening the cultural competence of evaluation, bringing a whole-systems view to evaluation, and, ultimately, building evaluation and organizational capacity.

Session Title: Learning From Impacts and Cost Data
Multipaper Session 527 to be held in Wekiwa 6 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Nadini Persaud, University of West Indies, nadini.persaud@cavehill.uwi.edu
An Innovative Tool for Collecting Personnel Costs for Use in Economic Evaluations
Presenter(s):
Justin Ingels, University of Georgia, ingels@uga.edu
Phaedra Corso, University of Georgia, pcorso@uga.edu
Abstract: The objectives of this research are to conduct a prospective economic evaluation, including both benefit-cost analysis and cost-effectiveness analysis, of a program known to be efficacious in preventing substance use among rural African American youth: the Strong African American Families-Teen program. As a first step in this research, we developed an innovative electronic form for collecting personnel costs alongside implementation of the randomized controlled trial. Using this tool, we found that personnel costs for the treatment arm were 4-31% higher than for the control arm, depending on the personnel category under consideration. The prospective nature of our data collection improves data quality and accuracy and makes it possible to clearly delineate research-specific costs from programmatic ones. It is our hope that this tool and approach will be adopted by other economic evaluation researchers in the field who collect programmatic costs.
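As a minimal illustration of the arithmetic behind such a comparison (the function, category names, and dollar figures below are hypothetical placeholders, not the authors' tool or data), the treatment-versus-control percentage difference can be computed per personnel category:

def percent_higher(treatment_cost: float, control_cost: float) -> float:
    """Percent by which the treatment arm's cost exceeds the control arm's."""
    return 100.0 * (treatment_cost - control_cost) / control_cost

# Hypothetical per-category personnel costs (treatment, control), in dollars.
personnel_costs = {
    "facilitators": (52000.0, 50000.0),
    "data_collectors": (34000.0, 26000.0),
}

for category, (treatment, control) in personnel_costs.items():
    print(f"{category}: {percent_higher(treatment, control):.0f}% higher in the treatment arm")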

Session Title: Employing Case Study Methodology in PreK-12 Settings
Multipaper Session 528 to be held in Wekiwa 7 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Qualitative Methods TIG and the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Eric Barela,  Partners in School Innovation, ebarela@partnersinschools.org
Looking Into the Black Box: Case Studies of the Reading First Program
Presenter(s):
Helene Jennings, ICF Macro, helene.p.jennings@macrointernational.com
Elaine Pierrel, Pierrel Associates, eepierrel@aol.com
Abstract: Macro International completed three years of evaluating the Reading First program in the State of Maryland. Quantitative data had been amassed on student achievement on the state reading test; analyses were conducted to determine disaggregated results by subgroup; and educator surveys measured knowledge of early literacy instruction, teachers' attitudes, and the effectiveness of the professional development efforts. To better understand the dynamics behind the program's differing results in urban, suburban, and rural school settings, six case studies were undertaken. A purposive sample of varied environments was selected to explore reasons for success, issues associated with classroom implementation, local leadership, expansion of the program beyond the study schools and grades, effectiveness for ELL students, and sustainability. The presentation will focus on design, implementation, and lessons learned from use of the case study methodology.
Measuring Progress in Educating Marginalized Students at One Alternative School: Issues, Efforts, and Realities
Presenter(s):
Brianna Kennedy, University of Southern California, blkenned@usc.edu
Abstract: This paper contributes to the 2009 conference theme by examining educational evaluation in the specific context of K-12 schools that serve expelled students and other students exhibiting maladaptive behavior. Alternative schools catering to students with poor school performance must contend with a host of problems that affect students' abilities to learn and to demonstrate their learning. Schools that meet the needs of these children address the social, emotional, and behavioral issues that impact learning, as well as the alarming skill gaps many of these students have. Typical means of establishing a school's success, which rely primarily on annually administered standardized tests, do not capture the progress made in alternative schools serving this population of students. Utilizing case study data from 40 interviews and more than 75 hours of observation, this paper examines assessment efforts at one such school in order to help inform the discussion of relevant accountability practices in alternative school contexts.

Session Title: Alcohol, Drug Abuse and Mental Health TIG Business Meeting
Business Meeting Session 529 to be held in Wekiwa 8 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
TIG Leader(s):
Diana Seybolt, University of Maryland Baltimore, dseybolt@psych.umaryland.edu
Margaret Cawley, National Development and Research Institutes Inc, cawley@ndri-nc.org

Session Title: The Evaluative Organization: A Call for Business and Industry to Evolve
Multipaper Session 530 to be held in Wekiwa 9 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Business and Industry TIG
Chair(s):
Jennifer Martineau, Center for Creative Leadership, martineauj@ccl.org
Abstract: Attempts to distinguish between an evaluative organization and a learning organization are in their infancy today. The tendency for evaluation scholars to blend the two concepts or to refer to evaluation as learning and learning as evaluation limits the explication of both concepts. Although a learning organization actively captures, transfers, and utilizes relevant knowledge, it does not necessarily collect information to determine the merit, worth, or significance of a strategic initiative or business process and its contribution toward improved organizational performance. To arrive at this evaluative conclusion, something more is required beyond pure learning. The first paper outlines the three levels of evaluation in organizations and explores the need for organizations to move beyond learning and instill a culture of evaluation into the organization. The second paper presents a case study exploring how learning is enhanced through evaluating the corrective action processes utilized in the nuclear power industry.
Moving Beyond Learning: The Evaluative Organization Imperative
Wes Martz, Kadant Inc, wes.martz@gmail.com
Although an evaluative organization is a learning organization, a learning organization is not always evaluative. The evaluative organization "adds value" to the learning organization concept by assessing the extent to which the knowledge acquired by or integrated into the organization is worthwhile and used to improve organizational effectiveness. Hence, an evaluative organization is a learning organization that instinctually reflects on its actions and external environment and continuously improves because of those reflections. In other words, the modus operandi of an evaluative organization is its incorporation of the evaluative attitude as an implicit element of organizational culture, moving well beyond the explicit acknowledgement of a commitment to evaluation or simply doing evaluations. This presentation outlines the three levels of evaluation in organizations and explores the need for organizations to move beyond learning and instill a culture of evaluation into the organization to maximize performance and improve organizational effectiveness.
How Effective is Our Learning? Evaluating the Corrective Action Process at United States Nuclear Power Plants
Otto Gustafson, Western Michigan University, ottonuke@yahoo.com
The U.S. commercial nuclear power industry is required by the Code of Federal Regulations to implement a Corrective Action Process to evaluate (i.e., determine the significance of) conditions adverse to quality at each nuclear power plant (e.g., degraded equipment, processes, or procedures). Through the Corrective Action Process, organizations capture, share, and utilize relevant evaluation results. The Corrective Action Process is thus a large part of how nuclear plant personnel learn and an integral contributor to sustainable organizational learning. Moving beyond learning to evaluate how the Corrective Action Process contributes to organizational effectiveness is an important piece of how the evaluation transdiscipline can improve business and industry. A case study is used to demonstrate how several U.S. commercial nuclear power plants evaluate and learn from their Corrective Action Processes. Recommendations for systemic improvement are put forward as a call for nuclear plants to evolve into evaluative organizations.

Session Title: Developing a Masters Degree Program in Collaboration With a Business School
Think Tank Session 531 to be held in Wekiwa 10 on Friday, Nov 13, 10:55 AM to 11:40 AM
Sponsored by the Teaching of Evaluation TIG
Presenter(s):
Leanne Kallemeyn, Loyola University Chicago, lkallemeyn@luc.edu
Discussant(s):
David Ensminger, Loyola University Chicago, densmin@luc.edu
Terri Pigott, Loyola University Chicago, tpigott@luc.edu
Abstract: University training programs for evaluators seem to be declining (Engle, 2004), and certificate-based training programs are emerging (e.g., The Evaluator's Institute). Through debates on certification, credentialing, and accreditation, the field has questioned the appropriateness of various approaches to training (Worthen, 1999). At the same time, the field of evaluation has seen a growth in performance measurement (e.g., McDavid & Hawthorn, 2006; Newcomer, 1997). Within this context, Loyola University Chicago School of Education is in the process of developing a Masters Degree in Program Evaluation in collaboration with the School of Business for students in education, social services, and industry. Faculty members chose to integrate an emphasis on human performance technology (HPT) with program evaluation. We will provide background in HPT. We will then focus the discussion on the similarities and differences between HPT and program evaluation, and the advantages and challenges of integrating program evaluation and HPT.
