Session Title: Starting and Succeeding as an Independent Evaluation Consultant
Panel Session 592 to be held in Capitol Ballroom Section 1 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Jennifer E Williams, JE Williams and Associates LLC, jew722@zoomtown.com
Amy Germuth, Compass Consulting Group LLC, agermuth@mindspring.com
Discussant(s):
Michael Hendricks, Independent Consultant, mikehendri@aol.com
Abstract: Independent consultants will share their professional insights on starting and maintaining an independent evaluation consulting business. Panelists will describe ways of building and maintaining client relationships and share their expertise related to initial business set-up and lessons they have learned. Discussions will include the pros and cons of having an independent consulting business, the various types of business structures, methods of contracting and fee setting, and the personal decisions that come with having your own business. They will also examine some consequences of evaluation in the context of independent consulting in diverse settings. The session will include ample time for audience members to pose specific questions to the panelists.
Session Title: Using Logic Models to Support Organizational Alignment, Evaluation, and Learning: One Organization's Journey Toward a Culture of Evaluation
Panel Session 593 to be held in Capitol Ballroom Section 2 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the College Access Programs TIG
Chair(s):
Michelle Jay, University of South Carolina, jaym@gwm.sc.edu
Keren Zuniga McDowell, Citizen Schools, kerenzuniga@citizenschools.org
Discussant(s):
Michelle Jay, University of South Carolina, jaym@gwm.sc.edu
Abstract: This panel will discuss how one multi-program, multi-site organization used logic models to emerge from an organizational culture where programs operated in isolation, without cross-program outcomes measurement or sharing of data. The central theme of the panel will revolve around the development and use of a series of logic models, which were created collaboratively with organization stakeholders to ensure comprehensiveness and depth of understanding. Levels of success and lessons learned will be reviewed in three categories: strategic alignment of programs' measurement and outcomes; practical implementation of an effective and efficient evaluation policy; and organizational growth and capacity building resulting from evaluation findings. Examples will be provided as to how logic models were used to align program theory and practice, inform evaluation design, and define program impact.
Session Title: Assessment and Evaluation in Higher Education: Administrative and Policy Perspectives
Multipaper Session 594 to be held in Capitol Ballroom Section 3 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Jeanne Hubelbank, Independent Consultant, jhubel@evalconsult.com
Discussant(s):
Beverly Parsons, InSites, beverlyaparsons@aol.com
Session Title: Relief in Sight: A Systems-Thinking Application of Self-Determination Theory-based Logic Models to Modify the Effects of High Stakes Testing
Demonstration Session 595 to be held in Capitol Ballroom Section 4 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Presenter(s):
Deborah Wasserman, The Ohio State University, wasserman.12@osu.edu
Abstract: Whether in education, health care, mental health, or any other human service system, high stakes testing is a double-edged sword. As evidenced by No Child Left Behind, testing for accountability purposes can both improve and diminish program quality. This demonstration presents the use of Self-Determination Theory-based logic models as an approach that reconciles accountability with quality improvement. Based on a systems-thinking approach, these models create a means for holding human service systems responsible for both outcomes and the well-being of the individuals the outcomes affect. Data from the evaluation of a comprehensive out-of-school program illustrate how such data can be collected and how useful the results can be. In addition to the theoretical explanation and exemplar, participants will be provided with tools for constructing models specific to their own evaluations, for selecting and using measurement instruments, and for analyzing the data.
Session Title: Conversation Hour With the 2008 AEA Award Winners
Panel Session 596 to be held in Capitol Ballroom Section 5 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the AEA Conference Committee
Chair(s):
James Altschuld, The Ohio State University, altschuld.1@osu.edu
Lois-ellin Datta, Datta Analysis, datta@ilhawaii.net
Abstract: This is a unique session begun at AEA 2007. It provides an opportunity for AEA members to meet and interact with the 2008 national award winners. What insights and perceptions about the field of evaluation have they gained from their work in it? Attendees at the session will be able to discuss with the awardees the factors (mentorships, lessons learned, special projects, etc.) in their careers that made a major impression on them and enhanced their efforts and productivity. Such discussion should be illuminating and informative for all members of the Association.
Session Title: Multicultural Issues in Public Health Evaluation
Multipaper Session 597 to be held in Capitol Ballroom Section 6 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Tamara Bertrand Jones, Florida State University, tbertrand@fsu.edu
Session Title: Service-Learning and Civic Engagement: Framing the Evaluation Issues
Panel Session 598 to be held in Capitol Ballroom Section 7 on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Annalisa Raymer, University of Alaska, afalr@uaa.alaska.edu
Abstract: Service-learning is a concept that involves engaging in community service and learning subjects and dispositions (attitudes, values, etc.) related to being a citizen in a democratic society. A construct that is sometimes viewed as 'an amorphous concept that defies rigid definitions and universal understanding' (Shumer, 1993), service-learning is often defined by its context. Differing contexts create havoc for evaluators because they must continuously negotiate goals, purposes, processes, and outcomes. The goal of this panel is to frame diverse issues involved in conducting evaluations of service-learning and civic engagement. Among the challenges to be discussed are: 1) delineating both the terms and actions of the programs; 2) issues of fit and effectiveness - how program activities lead to measures of effectiveness and quality; and 3) considerations in selecting evaluation approaches that match social and civic outcomes, from participatory approaches to case studies and individual systems of assessment.
Roundtable Rotation I: Can Second Life be a Useful Evaluative Tool in Real Life?
Roundtable Presentation 599 to be held in the Limestone Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
Stephen Hulme, Brigham Young University, stephen_hulme@yahoo.com
Tonya Tripp, Brigham Young University, tonya.tripp@byu.edu
Abstract: Second Life, a popular Multi-User Virtual Environment, provides many technological capabilities that were never before possible. News stations (CNN), professional organizations (AECT among others), educators, businesses (Wells Fargo), and vendors (Lexus) have recognized the benefits of this tool, but evaluators have yet to jump on board. The capabilities of Second Life should not go overlooked by evaluators; there are many tools that facilitate new and exciting evaluations, and increase flexibility and capability in our current evaluations. These capabilities include synchronous discussion from anywhere in the world and the option to capture (video record) conversations, meetings, presentations, focus groups, etc., which will enable evaluators to do things they've never done before. In addition to connecting with their current audience, evaluators will be able to reach an entirely different demographic. This roundtable discussion will explore the pros and cons of using Second Life as an evaluative tool.
Roundtable Rotation II: New Tools for the Trade: The Role of Interactive Technology in Training the Next Generation of Evaluators
Roundtable Presentation 599 to be held in the Limestone Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Presenter(s):
SaraJoy Pond, Brigham Young University, sarajoypond@gmail.com
David Williams, Brigham Young University, dwilliams@byu.edu
Abstract: What roles do simulations, expert systems, video analysis tools, and other forms of interactive technology play in the training of new evaluators? What role could they play? How can we integrate real-world experience into the predominant one-semester or one-year course that constitutes all the training most new evaluators receive? What solutions have been offered in this area? Where do we go from here? This roundtable will feature a presentation of a new evaluation tool, an exploration of its features and the results of pilot testing, and an open discussion about possible implications and future directions for technology in training new evaluators.
Roundtable: What Works, Effective Recidivism Reduction and Risk-Focused Prevention Programs: A Compendium of Evidence-Based Options for Preventing New and Persistent Criminal Behavior
Roundtable Presentation 600 to be held in the Sandstone Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Crime and Justice TIG
Presenter(s):
Roger Przybylski, RKC Group, rogerkp@comcast.net
Abstract: This roundtable is based on the presenter's 2008 publication titled What Works, Effective Recidivism Reduction and Risk-Focused Prevention Programs: A Compendium of Evidence-Based Options for Preventing New and Persistent Criminal Behavior. Based on a comprehensive and systematic review of the literature, the publication discusses the impact of incarceration on crime, what works to reduce recidivism, what works to prevent the onset of delinquent and criminal behavior, and key issues concerning effective program implementation. Methods employed, key findings, and lessons learned from the research will be described during the presentation.
Roundtable Rotation I: Program Evaluation and Public School Districts: Facing the Challenges
Roundtable Presentation 601 to be held in the Marble Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Chandra Johnson, Clayton County Public Schools, cfjohnson@clayton.k12.ga.us
Qiana Cutts, Clayton County Public Schools, qmcutts@clayton.k12.ga.us
Joe Nail, Clayton County Public Schools, jnail@clayton.k12.ga.us
Stephanie Beane, Clayton County Public Schools, snbeane@clayton.k12.ga.us
Abstract: In the age of accountability supported by No Child Left Behind (2001) mandates, school districts across the United States are relying more and more on the evaluation of teaching and learning, academic programs, best practices, and the like. As such, school districts' research and accountability departments are faced with an urgent need to engage in constant program evaluation. Some school districts' desire to have 'microwave' program evaluations detracts from research and accountability departments' opportunities to provide "methodically sound" evaluations that "produce credible, comprehensive, [and] context-sensitive" results. Oftentimes, research and accountability specialists are transformed from evaluators to evaluation teachers while working with district personnel who possess a limited understanding of evaluation standards and procedures. In our system, we have helped to enhance evaluation knowledge among our stakeholders while increasing our own expertise. This presentation will expound on the issues and challenges around conducting evaluation in a large urban school district and outline measures that were taken to build stakeholders' program evaluation competency.
Roundtable Rotation II: Evaluation of the Planning and Implementation Efforts for Year One Of an Urban High School Reform Project
Roundtable Presentation 601 to be held in the Marble Boardroom on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Sharon Ross, Founder's Trust, sross@founderstrust.org
Gibbs Kanyongo, Duquesne University, kanyongog@duq.edu
Rodney Hopson, Duquesne University, hopson@duq.edu
Jessica Adams, Duquesne University, adams385@duq.edu
Carol Brooks, Founder's Trust, cxbrooks@founderstrust.org
Elizabeth Maurhoff, Founder's Trust, emaurhoff@founderstrust.org
Abstract: A school district in the northeast United States is in the first year of a multifaceted reform plan for achieving high school excellence. This evaluation focuses on the planning and initial implementation of three of the reform efforts in the district's high schools, which include a program for students falling behind in their reading level, improved career and technical education programming, and a program to assist students through the critical transition that occurs in the 9th grade. The evaluation is unique in two ways: its use of a culturally responsive approach to ground the project in the context of the city in which the district is located and its use of a more democratic approach as a way to ensure that the voices of those being impacted by reform are heard and incorporated into decisions the district makes as a result of this evaluation.
Session Title: Evaluation Capacity Building: Tools Emerging From Practice
Multipaper Session 602 to be held in Centennial Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG and the Government Evaluation TIG
Chair(s):
Maria Jimenez, University of Illinois Urbana-Champaign, mjimene2@uiuc.edu
Discussant(s):
Jennifer Martineau, Center for Creative Leadership, martineauj@ccl.org
Session Title: Longitudinal and Growth Curve Analysis
Multipaper Session 603 to be held in Centennial Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Patrick McKnight, George Mason University, pmcknigh@gmu.edu
Discussant(s):
Frederick Newman, Florida International University, newmanf@fiu.edu
Session Title: Promoting Policy-Relevant Impact Evaluation for Enhanced Development Effectiveness
Panel Session 604 to be held in Centennial Section C on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Presidential Strand and the International and Cross-cultural Evaluation TIG
Chair(s):
Jim Rugh, Independent Consultant, jimrugh@mindspring.com
Abstract: The results agenda adopted by many development agencies has driven a desire for stronger evidence to be provided by impact evaluation. At the same time, there have been calls from some quarters for impact evaluation to become more rigorous. Various agencies have been involved in promoting different initiatives to expand coverage by quality impact evaluations, but have been aware of issues regarding both methodological debates and questions of ownership. The presenters in this session provide differing perspectives on the development impact evaluation debate: those of a bilateral agency, a developing-country evaluator, and an insider in the new initiatives.
Session Title: Research on Evaluation Methods
Multipaper Session 605 to be held in Centennial Section F on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
John LaVelle, Claremont Graduate University, john.lavelle@cgu.edu
Session Title: Extension Education Evaluators Adapt “A Checklist for Building Organizational Evaluation Capacity” to Extension Contexts
Panel Session 606 to be held in Centennial Section G on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Heather Boyd, Virginia Polytechnic Institute and State University, hboyd@vt.edu
Discussant(s):
Michael Lambur, Virginia Polytechnic Institute and State University, lamburmt@vt.edu
Abstract: Extension education organizations have in the past few years made a commitment to support evaluation capacity building (ECB) for organizational, workforce, and program improvement. These organizations have provided budgets and administrative support to the ECB enterprise as well as hired full- and part-time evaluators and evaluation capacity builders. The Extension system is a dynamic laboratory for evaluation capacity building for several reasons, including the pressures on it to show public value for the tax monies that support it. Panelists for this presentation take elements of 'A Checklist for Building Organizational Evaluation Capacity' by King and Volkov (2007) and apply and/or adapt the items in the checklist to their extension-based organizational realities.
Session Title: The Core Concepts of Applied Cost-Effectiveness and Cost-Benefit Analysis
Skill-Building Workshop 607 to be held in Centennial Section H on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Presenter(s):
Patricia Herman, University of Arizona, pherman@email.arizona.edu
Brian Yates, American University, brian.yates@mac.com
Abstract: To engage decision-makers who are charged with doing more for clients and taxpayers with dwindling private and public resources, evaluators increasingly need to measure and improve not just effectiveness but also cost-effectiveness. Because cost-effectiveness analysis (CEA) must start from the determination of effectiveness, an efficient approach is for evaluators to add measures of costs to their planned studies, thus allowing CEA (and, if effects are monetizable, cost-benefit analysis or CBA) to be performed. This workshop gives evaluators both conceptual foundations for the proper application of cost-effectiveness and cost-benefit analysis and concrete tools for cost and benefit assessment. Core concepts taught through hands-on examples include the appropriate counterfactual, the perspective of the analysis and its implications, and the identification and measurement of the appropriate costs, effectiveness, and benefits so that the cost-effectiveness and cost-benefit of alternative programs can be compared and optimized. Specific assessment tools are referenced as well.
Session Title: Measuring Use and Influence in Large Scale Evaluations
Multipaper Session 608 to be held in Mineral Hall Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Susan Tucker, E & D Associates LLC, sutucker@sutucker.cnc.net
Session Title: Quasi-Experimental Research Designs
Skill-Building Workshop 609 to be held in Mineral Hall Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Jason Siegel, Claremont Graduate University, jason.siegel@cgu.edu
Eusebio Alvaro, Claremont Graduate University, eusebio.alvaro@cgu.edu
Abstract: Quasi-experimental designs allow for assessment when an experimental design is not possible. Our 90-minute session will cover 10 different quasi-experimental designs, including regression-discontinuity analyses, counterbalanced designs, and multiple time-series designs. After the quasi-experimental designs are introduced, participants will be given various situations and asked to configure the best possible quasi-experimental design for each.
Session Title: Bringing the Wisdom of Elders to Indigenous Evaluation
Think Tank Session 610 to be held in Mineral Hall Section C on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Indigenous Peoples in Evaluation TIG
Presenter(s):
Joan LaFrance, Mekinak Consulting, joanlafrance1@msn.com
Rosemary Christensen, University of Wisconsin Green Bay, christer@uwgb.edu
Abstract: The goal of this Think Tank is to explore and define what it means to fully engage Elders in program evaluation by engaging with Elders at the annual AEA conference in a relaxed discussion format. Three Elders who are recognized cultural experts will join evaluators interested in learning how to incorporate Elder wisdom and knowledge in evaluations conducted in Indigenous communities. Participants will join in a circle to share experiences and explore ideas that can be tested in various evaluation settings. We hope to stimulate increased partnerships with Elders and to learn more fully from their wisdom while engaging them in our work.
Session Title: Measurement Strategies and Evaluation Approaches in Substance Abuse and Mental Health
Multipaper Session 611 to be held in Mineral Hall Section D on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Roger Boothroyd, University of South Florida, boothroy@fmhi.usf.edu
Session Title: Cross-Culturally Competent Evaluation - Similar Vision But Different Lenses?
Think Tank Session 612 to be held in Mineral Hall Section E on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Nancy Csuti, The Colorado Trust, nancy@coloradotrust.org
Discussant(s):
Kien Lee, Association for the Study and Development of Community, kien@capablecommunity.com
LaKeesha Woods, Association for the Study and Development of Community, lwoods@capablecommunity.com
Abstract: The report Case Studies of Cross-Culturally Competent Evaluations is designed to present the reader with real-life situations that involve evaluation in a diverse world. It is a series of situations that evaluators, funders, and community members have faced that challenged their notion of cross-culturally competent evaluation. The case studies are presented, followed by responses to questions posed to various individuals. Participants in the think tank will receive a draft of this document and hear a short presentation of key themes. Then, breaking into several small groups, ideally according to profession, participants will discuss the following questions: 1. What is the role of cross-cultural competency in acceptable standards for an evaluation? 2. Who is responsible for assuring evaluations are culturally appropriate? 3. Should funders be expected to pay more for cross-culturally appropriate evaluations? The insights and opinions held by the different groups will be shared when the session reconvenes.
Session Title: Incorporating Qualitative Inquiry into Complex, Multi-Site Evaluation
Multipaper Session 613 to be held in Mineral Hall Section F on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Janet Usinger, University of Nevada Reno, usingerj@unr.edu
Discussant(s):
Janet Usinger, University of Nevada Reno, usingerj@unr.edu
Roundtable Rotation I: Measuring Capacity Building Through Evaluation: Impacting a Foundation's Decision-Making
Roundtable Presentation 615 to be held in the Slate Room on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Nakia James, Momentum Consulting & Evaluation LLC, momentumevaluations@yahoo.com
Michelle Bakerson, Momentum Consulting & Evaluation LLC, momentumevaluations@yahoo.com
Abstract: The W.K. Kellogg Foundation, one of the most recognized and prestigious foundations in the world, continues to empower organizations to successfully realize and achieve their missions. Though the organizations offer diverse opportunities to a variety of demographics, each has been supported with both grant funds and professional development resources provided by the Foundation. Subsequently, the W.K. Kellogg Foundation contracted Momentum Evaluations, L.L.P., to conduct a formative evaluation of sixteen organizations, with a focus on their capacity-building efforts. The purpose was to 1) determine the extent to which the identified primary organizations are achieving capacity building, 2) obtain a clear description of these activities, 3) assess the progress they are making toward their capacity-building efforts, and 4) document their use of additional sources of support. This evaluation was designed to be a tool for facilitating grantee improvement and future decision-making. Accordingly, Collaborative Evaluation and Decision-and-Accountability approaches were selected.
Roundtable Rotation II: The Role of Foundation Trustees and Emerging Evaluation Practices
Roundtable Presentation 615 to be held in the Slate Room on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Samantha Nobles-Block, FSG Social Impact Advisors, samantha.nobles-block@fsg-impact.org
Abstract: What does “evaluation” mean to foundation trustees? How does the trustee perspective shape the way a foundation uses evaluation? This session will explore FSG's recent dialogue with trustees to understand how they think about evaluation today, uncover what they think is missing from current approaches, and gauge reactions to emerging practices. FSG will share excerpts from materials targeted to foundation trustees that discuss the range of evaluation approaches available in language adapted to their concerns. We hope that these materials also provide insight to evaluators and foundation staff that will enable them to better understand the evaluative needs and expectations of foundation boards. Roundtable participants will be asked to talk about the board-level dynamic they have observed as evaluators or as foundation staff and reflect on the role of trustees in guiding the use of evaluation.
Session Title: Flexibility and Creativity in Evaluation Methods: This Wasn't in the Textbook!
Multipaper Session 616 to be held in the Agate Room Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Joel Nadler, Southern Illinois University at Carbondale, jnadler@siu.edu
Session Title: Advancing Evaluation in Organizational Settings
Multipaper Session 617 to be held in the Agate Room Section C on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Business and Industry TIG
Chair(s):
Otto Gustafson, Western Michigan University, ottonuke@yahoo.com
Abstract: Evaluation is a regular activity in organizations, yet few managers or business professionals refer to their work as evaluation. The application of serious evaluation in business and industry has traditionally focused on personnel-related aspects of organizations, such as training and human resource development. In this session, the presenters explore the use of evaluation in business beyond these areas. The first paper considers the characteristics of evaluative organizations, those that have integrated both evaluation thinking and practice into their culture, and discusses what makes organizations of this type 'something more' than learning organizations. The second paper introduces a criteria of merit checklist for evaluating organizational effectiveness; the tool is designed for use by professional evaluators and management practitioners to assess the overall effectiveness of an organization. The final paper examines how process improvements can be evaluated by organizations.
The Evaluative Organization: Something More than Learning
Amy Gullickson, Western Michigan University, amy.m.gullickson@wmich.edu
Evaluations performed in an organizational setting tend to focus on specific internal areas such as process improvement, quality control, or employee performance. Some organizations, however, have integrated evaluation into their culture in such a way that assessing merit, worth, and/or significance is an integral part of every employee's daily work. This presentation will outline the characteristics of these 'evaluative' organizations, which are something more than learning organizations. Discussion will include the barriers and enablers to developing an evaluative culture, evaluation anxiety, and the cross-disciplinary nature of this kind of culture.
Evaluating Organizational Effectiveness: A New Perspective
Wes Martz, Kadant Inc, wes.martz@gmail.com
The current practices of evaluating organizational effectiveness are lacking on a number of fronts, not the least being the struggle to explain the construct either theoretically or empirically. Numerous suggestions have been made to improve the assessment of organizational effectiveness. However, issues abound related to criterion instability, conflicting criteria, official goals versus operative goals, weighting criteria of merit, suboptimization, boundary specifications, narrowly defined value premises, inclusion of multiple stakeholders, ethical considerations, and the struggle to synthesize evaluative findings into an overall conclusion. This presentation will explain the failure to develop satisfactory approaches to evaluate organizational effectiveness, propose a checklist for practicing evaluators and managers to use when evaluating organizational effectiveness, and illustrate the practical application of the organizational effectiveness evaluation checklist.
Evaluating Process Improvement: An Organizational Scorecard Approach
Otto Gustafson, Western Michigan University, ottonuke@yahoo.com
Continuous improvement programs are designed to help organizations maximize their effectiveness by engaging employees at all levels to improve their daily processes through waste elimination and innovation. But how can organizations understand whether and to what extent continuous improvement is occurring? One method used to gauge business unit performance in the area of continuous improvement is to evaluate and score process improvements, quantify results, and compare against set goals. This paper examines how one Fortune 500 company evaluates and drives process improvements in the context of the nuclear power industry. Inherent programmatic strengths and weaknesses will be discussed. In addition, recommendations to strengthen and expand process evaluation to other organizational contexts will be offered.
Session Title: Intermeshing Cogs at Work: Experiences and Lessons Learned From State and Local Educational Program Evaluations
Panel Session 618 to be held in the Granite Room Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Kathleen Toms, Research Works Inc, katytoms@researchworks.org
Abstract: This panel consists of members of Research Works, Inc., an independent research and evaluation company that consistently evaluates programs at both the State and local levels. As principal investigators, research associates, and research assistants contributing to multiple evaluations at both levels simultaneously, we have been struck by the lack of coordination between these two levels of evaluation. This panel will discuss whether these levels should be collaborating with and informing each other. Is the State evaluation merely a meta-evaluation of the local studies? Should local evaluators be collecting the data that State evaluators need even if it means their implementation evaluations cannot be completed? We propose some ways to facilitate a more coordinated approach and will ask the audience for their experiences in navigating this situation from either or both perspectives, and to answer the question: Is there an ideal interaction across system levels of evaluation?
Session Title: Translational Health Research: Implications for Evaluation Theory From a Practice Imperative
Panel Session 619 to be held in the Granite Room Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Health Evaluation TIG
Chair(s):
James Dearing, Kaiser Permanente, james.w.dearing@kp.org
Abstract: In theorizing about the evaluation of social and health programs, evaluation theorists have tended to focus on issues of internal validity (the effect of program components on outcomes) more so than on external validity (replication of program effects across sites) or program diffusion (broad spread of a program across many practice sites). The current Federal emphasis on late-stage translation of research results to affect behavior in practice settings is an opportunity for evaluation researchers to prioritize the study of program external validity, as advocated by evaluation theorists (Cronbach, Cook, and Shadish), and program diffusion, as epitomized by Everett Rogers. Here, we focus on the challenges of an external validity or diffusion design perspective, identify key variables of interest to evaluators and their clients, and introduce tools that can be used to formatively assess program potential for external validity and diffusion. We provide examples of translational research from the nation's largest nonprofit integrated healthcare system.
Session Title: Preparing Future Evaluators: Approaches, Theories, and Needs
Multipaper Session 620 to be held in the Granite Room Section C on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Rick Axelson, University of Iowa, rick-axelson@uiowa.edu
Session Title: Improving Evaluation Policy by Focusing State, County, and Community Social Service Providers on Results-Oriented Services
Panel Session 621 to be held in the Quartz Room Section A on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Gordon Hannah, Finger Lakes Law and Social Policy Center Inc, gordonjhannah@gmail.com
Abstract: New York State passed a law in 2007 requiring all services provided by state agencies to include outcome or performance provisions. This multi-paper presentation will describe an intervention designed to help nine county social service departments meet the requirements of this new law. The intervention attempted to achieve this goal by promoting the systematic use of evaluation and continuous quality improvement processes to achieve desired outcomes. Such systematic use was encouraged through changes to policies and practices regarding contracting and monitoring third-party providers. The papers in this presentation will (1) describe the evaluation policies in place at both the state and county level prior to the intervention; (2) describe the goals and design of the intervention, and how it played out; (3) describe the evaluation policies that changed as a result of our intervention; and (4) discuss factors that impacted the effectiveness of the intervention.
Roundtable Rotation I: Developing Advocacy Evidence Systems and More Systematic Approaches for Gathering and Sharing Credible Advocacy Evidence: Lessons Learned from International Non-governmental Organizations (NGOs)
Roundtable Presentation 622 to be held in the Quartz Room Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Carlisle Levine, CARE, clevine@care.org
Abstract: Operational international non-governmental organizations (NGOs) offer those advocating on behalf of global development issues a unique value: established country presences that provide access to on-the-ground knowledge. A number of international NGOs, some with support from private foundations, are seeking to take greater advantage of this unique value by strengthening their advocacy evidence systems and developing more systematic approaches for gathering and sharing credible advocacy evidence in order to influence policy makers, often within the U.S. government but also at all policy-making levels. These international NGOs have been laying the groundwork for better systems and more systematic approaches for capturing and sharing basic project and program data, staff learning, and harder evidence in order to define policy problems and identify policy solutions. In this roundtable, a subset of these NGOs will share their experiences to date: their definitions of the problem; their responses; and their challenges, lessons learned, and advances.
Roundtable Rotation II: Out of the Frying Pan and Into the Fire: When Evaluators Enter the World of Policy
Roundtable Presentation 622 to be held in the Quartz Room Section B on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Advocacy and Policy Change TIG
Presenter(s):
Elizabeth Autio, Northwest Regional Educational Laboratory, autioe@nwrel.org
Abstract: As evaluators, we pride ourselves on our unbiased, just-the-facts approach to data collection and reporting. However, this can make us feel disconnected from the real world of social programs; how often have you wondered whether your carefully crafted report is actually read or is just “another one for the shelf”? Yet sometimes clients ask us to make recommendations from our evaluation data; other times, we might take on projects that explicitly have a policy component. What happens when evaluators enter the world of policy? The opportunity to do so is exciting in its potential impact, but it also differs from our traditional role. What key things are different? Do we have the necessary content expertise? When and how should we exercise caution? This roundtable will start with a brief overview, drawing on examples from two recent projects in the Pacific Northwest. It will then open the floor to discussion.
Session Title: National Evaluations of School Wellness Policy and Programs
Multipaper Session 623 to be held in Room 102 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Discussant(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Abstract: Childhood obesity has increased rapidly in the past decade and now constitutes a serious epidemic in the US. Both government and the nonprofit sector have developed concerted efforts to address this problem. The Robert Wood Johnson Foundation has funded evaluations of these efforts. This session will describe three evaluations: the USDA's school wellness policy requirements; the efforts of the Clinton Foundation/American Heart Association to improve the school environment; and Arkansas Act 1220 of 2003, an ambitious and comprehensive effort to prevent childhood obesity through the schools. Discussion will focus on comparing evaluations for government versus nonprofit efforts through the schools.
Evaluating School District Wellness Policies: Methodological Challenges and Results
Jamie Chriqui, University of Illinois Chicago, jchriqui@uic.edu
Frank J Chaloupka, University of Illinois Chicago, fjc@uic.edu
Anna Sandoval, University of Illinois Chicago, asando1@uic.edu
In response to growing concerns about childhood overweight and obesity, Congress passed a law (P.L. 108-265) in 2004 requiring local education agencies participating in the National School Lunch Program to adopt and implement a wellness policy by no later than the first school day following June 30, 2006. The federal mandate included goals related to: (1) nutrition education, (2) physical activity, (3) reimbursable school meals, (4) nutrition guidelines for all competitive foods sold/served, and (5) implementation and evaluation. This presentation will review methodological challenges associated with collecting and evaluating a nationally representative sample (n=579) of wellness policies. Policies have been obtained via Web research with telephone follow-up from 504 districts (87%) and confirmed to not exist in 29 districts (5%). District-level factors (e.g., SES, race/ethnicity) associated with response status and response method (Web vs. telephone) will be described. Strategies for evaluating the variability in wellness policy content will be presented.
Assessing the Impact of the Healthy Schools Program: Preliminary Findings
Dennis Deck, RMC Research Corporation, ddeck@rmccorp.com
Audrey Block, RMC Research Corporation, ablock@rmccorp.com
The Healthy Schools Program is run by the Clinton Foundation and American Heart Association and funded by the Robert Wood Johnson Foundation. It helps schools improve access to healthier foods and increase physical activity opportunities. Schools receive onsite technical assistance and can access an online tool that helps them identify their status as a healthy school and develop a customized action plan. The goal of the evaluation, being conducted by RMC Research Corporation, is to help the Alliance and its partners understand how to better support schools with the implementation and maintenance of the intended policy and program changes and how changes might affect behaviors related to childhood obesity. This presentation will review baseline data that characterize the current state of schools' policies and action plans concerning nutrition and physical activity; students' current nutrition and physical activity behaviors; and students' Body Mass Indices.
The Impact of Arkansas Act 1220 of 2003: Findings to Date From a Comprehensive Evaluation
Martha Phillips, University of Arkansas, martha.phillips@arkansas.gov
James Raczynski, University of Arkansas for Medical Sciences, jmr@uams.edu
Arkansas Act 1220 of 2003 was among the first and most comprehensive legislative initiatives designed to address the growing rate of childhood obesity in the state. The Act included limited mandates but established mechanisms at the state and local levels to promote, if not ensure, changes in school environments to support healthy nutrition and physical activity choices by students. A comprehensive evaluation of the impact of the Act, grounded in behavior change theory and overseen by a multi-disciplinary research team, is being completed with funding provided by the Robert Wood Johnson Foundation. This presentation will provide a brief history of the Act, an overview of the evaluation and its conceptual framework, and a review of findings to date, including a discussion of school environmental and policy changes, changes in family and adolescent behaviors, and findings from the monitoring of potential unintended consequences (e.g., unhealthy diet behaviors, weight-based teasing).
Evaluation Challenges in Working with Foundation-Sponsored Grant Programs Versus Federally-Sponsored Grant Programs
Audrey Block, RMC Research Corporation, ablock@rmccorp.com
Dennis Deck, RMC Research Corporation, ddeck@rmccorp.com
RMC Research Corporation is the evaluator for the Healthy Schools Program, a school-based obesity prevention program that helps schools improve access to healthier foods and increase physical activity opportunities for students and staff. Schools may receive onsite technical assistance from a relationship manager and access the Healthy School Builder, an online tool that helps them identify their status as a healthy school and develop a customized action plan. The Robert Wood Johnson Foundation is the primary sponsor of the program and the evaluation. This presentation will discuss the evaluation challenges in working with schools to collect data without the normal compliance or accountability criteria that are present in federally-sponsored grant programs. These include lack of meaningful funding available to schools, low program buy-in (possibly related to lack of funding), confusion between the Healthy Schools Program and similar and competing initiatives, and lack of understanding about what program participation entails.
Session Title: The Randomized Controlled Trial in Your Evaluation Toolkit: A Candid Discussion
Panel Session 624 to be held in Room 104 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Human Services Evaluation TIG
Chair(s):
Jennifer Hamilton, Westat, jenniferhamilton@westat.com
Abstract: The randomized controlled trial (RCT) is widely considered the optimal study design to minimize bias and provide the most accurate estimate of the impact of a given intervention or program. However, the design and implementation of an RCT presents a unique set of challenges. In fact, without the proper attention, a researcher may unintentionally limit the study's internal validity (the extent to which the differences between the treatment and control groups are real rather than a product of bias) or its external validity (generalizability to a wider population). Therefore, this panel is intended to raise awareness of these issues and to provide a frank discussion of possible solutions.
Session Title: Systems Thinking for Curriculum Evaluation
Skill-Building Workshop 625 to be held in Room 106 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Systems in Evaluation TIG and the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Glenda Shoop, Pennsylvania State University, gshoop@psu.edu
Janice Noga, Pathfinder Evaluation and Consulting, jan.noga@stanfordalumni.org
Margaret Hargreaves, Abt Associates Inc, meg_hargreaves@abtassoc.com
Abstract: An educational program, and the curriculum that is at its center, is not self-contained. In actuality, educational programs are integrated, socio-technical systems that interact with the larger social, political, and organizational environment. If the function of curriculum evaluation is to make decisions about improvement as well as effectiveness, systems thinking can provide the broader perspective needed to understand the quality of what is going on. In this workshop, participants will learn how to apply systems analysis principles to the evaluation of educational curricula. The workshop will present two models for systems thinking currently used by the presenters to evaluate educational programs. Through a series of hands-on exercises, participants will be encouraged to draw on their own experience in curriculum evaluation as they are stepped through the processes of design, data collection, and analysis that underlie each model.
Session Title: Social Impact of the Arts
Multipaper Session 626 to be held in Room 108 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Evaluating the Arts and Culture TIG
Chair(s):
Ching Ching Yap, University of South Carolina, ccyap@gwm.sc.edu
Discussant(s):
Kathlyn Steedly, Academy for Educational Development, ksteedly@gmail.com
Session Title: Case Studies in Evaluation: United States Federal Agencies - Part 2
Multipaper Session 627 to be held in Room 110 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Government Evaluation TIG
Chair(s):
Samuel Held, Oak Ridge Institute for Science and Education, sam.held@orau.org
Discussant(s):
Susan Berkowitz, Westat, susanberkowitz@westat.com
Session Title: Peer Review: From Evaluating Science to Evaluating Science Policy
Panel Session 628 to be held in Room 112 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Isabelle Collins, Technopolis Ltd, isabelle.collins@technopolis-group.com
Abstract: Peer review is one of the main elements of the evaluator's toolkit when looking at the evaluation of science and technology. However, as the focus of evaluation has shifted to evaluating the policies and programs behind the research, the use of peer review has evolved with it. New forms and new uses are emerging, some of which stretch the principles beyond their original intentions, and take the ideas into areas beyond the field of RTD. This panel looks at some of these developments and their implications in the field of science, science policy, and the wider policy arena.
Session Title: How to Create Objective Evaluation Criteria for Complex Processes and Outcomes
Demonstration Session 629 to be held in Room 103 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Knut M Wittkowski, Rockefeller University, kmw@rockefeller.edu
Tingting Song, Rockefeller University, tsong01@rockefeller.edu
Abstract: This demonstration extends commonly used ranking and scoring instruments for univariate data (Mann-Whitney 1947) and censored data (Gehan 1965) to multivariate data, with applications in the evaluation of complex processes and outcomes. To reach a broad audience, many examples evaluate athletes and sports teams. Other examples will address medical problems, such as adverse experiences, side effects, and quality of life. The demonstration consists of three parts. The first part discusses the history of u-scores (Arbuthnot 1692) and extends u-scores to multivariate data using a simple representation of partial orderings (Deuchler 1914). The second part demonstrates how information about relationships between variables can be incorporated through (a) transforming data, (b) special partial orderings, and (c) combining partial orderings. The third part discusses computational and statistical aspects of non-parametric 'factor analysis'. Demonstrations will include spreadsheets (available from http://muStat.rockefeller.edu), the package 'muStat' (available from http://cran.r-project.org and http://csan.insightful.com/), and Web services available from http://muStat.rockefeller.edu.
Session Title: Evaluation Dashboards: Practical Solutions for Reporting Results
Demonstration Session 630 to be held in Room 105 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Presenter(s):
Veena Pankaj, Innovation Network Inc, vpankaj@innonet.org
Ehren Reed, Innovation Network Inc, ereed@innonet.org
Abstract: A driver would be at a loss if not for the valuable information imparted by his car's dashboard. Many nonprofit managers, without easy access to information about their organization's performance, find themselves feeling like a dashboard-less driver. Performance dashboards, popularized by knowledge managers and CIOs, are a natural ally for evaluators and provide a quick and efficient way for managers to gauge the performance of a specific program or an entire organization. This session will walk through the process of planning for and developing a performance dashboard and will spotlight two different dashboards created for nonprofit organizations.
Session Title: Policy and Practice Issues for Evaluators, Project Directors and the Community: Lessons Learned From the Intersection of Local and National Multi-site Evaluations
Panel Session 631 to be held in Room 107 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Sandra Ortega, National Data Evaluation Center, ortegas@ndec.us
Abstract: The panel focuses on how lessons learned from multi-level evaluations can impact policy and practice. Panel members have worked on numerous national projects either as local evaluators, national evaluators, or local project directors. The panel will identify the main challenges that evaluators face during multi-level projects and propose solutions to overcome them. They will also review strategies that have not worked for them in the past and examine why they believe these were unsuccessful. The panel will discuss the unique challenges presented by multi-level and multi-site evaluation projects: the importance of collaborative work between the multiple levels of stakeholders, how to make national data useful for local communities, how local evaluators can build an effective relationship with local community members, how national stakeholders can facilitate the work of local evaluators, and whether some evaluation practice models are more fitting than others for national demonstration projects.
Session Title: Training-the-Trainer: Building Evaluation Capacity at the United States Environmental Protection Agency
Demonstration Session 632 to be held in Room 109 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Yvonne Watson, United States Environmental Protection Agency, watson.yvonne@epa.gov
Terell Lesane, United States Environmental Protection Agency, lasane.terell@epa.gov
Abstract: In response to increasing demands for government accountability and the need to promote program improvement and organizational learning, the U.S. Environmental Protection Agency (EPA)'s Evaluation Support Division designed a series of Train-the-Trainer courses in Logic Modeling, Performance Measurement, Program Evaluation, and a Performance Management Primer for Managers that center on equipping Agency staff to deliver training and technical assistance to others. Course materials include training slides, exercises, case studies, and a 'script' complete with key talking points to aid the trainer with course delivery. This demonstration will walk conference participants through the Train-the-Trainer materials, highlighting aspects of the training that were successful or unsuccessful in EPA's organizational context. We will discuss how these and other training efforts have influenced the Agency's evaluation culture, helped develop a common program evaluation language, and shaped perceptions regarding evaluation.
Session Title: From Planning to Use: Methodological Considerations in Evaluating School-Based Programs
Multipaper Session 633 to be held in Room 111 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Loria Brown, Jackson State University, loria.c.brown@jsums.edu
Session Title: Models and Frameworks of Evaluation and Meta-Evaluation
Multipaper Session 634 to be held in Room 113 in the Convention Center on Friday, Nov 7, 1:35 PM to 3:05 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Rebecca Eddy, Claremont Graduate University, rebecca.eddy@cgu.edu