Session Title: Evaluation Anthropology Praxis Today: A Five Year Retrospective
Panel Session 662 to be held in Lone Star A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Presidential Strand
Chair(s):
Jacqueline Copeland-Carson, Copeland Carson & Associates, jcc@copelandcarson.net
Discussant(s):
David Fetterman, Fetterman & Associates, fettermanassociates@gmail.com
Rodney Hopson, Duquesne University, hopson@duq.edu
Abstract: Evaluation Anthropology Praxis: Charting a New Future
Session Title: Systems Perspectives on Using Logic Models to Improve Evaluation Quality
Panel Session 663 to be held in Lone Star B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Systems in Evaluation TIG and the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Abstract: Truth, Beauty and Justice have always been at the heart of logic models. The very notion of "logic" has embedded within it the idea of exposing the truth of an argument in elegant, beautiful ways. Justice is served by opening up and revealing embedded assumptions and values. But logic models do not always meet these ideals. To what extent does current practice in using logic models enhance the quality of evaluation? How should they be used? How critical are quality logic models to quality evaluation? What constitutes a quality logic model? This presentation features three speakers who have deeply considered these issues in different ways and in different parts of the world. They draw on these experiences and observations together with insights from systems approaches to evaluation.
Session Title: The Evaluative Journey: Implementing Evaluation Activities That Facilitate Ongoing Decision Making
Demonstration Session 664 to be held in Lone Star C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Paul St Roseman, Sakhu and Associates, pstroseman@sakhuandassociates.com
Abstract: As evaluation continues to develop as both a discipline and an essential process of organizational development, there is increased demand for evaluators to develop responsive and inclusive service delivery strategies. This demonstration presents approaches used to focus and frame evaluation efforts as an ongoing decision-making process. Topics that will be examined include: (1) administrative coaching as a pathway to develop, interpret, and utilize evaluation products, (2) co-authorship as a tool for data analysis, and (3) web-based resources as a tool for virtual collaboration. This presentation is most appropriate for evaluation practitioners who collaborate with administrators and their staff to design, implement, sustain, and utilize evaluation products.
Session Title: Multiple Sites, Multiple Layers, Multiple Players: Lessons From the Field on Keeping Quality High and Frustration Low
Panel Session 665 to be held in Lone Star D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Jacqueline Williams Kaye, Atlantic Philanthropies, j.williamskaye@atlanticphilanthropies.org
Abstract: Multi-site initiative evaluations face multiple layers of complexity rooted in factors such as variation in local cultures and contexts, the sheer number of entities involved, and the initiative design itself, among others. This session’s panel of experts will discuss strategies to address issues such as creating appropriate communication and coordination mechanisms across stakeholders; “right-sizing” the evaluation relative to the scope of the initiative and capacity of the players; balancing the desire for comparable, cross-site indicators with flexibility for local customization; when and how to make technical assistance available to local sites; and maintaining quality and integrity in the data collection, analysis, and reporting processes. The perspectives of evaluation staff from two large funding organizations, as well as evaluation professionals representing both national and local evaluator roles, will provide hard-won insights into strategies for ensuring high-quality multi-site evaluations that also respect the interests and constraints of the stakeholders involved.
Session Title: The Case for Brief(er) Measures
Panel Session 666 to be held in Lone Star E on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Lee Sechrest, University of Arizona, sechrest@email.arizona.edu
Abstract: It is often assumed that longer measures will be better than shorter measures; that may not always be the case. Determining how measures might be shortened without important cost to reliability or validity would be of great potential value to program evaluators and other researchers. Some instances of the utility and even superiority of single-item measures have been identified, and principles underlying them have been described. Very brief scales have been developed by application of methods of intensive data analysis, often resulting in better predictions of criteria than is possible with the full scales. Moreover, similar methods can be effective in reducing even very large omnibus measures and sets of measures to a small subset of items or scales that can, by regression methods, effectively reproduce the information in the total set. Description of these approaches and methods and illustration of their applications will be the focus of this panel.
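As a hypothetical illustration of the regression-based item reduction the abstract describes (not drawn from the panel itself), the following minimal Python sketch checks how well an arbitrarily chosen four-item "short form" reproduces a full-scale score; it assumes simulated data from a single latent trait and the numpy and scikit-learn libraries, whereas a real application would use observed item responses and a principled item-selection procedure.

import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated item responses: 500 respondents, 40 items loading on one latent trait.
rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))
items = trait + rng.normal(scale=1.0, size=(500, 40))
full_scale_score = items.sum(axis=1)

# Hypothetical four-item short form; regression checks how much of the
# full-scale information the subset reproduces (R-squared).
short_form = [0, 7, 19, 33]
model = LinearRegression().fit(items[:, short_form], full_scale_score)
r_squared = model.score(items[:, short_form], full_scale_score)
print(f"Short form reproduces the full-scale score with R^2 = {r_squared:.2f}")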
Session Title: Evaluation 201: Evaluation Skills Needed After Coursework
Skill-Building Workshop 667 to be held in Lone Star F on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Presenter(s):
Martha Ann Carey, Maverick Solutions, marthaann123@sbcglobal.net
Molly Engle, Oregon State University, molly.engle@oregonstate.edu
Abstract: Research methods and subject matter expertise are generally thought of as the skills needed to begin doing evaluations. In addition, however, there are essential skills not found in textbooks or college courses. Drawing from their experiences in many roles (evaluator, evaluation team member, and program developer) in settings including academia, government, and nonprofit organizations, the workshop presenters will provide an overview of tools for planning evaluations beyond logic models. These tools, which include monitoring, conflict management, team building, and communication, among others, are especially important in working with cluster and multisite programs, which involve complexity, as well as opportunities, beyond single-site evaluations; especially relevant topics are teamwork and multiple perspectives. The audience for this workshop includes evaluators new to the field and their supervisors. Exercises will include working with examples suggested by workshop members, as well as examples of funded multisite programs.
Roundtable Rotation I: Examining Collaboration in an Evaluation of a Large Scale Civic Education Program
Roundtable Presentation 668 to be held in MISSION A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Connie Walker-Egea, University of South Florida, cwalkerpr@yahoo.com
Michael Berson, University of South Florida, berson@coedu.usf.edu
Abstract: Collaboration is the ability to actively work with others in a mutually beneficial relationship in order to achieve a shared vision not likely to occur otherwise. The level of collaboration varies for each evaluation, depending on the situation within the evaluation. The collaborative relationship between the evaluators and stakeholders was a key component in achieving the goals and objectives of an evaluation of a civic education program. The group of collaboration members was the core decision-making body for the evaluation and was deeply involved in the collaborative effort. The supportive evaluation environment facilitated the collaboration and actively engaged the key stakeholders during the evaluation process. These key stakeholders had a high level of collaboration, assuming responsibility for the entire program and developing appreciation of all aspects of their work. This roundtable will examine the contribution and the role of the collaboration members throughout this evaluation process.
Roundtable Rotation II: Using Mixed Methods to Evaluate a School Based Civic Engagement Initiative
Roundtable Presentation 668 to be held in MISSION A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Michael Berson, University of South Florida, berson@usf.edu
Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu
Aarti P Bellara, University of South Florida, abellara@mail.usf.edu
Abstract: Accountability, rigorous evidence, and causality are common terms used to describe federally funded educational program evaluations, which often imply the use of experimental methods. Given the United States Department of Education’s 2003 priority on rigorous scientific methods (ED, 2003), evaluators have engaged in scholarly discourse on the strengths and weaknesses of this policy (American Evaluation Association [AEA], 2003; Bickman et al., 2003; Chatterji, 2004, 2009; Cooksy, Mark, & Trochim, 2009; Donaldson & Christie, 2005; Julnes & Rog, 2007; Mark, 2003; Scriven, 2003, 2009). Applied evaluation is a practical tool that takes on multiple forms based upon the context and nature of the specific program, and often these programs require multiple methods that complement each other (Rallis & Rossman, 2003) and provide cross-checks on evaluation findings. The purpose of this paper is to describe the successful use of a mixed-methods design to evaluate a federally funded, school-based civic initiative.
Session Title: Evaluating Community Capacity Building as a Prevention Strategy
Multipaper Session 669 to be held in MISSION B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Special Needs Populations TIG
Chair(s):
Laura Leviton, Robert Wood Johnson Foundation, llevito@rwjf.org
Session Title: Evaluation Opportunities Within a National Science Foundation (NSF) Program
Multipaper Session 670 to be held in BOWIE A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Arlen Gullickson, Western Michigan University, arlen.gullickson@wmich.edu
Discussant(s):
Peter Saflund, The Saflund Institute, psaflund@earthlink.net
Session Title: Practicing Culturally Responsive Evaluation: Graduate Education Diversity Internship (GEDI) Program Intern Reflections on the Role of Competence, Context, and Cultural Perceptions - Part II
Multipaper Session 671 to be held in BOWIE B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Rita O'Sullivan, University of North Carolina at Chapel Hill, ritao@email.unc.edu
Discussant(s):
Michelle Jay, University of South Carolina, mjay@sc.edu
Session Title: Evaluations of Court and Corrections Programs
Multipaper Session 672 to be held in BOWIE C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Crime and Justice TIG
Chair(s):
Roger Przybylski, RKC Group, rogerkp@comcast.net
Roundtable Rotation I: My First Year as an Internal Evaluator: What I Didn't Know That I Didn't Know
Roundtable Presentation 673 to be held in GOLIAD on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Pamela Bishop, University of Tennessee, Knoxville, pbaird@utk.edu
Abstract: In my few short years working as a professional evaluator, I had always held positions in which I was external to the organization being evaluated. Although the process of program evaluation has never been mundane, external evaluation carried with it the expectation of a certain sequence of events: begin the evaluation process, conduct the evaluation, and close the evaluation. When I accepted my first internal evaluation position in February 2009, I quickly learned I would need to redefine not only my ideas of how the evaluation process works, but also my ideas of what an evaluator actually does. This roundtable is a forum for discussing the learning journey for new evaluators, graduate students, internal evaluators, or those considering becoming internal evaluators, about what it means (and does not mean) to be an internal evaluator.
Roundtable Rotation II: Evaluator/Practitioner Collaborations
Roundtable Presentation 673 to be held in GOLIAD on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Presenter(s):
Angela Moore, National Institute of Justice, angela.moore.parmley@usdoj.gov
Winnie Reed, National Institute of Justice, winnie.reed@usdoj.gov
Carolyn Block, Illinois Criminal Justice Information Authority, crblock@rcn.com
Deshonna Collier-Goubil, National Institute of Justice, deshonnac@hotmail.com
Abstract: Evaluators are often called on to collaborate with practitioners; however, many young scholars lack the practical experience that would inform them about the information that is most needed in the field. Evaluation research requires data, and the gatekeepers to data access and data understanding are often practitioners, including caretakers of large, archived datasets and direct service providers who collect and maintain client data. Successful collaboration depends on a set of skills not taught in most PhD programs. This roundtable will focus on advice for new evaluators on the benefits of collaboration, alternative roads into research collaborations with practitioners, the skills necessary to create and maintain successful collaborations, barriers to collaboration and how to overcome them, conflicts between differing agendas when working with practitioners, pitfalls and how to avoid or deal with them, collaborative proposals for funding, designing research that protects confidentiality, collaboration in disseminating the results of the evaluation, and the ways in which collaborations evolve over time. Concrete examples from the field will be discussed at the roundtable.
Roundtable Rotation I: Accreditation as a Pathway to Build Community and Generate Renewal
Roundtable Presentation 674 to be held in SAN JACINTO on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Lorna Escoffery, Escoffery Consulting Collaborative Inc, lorna@escofferyconsulting.com
Abstract: Accreditation processes aim to promote institutional self-evaluation and improvement. However, they are time consuming and can seem intimidating to participants because they require a comprehensive self-study, generating and sharing sensitive data, and collaborating across hierarchical levels and organizational structures. These processes can nevertheless be productive and effective if senior leadership provides adequate resources, fosters a process involving the whole community, and encourages information sharing. The University of Miami School of Medicine engaged in such a process for its 2009 re-accreditation visit by the LCME, and the results validate that an accreditation process can be a valuable tool to evaluate the quality of a medical education program and the organization(s) supporting it. The accreditation process prompted important changes as the medical school community adopted organizational and educational objectives, became more knowledgeable about the school, and embraced the process as well as the changes it generated.
Roundtable Rotation II: Using Evaluation to Help Transform Departments in the Challenging Economic Environment of Higher Education
Roundtable Presentation 674 to be held in SAN JACINTO on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Presenter(s):
Sabra Lee, Lesley University, slee@lesley.edu
Ellen Iverson, Carleton College, eiverson@carleton.edu
Abstract: This presentation provides an overview of an evaluation of a federally sponsored program that employs a systems approach to helping higher education geosciences departments adapt to, prosper in, and become stronger in a changing and challenging economic environment. The program includes national workshops, traveling workshops, and a collection of website resources. The program evaluation takes a participant-oriented systems approach, employing mixed methods to provide formative input and embedded assessment of the program. Case studies exemplify the range of effects and impacts of the program’s strategies. Through the use of a case study, the session provides examples of methods and instruments used to evaluate the program (including website forms, departmental applications, action plans, surveys, and interview protocols). Participants interested in evaluation of professional development that looks at the impact of both face-to-face and website resources, as well as those interested in discussing participant-oriented systems approaches, may find this presentation valuable.
Session Title: How to Cast Your Net: Network Analysis Techniques in Public Health Evaluation
Demonstration Session 675 to be held in TRAVIS A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the
Presenter(s):
Lana Wald, Washington University in St Louis, lwald@gwbmail.wustl.edu
Bobbi Carothers, Washington University in St Louis, bcarothers@wustl.edu
Douglas Luke, Washington University in St Louis, dluke@wustl.edu
Jenine Harris, Saint Louis University, harrisjk@slu.edu
Abstract: Social network analysis can serve as a valuable evaluation approach for understanding and quantifying relationships between individuals and organizations. Network analysis techniques provide evaluators with tools to visualize relationships and the flow of information. These techniques contribute to a more in-depth understanding of the context in which a program operates and can be a useful component of a comprehensive evaluation. Network analysis entails three steps: 1) defining who is in the network, 2) measuring network participants, and 3) showing network relationships in a visual form. We will demonstrate these steps along with how this approach has been utilized in two multi-site initiatives. Lessons learned from these evaluations will also be presented. At the end of the session, participants will be able to describe the steps for network analysis and identify how it can be applied in their work.
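As a hypothetical illustration of the three steps the abstract lists (not drawn from the session materials), the following minimal Python sketch assumes the networkx and matplotlib libraries; the partner-organization names and the output file name are illustrative assumptions only.

import networkx as nx
import matplotlib.pyplot as plt

# Step 1: define who is in the network (nodes) and their relationships (edges).
G = nx.Graph()
G.add_edges_from([
    ("Health Dept", "Coalition"), ("Coalition", "School District"),
    ("Coalition", "Hospital"), ("Hospital", "Clinic"),
    ("School District", "Clinic"),
])

# Step 2: measure network participants, e.g., each organization's degree centrality.
centrality = nx.degree_centrality(G)
for org, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{org}: {score:.2f}")

# Step 3: show the network relationships in a visual form.
nx.draw_networkx(G, node_color="lightblue")
plt.axis("off")
plt.savefig("partner_network.png")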
Session Title: Methodological Considerations: Choosing an Appropriate Cost Analysis Methodology for the Evaluation
Multipaper Session 676 to be held in TRAVIS B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Costs, Effectiveness, Benefits, and Economics TIG
Chair(s):
Nadini Persaud, University of the West Indies, npersaud07@yahoo.com
Session Title: From Compliance to Reliance: Critical Moments in Integrating Evaluation Into an Organization’s Work
Panel Session 677 to be held in TRAVIS C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG and the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Jennifer Iriti, University of Pittsburgh, iriti@pitt.edu
Abstract: What are the critical moments in an organization’s journey from “evaluation for compliance” to relying on evaluation for core program design and decision-making? This panel will introduce participants to a success case in which evaluators worked with an education non-profit over six years to integrate evaluation and evaluative thinking into the work of the organization. First, context is provided about the case and how the organization’s orientation toward evaluation has changed. Then the following issues are examined from the perspectives of evaluator, organizational leader, program implementer, and funder: a) value-added of the evaluation work; b) critical moments in the work; c) generalizable lessons about evaluation integration and use; and d) challenges to the organization’s shift toward reliance on evaluation. The goal of the session is to draw on a real-world longitudinal example to bring to life the actions, conditions, and tools that supported use and ownership of evaluation.
Session Title: Working Together and Getting the Message Heard in Teen Pregnancy Prevention Programs
Multipaper Session 678 to be held in TRAVIS D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Robert LaChausse, California State University, San Bernardino, rlachaus@csusb.edu
Session Title: Beyond Fidelity: Evaluating the Implementation of Evidence-based Practices
Think Tank Session 679 to be held in INDEPENDENCE on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Human Services Evaluation TIG
Presenter(s):
Miles McNall, Michigan State University, mcnall@msu.edu
Abstract: The focus of this think tank will be on re-conceptualizing the current approach to evaluating the implementation of “evidence-based practices” (EBPs). In most cases, implementation evaluations focus on the extent to which EBPs are implemented with fidelity to their original models. However, because EBPs are implemented in a wide variety of organizational, political and cultural contexts, evaluations of EBPs that maintain an exclusive focus on fidelity miss important contextual factors that may impact both the fidelity and effectiveness of interventions. As such, there is a need to develop broader evaluation frameworks that capture the factors that influence the success or failure of implementation. Preliminary frameworks derived from the implementation literature will be presented to generate a discussion that leads to the development of a new framework for implementation evaluation.
Session Title: Basic Change: Examining a Simple Design From Multiple Approaches
Demonstration Session 680 to be held in PRESIDIO A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Julius Najab, George Mason University, jnajab@gmu.edu
Caroline Wiley, University of Arizona, crhummel@email.arizona.edu
Simone Erchov, George Mason University, sfranz1@gmu.edu
Abstract: In this demonstration, we will cover various approaches to a simple yet common evaluation design (pre-post test) and review the conceptual aspects and results of each method. A majority of evaluators and data analysts can quickly and accurately interpret basic analyses (e.g., means, standard deviations, and t-tests), but more advanced techniques are often difficult to interpret or inaccessible to those unfamiliar with them. The variety of disciplines that form evaluation will often expose evaluators to analyses not commonly used in their field of training. We will analyze pre-post data using multiple methods and inform evaluators about the conceptual aspects, similarities and inconsistencies, and the conclusions drawn from each method. The purpose of this demonstration is to inform evaluators unfamiliar with advanced quantitative techniques about different perspectives and approaches to examining data from a commonly used evaluation design.
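As a hypothetical illustration of the most basic of the analyses the abstract mentions (not drawn from the demonstration itself), the following minimal Python sketch runs a paired t-test on simulated pre-post scores; it assumes the numpy and scipy libraries, and the sample size and score distributions are illustrative assumptions only.

import numpy as np
from scipy import stats

# Simulated pre- and post-test scores for 30 participants (illustrative values only).
rng = np.random.default_rng(42)
pre = rng.normal(loc=50, scale=10, size=30)
post = pre + rng.normal(loc=4, scale=6, size=30)

# Basic analyses: mean gain, its standard deviation, and a paired t-test.
gain = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"Mean gain: {gain.mean():.1f} (SD {gain.std(ddof=1):.1f})")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")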
Session Title: Using the Comprehensive Organizational Assessment Tool to Diagnose and Evaluate Organizational Capacity
Demonstration Session 681 to be held in PRESIDIO B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Presenter(s):
Ashley Kasprzak, JVA Consulting LLC, ashley@jvaconsulting.com
Randi Nelson, JVA Consulting LLC, randi@jvaconsulting.com
Abstract: The Comprehensive Organizational Assessment (COA) is a research-based tool developed by JVA Consulting to measure gains in nonprofit organizational capacity and guide capacity building training and technical assistance. Demonstration participants will learn about the COA’s 17 critical nonprofit capacity indicators and its use as a pre-post outcome measurement tool. The facilitator will compare characteristics and applications of the COA to 20 other organizational assessment tools currently used in the U.S. The COA’s background, including its use with 14 capacity building initiatives, will be presented. The facilitator will describe data collection methods used with the COA and demonstrate the interview process used with nonprofit leadership. Audience members will view a pre-populated COA and be invited to role-play the interview process. In conclusion, the facilitator will demonstrate consulting with nonprofit leadership to select action steps for improving organizational capacity. Each participant will receive a hard-copy sample of a completed COA report.
Session Title: National Evaluation of the Addiction Technology Transfer Center Network (ATTC): Findings and Observations From a Contextually Rich, Mixed Method Evaluation Study
Panel Session 682 to be held in PRESIDIO C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Roy Gabriel, RMC Research Corporation, rgabriel@rmccorp.com
Abstract: The Substance Abuse and Mental Health Services Administration contracted with MANILA Consulting Group, Abt Associates, and RMC Research to conduct the first independent national evaluation of SAMHSA/Center for Substance Abuse Treatment’s Addiction Technology Transfer Center (ATTC) program since it was funded in 1993. The ATTC program supports the workforce that provides addictions treatment services to 23 million Americans each year through training, consultation, and product development. The three-year evaluation was preceded by a two-year evaluation design contract to develop an appropriate evaluation approach for this long-standing, contextually rich program. The final design comprised three studies aiming to identify and build upon the successes of technology transfer and disseminate effective strategies. This session will include (1) an overview of the ATTC program and the importance of the evaluation; (2) presentations on each of the three studies; and (3) a discussion of key findings, decisions, and challenges related to this evaluation effort.
Roundtable Rotation I: Evaluating K-12 Professional Development: Implementation of the Sheltered Instruction Observation Protocol (SIOP) Model
Roundtable Presentation 683 to be held in BONHAM A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Michelle Bakerson, Indiana University South Bend, mmbakerson@yahoo.com
Abstract: Evaluators are often contracted by school districts or organizations receiving grants to develop and facilitate programs to benefit the school or organization. One such school district, in northern Indiana, provided its K-12 teachers with professional development in the Sheltered Instruction Observation Protocol (SIOP) model over a five-year period. The evaluation was conducted to determine the extent to which teachers implemented strategies of the SIOP model in their classrooms after participating in the professional development, and to what degree that implementation was occurring. The evaluation was designed to be a learning tool for facilitating the improvement of the professional development provided at this school. Accordingly, a collaborative evaluation approach was utilized to actively engage the school and the teachers during the whole process. A cross-sectional survey design was also selected for this evaluation. The steps, advantages, and obstacles of this evaluation will be shared.
Roundtable Rotation II: School Climate: A Comprehensive Data Collection and School Improvement System
Roundtable Presentation 683 to be held in BONHAM A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Presenter(s):
Barbara Dietsch, WestEd, bdietsc@wested.org
Sandy Sobolew-Shubin, WestEd, ssobole@wested.org
Rebeca Cerna, WestEd, rcerna@wested.org
Greg Austin, WestEd, gaustin@wested.org
Abstract: During the roundtable, participants will engage in a discussion about the importance of using multiple sources to assist program providers in making data-driven decisions in their schools and communities. Presenters will discuss an innovative data collection system that provides a means to obtain staff perceptions about learning and teaching conditions in order to regularly inform decisions about professional development, instruction, the implementation of learning supports, and school reform. Two components make up the system: a student survey (California Healthy Kids Survey) and the web-based California School Climate Survey. The system was developed to provide data that link instruction with the assessment of non-cognitive barriers to learning, such as substance abuse, violence and victimization, and poor mental health among students. It addresses issues such as equity, bias, and cultural competence, which have been linked to the achievement gap plaguing racial/ethnic minorities, and it can be customized to include questions of local concern.
Session Title: Making Sense of the Relationships Between Nonprofits, Funders, and Evaluation
Multipaper Session 684 to be held in BONHAM B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Salvatore Alaimo, Grand Valley State University, alaimos@gvsu.edu
Session Title: Current Issues in Evaluating High School Programs
Multipaper Session 685 to be held in BONHAM C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Manolya Tanyu, Learning Point Associates, manolya.tanyu@learningpt.org
Discussant(s):
Eric Barela, Partners in School Innovation, ebarela@partnersinschools.org
Session Title: Improving Evaluation Quality: A Focus on Better Measures of Implementation
Multipaper Session 686 to be held in BONHAM D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Helene Jennings, ICF Macro, helene.p.jennings@macrointernational.com
Session Title: The Environmental Evaluators Network: Quality in an Era of Results-based Performance
Panel Session 687 to be held in BONHAM E on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Environmental Program Evaluation TIG
Chair(s):
Matt Keene, United States Environmental Protection Agency, keene.matt@epa.gov
Discussant(s):
Kathryn Newcomer, George Washington University, newcomer@gwu.edu
Abstract: In June 2010 the 5th annual Environmental Evaluators Networking Forum will host 250 domestic and international representatives from government agencies, foundations, consulting firms, non-profits, academia, and international institutions to discuss the effects of an era of results-based performance on the quality of environmental program and policy evaluation. In the format of a panel discussion, key staff from sponsor organizations of the Forum will share and discuss: 1) the current state and future of the quality of environmental evaluation in terms of opportunities, challenges, skills, tools, and methods; and 2) the relationship between the Environmental Evaluators Network and the quality of evaluation at their respective organizations. The purpose of this session is to provide the audience with a better understanding of current trends in environmental evaluation and describe how the Environmental Evaluators Network has evolved to address issues of evaluation quality.
Session Title: Your Input, Please: Research, Technology and Development (RTD) Topical Interest Group (TIG) Draft User’s Guide to Conducting Research and Development Evaluation
Think Tank Session 689 to be held in Texas D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Presenter(s):
Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov
Discussant(s):
Brian Zuckerman, Science and Technology Policy Institute, bzuckerm@ida.org
Cheryl Oros, Oros Consulting LLC, cheryl.oros@gmail.com
Juan Rogers, School of Public Policy, Georgia Institute of Technology, jdrogers@gatech.edu
George Teather, George Teather and Associates, gteather@sympatico.ca
Abstract: Evaluation of research, technology, and development (RT&D) is a maturing discipline. Although in recent years resource guides for R&D evaluation practitioners have been created and disseminated (e.g., Ruegg and Feller 2003; Ruegg and Jordan 2007), there is no agreement among R&D evaluators, and especially those working in government, as to when evaluations should be conducted, what types of evaluation to pursue, and how they should be approached. While U.S. government policy calls upon R&D agencies to conduct evaluation and to strengthen their capacity to do so, no guidance has been provided for policy implementation. During 2010, the RTD TIG, as a community of practice, has responded to this call by developing a user-focused White Paper regarding evaluation practices as applied to government-funded R&D. At this Think Tank, the TIG leadership will present the current draft of the White Paper for discussion.
Session Title: Strategy as Evaluand: Quality and Utilization Issues in Evaluating Strategy
Panel Session 690 to be held in Texas E on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG, the Organizational Learning and Evaluation Capacity Building TIG, and the Non-profit and Foundations Evaluation TIG
Chair(s):
Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net
Abstract: Strategy as a new unit of analysis for evaluation involves a different purpose and use: strategic use. Traditionally, evaluation has focused on projects and programs. Organizational development makes the organization the unit of analysis for assessing effectiveness, usually focused on mission fulfillment. Management, in contrast, often focuses on strategy as the defining determinant of effectiveness. Herbert Simon, one of the preeminent management and organizational theorists, posited that the series of decisions which determines behavior over some stretch of time may be called a strategy. Distinguished management scholar Henry Mintzberg in his recent book Tracking Strategies defines strategy as "pattern consistency in behavior over time" (2007: 1). Philanthropy, government, and non-profits, greatly influenced by business management trends, are paying a great deal of attention to strategic issues. This session will examine issues of quality and utilization when focusing on strategy as an evaluand, or unit of analysis and impact.
Session Title: Research on Data Collection Approaches
Multipaper Session 691 to be held in Texas F on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
Leslie Fierro, SciMetrika, let6@cdc.gov
Session Title: Building Capacity for the 4-H Science, Engineering, and Technology Initiative to Get to Outcomes
Multipaper Session 692 to be held in CROCKETT A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Extension Education Evaluation TIG
Chair(s):
Suzanne Le Menestrel, United States Department of Agriculture, slemenestrel@nifa.usda.gov
Discussant(s):
Martin Smith, University of California, Davis, mhsmith@ucdavis.edu
Session Title: Teaching Through an Interdisciplinary Focus
Multipaper Session 693 to be held in CROCKETT B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Linda Schrader, Florida State University, lschrader@fsu.edu
Discussant(s):
Jean A King, University of Minnesota, kingx004@umn.edu
Session Title: Data Collection Instruments for Quality Evaluation
Skill-Building Workshop 694 to be held in CROCKETT C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Presenter(s):
Chung Lai, International Relief & Development, clai@ird-dc.org
Mamadou Sidibe, International Relief & Development, msidibe@ird-dc.org
Abstract: This skill-building session will focus on the association between field work and data quality criteria, using a data assessment model and examples of data collection instruments as solutions to improve data quality for evaluations. Obtaining good-quality data to measure performance results often presents challenges to project implementers. Political, economic, social, cultural, and environmental issues and circumstances are present in all situations and affect project implementation. Moreover, these circumstances also affect the data collection process and hence data quality. To help minimize their influence, data collection instruments can be designed to improve data quality by observing data quality criteria: validity, integrity, precision, reliability, and timeliness. These criteria will be elaborated in a data assessment model, as will some of the data collection instruments, such as monitoring procedures, questionnaires, data entry guides, and a data definition codebook.
Session Title: Collaboration in Government-Sponsored Evaluations
Multipaper Session 695 to be held in CROCKETT D on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG and the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Maria Whitsett, Moak, Casey and Associates, mwhitsett@moakcasey.com
Session Title: Theoretical Issues in Feminist Evaluation
Multipaper Session 696 to be held in SEGUIN B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Feminist Issues in Evaluation TIG
Chair(s):
Kathryn A Bowen, Centerstone Research Institute, kathryn.bowen@centerstone.org
Discussant(s):
Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com
Session Title: Export and Translation: Evaluating the Sharing of People, Programs, and Instruments Across Borders
Multipaper Session 697 to be held in REPUBLIC A on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the International and Cross-cultural Evaluation TIG
Chair(s):
Mary Crave, University of Wisconsin, crave@conted.uwex.edu
Session Title: Improving Evaluations of Nutrition, Physical Activity, and Obesity Programs Through Schools, Providers, and Statewide Efforts
Multipaper Session 698 to be held in REPUBLIC B on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Jenica Huddleston, University of California, Berkeley, jenhud@berkeley.edu
Session Title: Technical Assistance in Action: How Does the Practice Look?
Panel Session 699 to be held in REPUBLIC C on Friday, Nov 12, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Pamela Imm, University of South Carolina, pamimm@windstream.net
Abstract: Current research about the types and levels of intensity of technical assistance necessary to influence program and community outcomes is limited. In fact, technical assistance efforts remain mostly intuitive rather than data driven (Florin, Mitchell, Stevenson, & Klein, 2000). Interestingly, the investment in technical assistance continues to grow, with many federal agencies contracting with technical assistance providers to work closely with their grantees to promote high-level planning, implementation, and outcomes. This session will provide an opportunity for technical assistance providers to discuss their work and to offer ideas for how to conceptualize and measure technical assistance in a variety of settings. This will include ideas for qualitative and quantitative measurement.