| Session Title: Studying Systems Through Social Network Analysis: Empowering Institutional Change |
| Multipaper Session 917 to be held in Centennial Section A on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Systems in Evaluation TIG and the Organizational Learning and Evaluation Capacity Building TIG |
| Chair(s): |
| Steve Fifield, University of Delaware, fifield@udel.edu |
| Abstract: This session examines the relationship between social network analysis (SNA) and systems concepts in the evaluation of initiatives to improve deeply rooted institutional practices. Key systems concepts, including perspectives, boundaries, and entangled systems, shape our use of, and reflections on, SNA as a tool to understand and inform organizational change. We ground our reflections on SNA and systems concepts in our experiences studying change initiatives in school leadership, secondary mathematics teaching, and multidisciplinary, multi-institutional scientific research. In these cases we use systems concepts to explore variations on shared themes: institutional practices in tension, evaluation design issues that led us to SNA, and shifts in our roles as evaluators and as agents in communities undergoing change. By reflecting on our experiences and the results of these projects, we hope to contribute to critical conversations about, and useful integration of, SNA and systems concepts. |
| Social Network Analysis and Systems Concepts in Studies of Interdisciplinary S&T Research |
| Steve Fifield, University of Delaware, fifield@udel.edu |
| Social network analysis (SNA) is a recent addition to my studies of science and technology (S&T) research initiatives. This leads me to consider how systems concepts can inform SNA, and how SNA can put systems concepts into practice. Here I draw on ongoing evaluations of statewide interdisciplinary S&T initiatives and research on interdisciplinary S&T research centers. These are examples of changes in academic S&T research toward managed, collaborative, and interdisciplinary projects and organizations. In initiatives to catalyze the growth of research networks, SNA can multiply perspectives on processes and outcomes by representing different kinds of networks in formation. SNA can contribute to dynamic, relational understandings of changes in S&T research in combination with ethnographic methods that examine the meanings and performances of social ties in research networks. I describe this approach in a study of group interaction customs across entangled disciplinary and institutional boundaries in two S&T research centers. |
| Using Social Network Analysis to Study Environments That Support Teacher Leadership Development |
| Ximena Uribe Zarain, University of Delaware, ximena@udel.edu |
| This study describes the use of social network analysis to evaluate the leadership configuration of mathematics teachers. The goal of this method is to use teacher collaboration data to map and explain how leadership develops and what kind of environment a teacher needs to succeed as a leader. This is a shift in instructional leadership evaluation from a focus on the qualities of isolated individuals to a more contextual and systemic understanding of interrelationships and how people understand their environments. Network maps are tools to describe and reflect on organizational structure and dynamics, including the overlaps and entanglements of personal networks. To better understand the nature of relationships related to leadership among mathematics teachers, this study combined in-depth interviews with network analysis. Network maps were taken back to key players in the organization to see how they interpreted the network patterns. |
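| A minimal sketch of the kind of network mapping this abstract describes is given below. It is not the authors' analysis: the teacher names, advice-seeking ties, and choice of centrality measures are purely hypothetical, and the sketch assumes the Python networkx library. |

```python
# Illustrative sketch: build a teacher collaboration network and flag
# potential leaders with simple centrality measures (hypothetical data).
import networkx as nx

# Hypothetical directed ties: "teacher A goes to teacher B for advice."
ties = [
    ("Alvarez", "Chen"), ("Brooks", "Chen"), ("Chen", "Dube"),
    ("Dube", "Chen"), ("Evans", "Brooks"), ("Alvarez", "Dube"),
]

G = nx.DiGraph()
G.add_edges_from(ties)

# In-degree centrality: how often a teacher is sought out for advice.
in_degree = nx.in_degree_centrality(G)
# Betweenness centrality: how often a teacher bridges otherwise
# unconnected colleagues, one rough proxy for brokerage/leadership.
betweenness = nx.betweenness_centrality(G)

for teacher in G.nodes:
    print(f"{teacher:8s} sought_out={in_degree[teacher]:.2f} "
          f"broker={betweenness[teacher]:.2f}")
```
| In a study like the one described, such maps would be built from actual collaboration data and then taken back to participants for interpretation alongside the interview findings. |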
| Session Title: Practical Applications for Using Propensity Scores |
| Demonstration Session 918 to be held in Centennial Section B on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Presenter(s): |
| MH Clark, Southern Illinois University at Carbondale, mhclark@siu.edu |
| Abstract: Quasi-experiments are excellent alternatives to true experiments when random assignment is not feasible. Unfortunately, causal conclusions cannot easily be made from results that are potentially biased. Some advances in statistics that attempt to reduce selection bias in quasi-experiments use propensity scores, the predicted probability that units will be in a particular treatment group. Because propensity score research is still relatively new, many applied social researchers are not familiar with the methods, applications and conditions under which propensity scores should be used. Therefore, the proposed demonstration will present an introduction to computing and applying propensity scores using SPSS. The demonstration will include: 1. a basic method for computing propensity scores; 2. how propensity scores can be used to make statistical adjustments using matching, stratifying, weighting and covariate adjustment; 3. a discussion of known limitations and problems when using propensity score adjustments; and 4. how to improve propensity score computations. |
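| A rough sense of the workflow described above can be conveyed in code. The fragment below is an illustrative approximation in Python rather than the SPSS procedures the demonstration will use; the covariates, sample size, and effect sizes are simulated and invented. |

```python
# Illustrative propensity-score workflow on simulated data: estimate the
# predicted probability of treatment, then adjust by stratification and
# inverse-probability weighting (hypothetical, not the presenter's materials).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "prior_score": rng.normal(50, 15, n),
})
# Non-random selection into treatment that depends on the covariates.
logit = -2 + 0.03 * df["age"] + 0.02 * df["prior_score"]
df["treated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df["outcome"] = 5 * df["treated"] + 0.5 * df["prior_score"] + rng.normal(0, 5, n)

# 1. Propensity score: predicted probability of being in the treatment group.
ps = LogisticRegression().fit(df[["age", "prior_score"]], df["treated"])
df["pscore"] = ps.predict_proba(df[["age", "prior_score"]])[:, 1]

# 2a. Stratification: compare groups within propensity-score quintiles.
df["stratum"] = pd.qcut(df["pscore"], 5, labels=False)
strata_effects = []
for _, s in df.groupby("stratum"):
    strata_effects.append(s.loc[s["treated"] == 1, "outcome"].mean()
                          - s.loc[s["treated"] == 0, "outcome"].mean())
print("Stratified estimate:", np.nanmean(strata_effects))

# 2b. Inverse-probability-of-treatment weighting (IPTW).
t = (df["treated"] == 1).to_numpy()
w = np.where(t, 1 / df["pscore"], 1 / (1 - df["pscore"]))
iptw = (np.average(df.loc[t, "outcome"], weights=w[t])
        - np.average(df.loc[~t, "outcome"], weights=w[~t]))
print("IPTW estimate:", iptw)
```
| Matching and covariate adjustment begin from the same first step; the demonstration itself covers all four adjustment strategies and their known limitations. |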
| Session Title: Successfully Managing Evaluation Management: Approaches to Common Challenges From Three Different Evaluation Perspectives |
| Panel Session 919 to be held in Centennial Section C on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Presidential Strand and the Evaluation Managers and Supervisors TIG |
| Chair(s): |
| Laura Feldman, University of Wyoming, lfeldman@uwyo.edu |
| Abstract: This three-person panel focuses on four common challenges to successful evaluation management and supervision: 1) how to determine which evaluation projects to pursue and how to balance the factors (e.g., staff, interest, organizational development) that influence these decisions; 2) how to maintain internal and external evaluation quality; 3) how to manage one's time; and 4) when and where to allocate money. The panelists offer insights based on their personal management styles and types of organization (i.e., a state agency, a university-based evaluation group, and an urban school district). Collectively, the panelists have more than 30 years of evaluation experience. Regardless of their roles or organizations, they are constrained by the same resources: money, staff, and time. Presenters discuss how these constraints guide their decision making and activities. Their responses emphasize their desire to make decisions that will maintain the highest standards of evaluation practice and management. |
| |||
| |||
|
| Session Title: Communication and Cognition in Evaluation Utilization |
| Multipaper Session 920 to be held in Centennial Section D on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Evaluation Use TIG |
| Chair(s): |
| William Bickel, University of Pittsburgh, bickel@pitt.edu |
| Session Title: Ethics and Evaluation: Respectful Evaluation With Underserved Communities |
| Panel Session 921 to be held in Centennial Section E on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Multiethnic Issues in Evaluation TIG |
| Chair(s): |
| Helen Simons, University of Southampton, h.simons@soton.ac.uk |
| Discussant(s): |
| Ricardo Millett, Millett & Associates, ricardo@ricardomillett.com |
| Abstract: A commitment to respecting and honoring underserved communities within evaluations has drawn this panel together to present what we have learned about conducting ethical evaluations with peoples who are indigenous, minority, disadvantaged, and/or otherwise marginalized within our societies. Within these lessons, the evaluators' responsibilities are made explicit regarding the engagement that needs to happen with these communities and the critical lens the evaluators need to maintain on the causes of their marginalization. We begin with Critical Race Theory (CRT) and how a transformative agenda can guide ethical evaluation practice. Moves toward community control of evaluation work done in Indian Country then highlight the need for respect for cultural mores and aspirations. Within Māori communities in New Zealand, this respect underpins a relationship ethic that also resonates with the fourth presenters, who speak to the need for underserved communities to be collaborative partners in evaluation and research. |
| ||||
| ||||
| ||||
|
| Session Title: Managing the Tension between Performance Measurement and Evaluation in the Emerging Political Environment |
| Panel Session 922 to be held in Centennial Section F on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Theories of Evaluation TIG |
| Chair(s): |
| George Julnes, Utah State University, george.julnes@usu.edu |
| Discussant(s): |
| Eleanor Chelimsky, Independent Consultant, oandecleveland@aol.com |
| Abstract: Though their respective proponents often view each other with some suspicion, performance measurement and evaluation are being brought together more frequently by government initiatives. This panel will examine how the tension between these two approaches can best be managed in the current changing political environment. The presentations and discussion will examine the challenges to effective management of performance measurement and evaluation and will suggest solutions for moving forward. |
| |||
| |||
|
| Session Title: Science of Science Management: Development of Assessment Methodologies to Determine Research and Development Progress and Impact |
| Think Tank Session 924 to be held in Centennial Section H on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Research, Technology, and Development Evaluation TIG and the Government Evaluation TIG |
| Presenter(s): |
| Deborah Duran, National Institutes of Health, durand@od.nih.gov |
| Abstract: Research and development funders seek to fund effective, efficient, and impactful programs. However, current methodologies are insufficient to assess the practical application of many innovative R&D programs. Under the current approach of setting planned annual milestones, assessments may indicate that milestones were met or unmet, but they fail to address the adaptive learning involved in the scientific discovery process. For example, projects may initially struggle to meet planned goals but later adapt using sound scientific principles and discovery processes. This example highlights an important problem in science management: What patterns, pathways, or profiles can be developed to assess performance and identify intervention points? The emerging field of Science of Science Management strives to develop systematic studies to explore the complexities of science administration, to provide evidence and analytic tools for decision making, and to inform science policy. |
| Session Title: Foundations of Evaluation: Theory, Method, and Practice |
| Skill-Building Workshop 925 to be held in Mineral Hall Section A on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Graduate Student and New Evaluator TIG |
| Presenter(s): |
| Chris LS Coryn, Western Michigan University, chris.coryn@wmich.edu |
| Daniela Schroeter, Western Michigan University, daniela.schroeter@wmich.edu |
| Abstract: This skill-building workshop is designed to provide an overview of the field and discipline of evaluation, including its theory, research, and practice perspectives. It is designed not only to provide an overview of the field and discipline, but also to generate critical thinking about evaluation theory and practice to assist attendees in formulating their own ideas about it. The presenters will provide an introduction to the foundations of evaluation, including basic concepts and definitions, evaluation's rationale and uses, the evaluation field's history and standards, alternative evaluation models and approaches, general evaluation processes and procedures of collecting, analyzing, and synthesizing information, and metaevaluation, the process of evaluating evaluations. The session is divided into mini-lectures, group discussions, and work with case examples. Each mini-lecture will be followed by exercises and discussions. Case studies will be used throughout the workshop to demonstrate core concepts, methods, and approaches. |
| Session Title: Improving and Applying Measurement Techniques to Identify and Account for Differences Across Social Groups |
| Multipaper Session 926 to be held in Mineral Hall Section B on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Quantitative Methods: Theory and Design TIG |
| Chair(s): |
| Mary Kay Falconer, Ounce of Prevention Fund of Florida, mfalconer@ounce.org |
| Abstract: Recognizing the social diversity of populations served by many programs, evaluators should address how measurement can be improved and used to identify and account for differences across social groups. Adopting measurement techniques that account for differences across ethnicities or other social characteristics of program participants provides greater assurances that the measures are valid for the entire participant group. This session will cover at least two relevant approaches for improving and applying measurement techniques. In addition, information on recent efforts by product developers (testing resources) to improve several widely used measurement tools across different social groups will be shared. As a third component, presenters will be asked to list lessons learned and recommendations related to measurement when evaluating programs serving diverse populations. |
| Meeting the Challenge of Social Diversity in Measurement Tools |
| Mary Kay Falconer, Ounce of Prevention Fund of Florida, mfalconer@ounce.org |
| Because ethnic diversity is prominent in the target and participant populations of many programs being evaluated, the validity of widely used measurement tools must be addressed. Based on a survey of product and testing services, recent efforts to improve the validity of selected tools across multiple ethnic groups will be identified. The coverage of ethnic and age groups by these tools will also be indicated. The list of tools included in this survey will be reviewed and approved by the other presenters in the session. The objective of this presentation is to provide one account of the "status or progress" of measurement in meeting the challenges of evaluating programs that serve diverse ethnic populations. |
| Testing a Model of the "Mistreatment and Barriers to Help-Seeking by Elder Women Abused by an Intimate Other" |
| Frederick Newman, Florida International University, newmanf@fiu.edu |
| Richard Beaulauria, Florida International University, |
| Laura R Seff, Florida International University, emis2go@cs.com |
| An instrument was developed based on a qualitative analysis of 21 focus groups of women (N=134, ages 50 to 85) representing Hispanic, White non-Hispanic, and Black (Caribbean: Haitian or Jamaican) participants. Using Atlas.ti, and applying the criterion that a concept be coded similarly by two or more persons in two or more groups to count as a "qualitatively supported factor," we identified three clusters of factors: Internal Barriers (e.g., self-blame, helplessness/powerlessness, secrecy, protecting family), External Barriers (family response, clergy response, justice system response, community response), and Abuser Behaviors (isolation, intimidation, jealousy). We are also seeking a form of convergent validation with a standardized measure, the Conflict Tactics Scale (CTS), and comparing the three race-ethnicities on the CTS across age groups. In addition to testing the model on 450 women (150 in each race-ethnicity), we intend to identify similarities and differences among the major ethnic and age groupings. |
| Using the Rasch Item Partition Model to Improve Theory-Based Evaluation |
| John Gargani, Gargani and Company Inc, john@gcoinc.com |
| I describe the Rasch item partition (RIP) model, a hierarchical generalized linear model that can be used to simultaneously estimate program impacts and gather evidence about the underlying mechanisms presumed to cause impacts. I explain how evaluators can use the RIP model to integrate theory-based evaluations with randomized trials, providing examples from randomized trials in which RIP models shed light on presumed mechanisms of change. |
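| The abstract does not include the model's specification, so the toy example below should be read only as a loose, simplified stand-in: it simulates item responses, partitions items into those that do and do not tap a presumed mechanism, and uses an ordinary logistic regression with item dummies (rather than a full hierarchical model with person effects) to ask whether the treatment effect is concentrated on the targeted items. All names, effect sizes, and data are invented. |

```python
# Toy, simplified illustration of the item-partition idea on simulated data.
# NOT the RIP model's actual specification: person ability enters only through
# the data-generating process, and the fit uses fixed item effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_persons, n_items = 300, 10
targeted = np.array([0] * 5 + [1] * 5)     # partition: items 5-9 tap the presumed mechanism
treat = rng.binomial(1, 0.5, n_persons)    # random assignment, as in a randomized trial
theta = rng.normal(0, 1, n_persons)        # person ability
beta = rng.normal(0, 0.5, n_items)         # item difficulty

rows = []
for p in range(n_persons):
    for i in range(n_items):
        # Treatment raises performance only on items in the targeted partition.
        eta = theta[p] - beta[i] + 0.8 * treat[p] * targeted[i]
        rows.append({"correct": rng.binomial(1, 1 / (1 + np.exp(-eta))),
                     "treat": treat[p], "item": i, "targeted": targeted[i]})
long = pd.DataFrame(rows)

# Item dummies absorb difficulty; the treat:targeted interaction asks whether
# the program's impact is concentrated on items tied to the presumed mechanism.
fit = smf.logit("correct ~ C(item) + treat + treat:targeted", data=long).fit(disp=0)
print(fit.params.filter(like="treat"))
```
| In the RIP framework as described above, these components are instead estimated within a single hierarchical generalized linear model alongside the overall program impact. |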
| Session Title: New and Interesting Evaluations of Early Childhood Services and Interventions |
| Multipaper Session 927 to be held in Mineral Hall Section C on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Human Services Evaluation TIG |
| Chair(s): |
| Tracy Greever-Rice, University of Missouri, greeverricet@missouri.edu |
| Discussant(s): |
| Marty Tombari, Colorado Foundation for Families and Children, mtombari@coloradofoundation.org |
| Session Title: Building Evaluation Capacity for Public Health: Community, CBO, and State Examples |
| Panel Session 928 to be held in Mineral Hall Section D on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Organizational Learning and Evaluation Capacity Building TIG |
| Chair(s): |
| Antonia Spadaro, Centers for Disease Control and Prevention, aqs5@cdc.gov |
| Abstract: In community-based participatory research, partnerships form between academics and external partners, such as community groups or health departments. For the Prevention Research Centers (PRCs), funded by the Centers for Disease Control and Prevention (CDC), these partnerships often steer university researchers toward helping local organizations or state partners with evaluation projects through grants, contracts, or technical assistance, resulting in evaluation capacity building and organizational learning. Knowledge gained from this process provides stakeholders with tools for decision-making, program improvement, and demonstrating outcomes. This session will highlight three PRCs' roles in partnering with stakeholders - the community, community-based organizations (CBOs), and state entities - to build evaluation capacity and promote public health. The panel will describe concepts and experiences in evaluation capacity building, such as the increase in stakeholders' evaluation knowledge, skills, and abilities; the growth of new projects because of these evaluation endeavors; and the development of a new AEA local affiliate. |
| ||||||
| ||||||
|
| Session Title: Evaluating Structural Changes in Residential Children's Homes: Challenges, Strategies and Lessons Learned |
| Panel Session 929 to be held in Mineral Hall Section E on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Chair(s): |
| Toni Freeman, The Duke Endowment, tfreeman@tde.org |
| Abstract: The ENRICH (Environmental Interventions in Children's Homes) evaluation plan included a group randomized design, along with comprehensive process evaluation and contextual assessments, to evaluate a structural intervention designed to promote and support physical activity and healthful nutrition (eating fruits and vegetables) among children and adolescents residing in approximately 30 residential children's homes (RCHs) in North and South Carolina. ENRICH was designed to be specific to the RCH setting; however, we believe that this evaluation approach and framework are applicable to interventions in other organizational settings, including schools, worksites, churches, and other community organizations. The three presentations included in this panel will provide a detailed description of the organizational settings for the intervention, including strategies used to identify key organizational characteristics important to the design of the intervention and evaluation; an overview of the intervention, evaluation framework, and methodology, including evaluation challenges; and, finally, outcome evaluation results and lessons learned. |
| |||
| |||
|
| Session Title: Policy, Practice, and Standards: Educational Evaluation in an Age of Accountability |
| Multipaper Session 930 to be held in Mineral Hall Section F on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu |
| Session Title: Empowerment Evaluations: Insights, Reflections, and Implications |
| Multipaper Session 931 to be held in Mineral Hall Section G on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG |
| Chair(s): |
| Aarti Bellara, University of South Florida, bellara@coedu.usf.edu |
| Session Title: Overviews of Research on Observation Instruments Used in Arts Education Evaluation Studies |
| Panel Session 932 to be held in the Agate Room Section B on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Evaluating the Arts and Culture TIG |
| Chair(s): |
| Suzanne Callahan, Callahan Consulting, callacon@aol.com |
| Abstract: In this session, we will address the research on three observation instruments used in arts-education evaluation: two for determining teacher quality and one for examining the quality of arts-focused professional development. The first instrument is used to observe artist-teacher pairs who integrate the arts into elementary-school core literacy teaching. The psychometric properties of the instrument and how the findings were used in an evaluation will be addressed. The second instrument is used as an in-class observation tool for both formative and summative evaluation purposes. The instrument's development and findings will be presented. The final observation instrument is used to assess the quality of professional development workshops delivered to individuals who work with pre-school children. The alignment of the tool with best practices in adult education, and how the tool was used both to guide observations and to present the results of the observational data, will be discussed. |
| ||||||
| ||||||
|
| Session Title: Nonprofit Evaluation Practice: Special Applications |
| Multipaper Session 933 to be held in the Agate Room Section C on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Chair(s): |
| Carl Hanssen, Hanssen Consulting LLC, carlh@hanssenconsulting.com |
| Session Title: Walking the Tightrope: Developing Valid Tools to Measure Fidelity of Implementation That Meet Stakeholder Needs |
| Multipaper Session 934 to be held in the Granite Room Section A on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Pre-K - 12 Educational Evaluation TIG |
| Chair(s): |
| Tania Jarosewich, Censeo Group LLC, tania@censeogroup.com |
| Abstract: Examination of how well programs implement grant expectations is less common than examination of student outcomes. However, without measuring fidelity of implementation, the connection between grant activities and outcomes is weak. Recognizing that a multiple-site implementation provides unique opportunities for measuring fidelity of implementation, this panel will focus on methods and systems for collecting systematic data on implementation in statewide grant-funded reading and math programs. The authors will describe the tensions inherent in developing valid instruments that meet client expectations and local grantees' needs, can be used by a variety of stakeholders, and may be applied to high-stakes decision-making. Panelists will focus on the methods they used to develop and validate the instruments and to train users in collecting valid data. The papers will also reflect on the convergences and divergences of the processes used in each evaluation. |
| Developing Classroom-Level Measures and School-Level Measures of Implementation Fidelity |
| Catherine Callow-Heusser, EndVision Research and Evaluation, cheusser@endvision.net |
| The Bureau of Indian Education's Reading First program uses school-reported self-assessment data based on the Planning and Evaluation Tool for Effective Schoolwide Reading Programs - Revised (Kame'enui and Simmons, 2003) as a measure of implementation fidelity to contribute to decisions about continued funding. However, as the external evaluators, we felt an independent measure would likely be more aligned with student outcomes. We developed a school-based measure of implementation fidelity that included research-based indicators that aligned with the four pillars of Reading First: instructional programs and strategies, valid and reliable assessments, professional development, and instructional leadership. Additionally, a classroom-level measure of implementation fidelity is aligned with reading programs and research-based reading teaching strategies. Both classroom-level measures and school-level measures of implementation fidelity explain substantial portions of the variability in student outcomes. In this presentation, we will discuss development of the instrumentation and statistical outcomes. |
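| As a very rough sketch of the final claim above, the fragment below regresses simulated classroom-level reading gains on hypothetical school- and classroom-level fidelity scores and reports the share of variability explained. The data, effect sizes, and single-level regression are invented simplifications, not the evaluation's actual data or model. |

```python
# Hypothetical sketch: how much of the variation in student outcomes is
# accounted for by school- and classroom-level implementation-fidelity scores.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_classrooms = 80
school_fidelity = rng.uniform(0, 1, n_classrooms)   # school-level implementation score
class_fidelity = rng.uniform(0, 1, n_classrooms)    # classroom-level implementation score
reading_gain = (5 * school_fidelity + 8 * class_fidelity
                + rng.normal(0, 3, n_classrooms))    # simulated mean reading gains

X = sm.add_constant(np.column_stack([school_fidelity, class_fidelity]))
model = sm.OLS(reading_gain, X).fit()
print("R-squared (share of outcome variability explained):", round(model.rsquared, 2))
print(model.params)  # [intercept, school fidelity, classroom fidelity]
```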
| Measuring Program Implementation Using a Document-Based Program Monitoring System |
| James Salzman, Cleveland State University, j.salzman@csuohio.edu |
| The Reading First - Ohio (RFO) grant indicated that implementation would be measured through a 'rubric reflective of the state's accountability system' (ODE, 2003, p. 157). The RFO Center designed a rubric to use in a document review process to hold districts accountable for attaining and sustaining fidelity. School personnel gathered artifacts and wrote summary statements for each of the grant's indicators to provide evidence of implementation for the document review. Regional consultants, supervised by the Center, reviewed the documentation multiple times each year, as both a formative and summative process. Schools that did not show progress toward full implementation after their second year in the program could lose funding. This presentation will discuss the tightrope that the Center walked in developing a tool that met the requirements of staff members of the Ohio State Department of Education and also showed strong reliability and validity. |
| Measures of Implementation Fidelity for Use by External Evaluators and Program Staff |
| Tania Jarosewich, Censeo Group LLC, tania@censeogroup.com |
| The Oklahoma Reading First grant did not identify how the Oklahoma State Department of Education (SDE) would monitor program implementation or evaluate district and school adherence to grant requirements. Evaluation staff and the State Department of Education team developed a statewide monitoring system that would provide a clear understanding of the strengths and needs of implementation across the state and allow state staff to make the high-stakes decision of which districts would be continued in the grant. The state evaluation team and SDE staff used a school self-assessment, a reading observation form, and site visit protocols to collect information about fidelity of implementation through a site visit at each participating Reading First school. This presentation will describe development, training, and use of the tools, and discuss the challenges inherent in developing a system in which an external evaluation team and internal project staff collect data about fidelity of implementation. |
| Syntheses of Local Data for Global Evaluation of Program Fidelity and Effectiveness |
| Elizabeth Oyer, Evaluation Solutions, eoyer@evalsolutions.com |
| Tania Jarosewich, Censeo Group LLC, tania@censeogroup.com |
| The Illinois Math and Science Partnership state evaluation framework includes five dimensions of program outcomes, including the quality of professional development activities and partnerships as well as changes in teachers' content knowledge, teachers' instructional practices, and students' achievement. The cornerstone of the state-level evaluation design is the cross-site meta-analysis of local evaluation results to assess program effectiveness. The meta-analytic approach is combined with hierarchical linear modeling to analyze local and global outcomes. Measures of program implementation are a key moderating variable in the analyses and are measured at the local level using a combination of logs, journals, classroom observations, and extant data. The quality of the partnerships is evaluated by triangulating a comprehensive interview protocol with artifact analyses and surveys of all key partners, including teachers, local education administrators, higher education faculty, and industry partners. The presentation will discuss issues related to balancing the needs of local evaluations with the need to provide global analyses of the statewide initiative. |
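| One simple way to picture the cross-site synthesis described above is a fixed-effect meta-regression of local effect estimates on an implementation-fidelity moderator. The sketch below uses invented site names, effects, and fidelity scores, and stands in for the richer meta-analysis-plus-HLM design the abstract describes. |

```python
# Hypothetical sketch of the cross-site synthesis idea: treat each local
# evaluation's effect estimate as one data point and ask whether stronger
# implementation fidelity is associated with larger effects (meta-regression).
import numpy as np
import pandas as pd
import statsmodels.api as sm

sites = pd.DataFrame({
    "site":     ["A", "B", "C", "D", "E", "F"],
    "effect":   [0.10, 0.25, 0.05, 0.40, 0.30, 0.15],  # local effect sizes (d)
    "se":       [0.08, 0.10, 0.12, 0.09, 0.07, 0.11],  # their standard errors
    "fidelity": [0.4, 0.6, 0.3, 0.9, 0.8, 0.5],        # implementation score, 0-1
})

# Inverse-variance weights, as in a fixed-effect meta-regression.
weights = 1 / sites["se"] ** 2
X = sm.add_constant(sites["fidelity"])
meta_reg = sm.WLS(sites["effect"], X, weights=weights).fit()
print(meta_reg.summary().tables[1])

# Weighted average effect across sites (the "global" estimate).
pooled = np.average(sites["effect"], weights=weights)
print("Pooled effect:", round(pooled, 3))
```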
| Session Title: Using Indicators to Unite Partners and Players in a Common Evaluation Enterprise: Examples From the Centers for Disease Control and Prevention (CDC) |
| Panel Session 935 to be held in the Granite Room Section B on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Health Evaluation TIG |
| Chair(s): |
| Thomas Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov |
| Abstract: Consensus of key participants regarding a program and its components is optimal, but conceptual consensus should be operationalized as indicators, and those indicators should be matched with appropriate data sources. In federal programs that are implemented by networks of grantees and frontline practitioners, the indicator process is a formidable one because evaluation skills and the availability of data sources vary. The CDC programs on this panel use indicators as a tool for monitoring and illustrating grantee performance. Representatives will discuss their programs, the involvement of their grantees and partners in developing evaluation approaches, and the perceived need for indicators. The process for developing and implementing indicators will be discussed, as will decisions regarding where to impose uniformity or grant autonomy in indicators and data collection. Transferable lessons from CDC's experience will be identified. |
| ||||||||
| ||||||||
|
| Session Title: Methods and Data Integrity in International Evaluation |
| Multipaper Session 936 to be held in the Granite Room Section C on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the International and Cross-cultural Evaluation TIG |
| Chair(s): |
| Michael Bamberger, Independent Consultant, jmichaelbamberger@gmail.com |
| Roundtable Rotation I: Viability of Independent Practice in Times of Evaluation Policy Review |
| Roundtable Presentation 937 to be held in the Quartz Room Section A on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Independent Consulting TIG |
| Presenter(s): |
| Norma Martinez Rubin, Evaluation Focused Consulting, norma@evaluationfocused.com |
| Abstract: Review of evaluation policies is pertinent to independent evaluators' practice and merits the particular attention of independent evaluators who are sole proprietors. These entrepreneurial evaluators are likely to be affected by shifts in evaluation policies that are created in theoretical realms and sought as guidance by funding sources for evaluation projects. This has implications for the availability of and access to evaluation projects and for the types of clients sought by various independent evaluation firms. In this roundtable discussion, we set out to identify the strengths, weaknesses, opportunities, and threats of having a set of evaluation and business policies which, if smartly intertwined, can support the viability of independent practices. What those policies are, and how well or poorly they serve a triple bottom line (people, planet, profits), can inform the direction that independent consultants choose to take in developing their consulting practices. |
| Roundtable Rotation II: Navigating the Murky Waters of an Institutional Review Board (IRB): Guidance for Evaluators |
| Roundtable Presentation 937 to be held in the Quartz Room Section A on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Independent Consulting TIG |
| Presenter(s): |
| Tia Neely, Pacific Research and Evaluation LLC, tia@pacific-research.org |
| Abstract: For evaluators, obtaining institutional review board (IRB) approval can be daunting. The challenge is magnified if the evaluator is not affiliated with a university that has an on-site IRB. If the decision is made to submit to an IRB, navigating the system requires extensive knowledge of the IRB's requirements and procedures. Is the study exempt? Expedited? Full board? Does the evaluator need a consent form? A protocol? Translations? This presentation will be facilitated by an evaluator who has both insider and external experience from her past role as an IRB reviewer and her current position as an evaluator with an independent evaluation firm. General guidelines for when IRB approval is needed will be discussed, along with how to determine what type of IRB submission needs to be completed. Participants will be given sample protocols and consent forms to assist them in any future IRB submissions. |
| Roundtable Rotation I: An Evaluation of the Transformation of Undergraduate Education at Rutgers University |
| Roundtable Presentation 938 to be held in the Quartz Room Section B on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Assessment in Higher Education TIG |
| Presenter(s): |
| Aubrie Swan, Rutgers the State University of New Jersey, aswan@eden.rutgers.edu |
| Abstract: Measures of success and prestige for higher education institutions have traditionally relied on indices such as the scores of incoming students, reputation rankings, and amounts of funding. A new era of accountability has ushered in a focus on student learning and engagement in higher education. The Task Force on Undergraduate Education at Rutgers University recently developed and implemented a number of goals related to attracting and supporting high-quality students, creating a welcoming community, and increasing faculty participation in undergraduate education. An evaluation of these changes is currently being conducted. This paper will present research relevant to conducting such an evaluation, information about evaluation methods, and general advice about evaluating institutional change in higher education settings, in the context of the Rutgers undergraduate education transformation evaluation. |
| Roundtable Rotation II: Student Perspectives on the Meaning of Good College Teaching: Mexico-USA Differences |
| Roundtable Presentation 938 to be held in the Quartz Room Section B on Saturday, Nov 8, 4:00 PM to 5:30 PM |
| Sponsored by the Assessment in Higher Education TIG |
| Presenter(s): |
| Edith Cisneros-Cohernour, Universidad Autónoma de Yucatán, cchacon@uady.mx |
| Genny Brito Castillo, Universidad Modelo, cchacon@uady.mx |
| Abstract: The purpose of this paper presentation is to examine similarities and differences in student evaluations of college teaching in the US and Mexico. The researchers centered on the meaning that students give to the construct "good college teaching," the process that students follow when they rate their instructors, and the trade-offs that result from using student ratings as measures of instructional quality. |