Session Title: Exploring Values Alignment in Five Evaluations of Science of Learning Centers
Multipaper Session 127 to be held in Malibu on Wednesday, Nov 2, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
| Chair(s): |
| Brenda Turnbull, Policy Studies Associates, bturnbull@policystudies.com |
| Discussant(s): |
| Gretchen Jordan, Sandia National Laboratories, gbjorda@sandia.gov |
Abstract: During the past six years, and continuing for another five, the National Science Foundation has funded six Science of Learning Centers (SLCs), research consortia each focused on an interdisciplinary set of topics in the learning sciences (e.g., temporal learning, spatial cognition). NSF required each SLC to engage an external evaluator to provide formative and summative evaluation services. Evaluation has had an interesting history among the SLCs, as the four center evaluators in this session will attest. The role of evaluators, clients' valuing of the evaluation enterprise, the funder's requests of the evaluation, and even the evaluation teams themselves have changed considerably over the course of the SLC program's life span. This session delves into the experiences of the evaluators as they have addressed what is valued in research center evaluation, how it is valued, and by whom, in these dynamic, political, multiple-stakeholder environments.
| It's an Evolution: Changing Roles and Approaches in the Evaluation of the Pittsburgh Science of Learning Center |
| Brian Zuckerman, Science and Technology Policy Institute, bzuckerm@ida.org |
The Science and Technology Policy Institute (STPI) has served as the external evaluator of the Pittsburgh Science of Learning Center (PSLC) since the Center's first year of operations. This presentation will describe changes in the Center's logic model and evaluation approach in tandem with the evolution of the PSLC's research strategy and organization over the lifecycle of the award. Points to highlight include formative assessments of "centerness" in the early years of the award; the use of (and pitfalls in) bibliometric assessment of research quality and interdisciplinarity; and the development of a set of key indicators for annual analysis and visualization. The complexity of the evaluator's role, and the need to balance sometimes-competing values and stakeholder concerns, will also be discussed.
| Evaluating the Temporal Dynamics of Learning Center: Addressing Multiple Stakeholders' Information Needs |
| Brenda Turnbull, Policy Studies Associates, bturnbull@policystudies.com |
| The Temporal Dynamics of Learning Center (TDLC) comprises four networks totaling 39 labs across 12 institutions. The evaluation uses member surveys to assess center functioning (e.g., communication about science and communication about administrative matters such as budgets), cross-lab collaboration, and trainee experiences. Responding to the center's aim of creating a "network of networks" for scientific research, the evaluators use social network analysis of survey data to document evolving cross-lab collaboration within and across TDLC's networks. Responding to NSF's interest in the value added by a center mode of funding, the evaluators survey a comparison group of scientists and compare their collaboration practices with those of TDLC investigators and trainees. Responding to external Site Visit Teams who review centers for NSF, the evaluators conduct bibliometric analyses of center and comparison scientists' publications. A challenge for the evaluators has been stakeholders' unfamiliarity with survey methods for evaluation. |
| Organizational Consultant, Critical Friend, and Evaluator: The Value and Challenge of Flexible Roles |
| Kristine Chadwick, Edvantia, kristine.chadwick@edvantia.org |
| Jessica Gore, Edvantia, jessica.gore@edvantia.org |
The evaluations of the Science of Learning Center on Visual Language and Visual Learning (VL2) and the Spatial Intelligence and Learning Center (SILC) have required a great deal of flexibility. In both evaluations, Edvantia became the evaluator halfway through the centers' five-year funding period because center leadership was dissatisfied with the prior evaluators. In both cases, leadership was not clear on what evaluation was needed and wanted to try something (someone) else. The leadership teams had been unfamiliar with evaluation before the centers, and their values toward evaluation were still forming during the first two or three years of center operation. Since Edvantia took over, the centers' leadership teams have valued the evaluator's ability to serve as an organizational consultant and critical friend. Less valued by leadership, yet highly valued by the funder, has been the evaluation function: assessing the relative merit, "centerness," or "value added" of center-based research.
| Coming from Behind: Developing a Logic Model and an Evaluation Strategy Late in the Center's Life Cycle |
| Judah Leblang, Lesley University, bleblang@lesley.edu |
Judah Leblang served as the primary evaluator for the Center of Excellence for Learning in Education, Science and Technology (CELEST) from July 2009 to June 2010. Until the Program Evaluation and Research Group (PERG) took over the evaluation, CELEST had no logic model or rigorous evaluation plan. His evaluation work focused on assessing the "centerness" or value of CELEST as a center, on collecting and analyzing data for the project during Year 6, and on reviewing key trends during Years 1-5. Leblang, in conjunction with PERG staff, worked with the CELEST PI, co-PIs, and advisory board to prepare for the project's critical site review, which occurred in March 2010. CELEST had undergone a major reorganization shortly before PERG became the evaluator, and much of the evaluation focused on the organizational changes made by CELEST's leadership team and how those changes affected key stakeholders, including researchers, graduate students, and the participating institutions.