
Session Title: Evaluation in Education: Promises, Challenges, Booby Traps and Some Empirical Data
Panel Session 804 to be held in International Ballroom B on Saturday, November 10, 1:50 PM to 3:20 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Katherine McKnight,  Pearson Achievement Solutions,  kathy.mcknight@pearsonachievement.com
Abstract: NCLB legislation, with its emphasis on accountability and evidence-based programs and practices, implies a central role of evaluation in education program and policy decision-making. Therefore, the onus is on evaluators to design and conduct evaluations that produce usable information capable of serving as the basis for effective education decision-making. To produce usable information, research and evaluation must be relevant to the needs of the decision-makers. The focus of this panel is to describe the kind of information needed for a usable knowledge base to guide education decision-making and to suggest guidelines for evaluators in the design and conduct of program evaluations in the field of education. For education to advance as a field grounded in science, evaluators must continually assess gaps in the knowledge base, searching for and testing general principles upon which that knowledge base can expand and effectively inform program development and policy-making.
Evaluations as Tests of Theory
Lee Sechrest,  University of Arizona,  sechrest@u.arizona.edu
References to 'evidence-based' policy and practice in education (and other fields) are so frequent and casual as to suggest that the problems of identifying, synthesizing, and interpreting research pose no real difficulties. Yet deciding what should count as evidence is not straightforward, and synthesizing conceptually, methodologically, and empirically diverse findings across samples that are frequently ill-defined and unrepresentative of any population of interest is extraordinarily difficult, if possible at all. Moreover, the societal, economic, and cultural conditions under which recommendations must be applied are markedly variable and change in unpredictable ways over time. It is pointless to fall back on the virtually uniform conclusion that 'more research is needed'. What is needed is better theory about educational variables, effects at all levels, and the interpretation of evidence in relation to theory. Evidence can further the development of theory, which in turn can provide the foundation for the knowledge base applied to policy and practice.
What do we Mean by "What Works?"
Katherine McKnight,  Pearson Achievement Solutions,  kathy.mcknight@gmail.com
A common approach to evaluation is the proverbial 'horse race' study, in which interventions are pitted against each other. Too often these studies fail to elucidate why one intervention would be better than the other, and for whom. Problems arise from a lack of program definition (the black box problem) and rationale (why it ought to work). We are left with differences in outcomes without an understanding of how they were produced. Without that understanding, Lipsey (1990) argues, an intervention "can only be reproduced as ritual in the hope it will have the expected effects." Small theories are necessary for building the kind of knowledge needed to understand study outcomes and reproduce them elsewhere; they give meaning and explanation to events and support new insights and problem-solving efforts (Lipsey, 1990). In this paper, we focus on how education research would benefit from the small theory approach.
Education and Instructional Materials Development: Toward Evidence-Based Practice
Christopher Brown,  Pearson School Companies,  christopher.brown@phschool.com
There appears to be a small but growing realization that education has much to learn from other industries, especially medicine and agriculture, about becoming an evidence-based practice. Progress seems very slow, and some mechanisms, such as NCLB and state regulations, have had unintended consequences that may be hampering the effort. This paper will discuss the state of evidence-based practice in K-12 schools as well as the R&D conducted for instructional materials. It will suggest the need to examine and strengthen the entire evidence-based value chain, including the roles, capabilities, and expectations of researchers and evaluators, developers, teachers, students, parents, schools of education, states, districts, and the federal government. It will discuss the considerable friction between an evidence-based perspective and the regulatory/compliance-based system of US K-12 education. Ideas for strengthening the value chain and truly engaging in evidence-based practice will be presented.
What is Taught and What is Tested? Evidence From the Programme for International Student Assessment
Werner Wittmann,  University of Mannheim,  wittmann@tnt.psychologie.uni-mannheim.de
There is much debate about the problem of teaching to the test. Grades should ideally mirror what has been taught, and individual differences in grades should reflect different amounts of learning related to the content of instruction. How, then, are PISA test scores in reading, math, and science related to the respective grades? PISA data from the USA and selected countries are reported, demonstrating large differences in the predictability of grades from cognitive and non-cognitive variables. The implications of these results for evidence-based education are discussed.
A Research and Development (R&D) Approach to Education Interventions
Ronald Gallimore,  Pearson Achievement Solutions,  ronaldg@ucla.edu
NCLB, with its emphasis on accountability and evidence-based practice, pressures education decision-makers and researchers to demand and provide immediate evidence for a given intervention if it is to be adopted. The need for accountability and scientific evidence in education is not at issue; however, the process by which evidence is accumulated in this type of pressure-driven system is not optimal for developing a useful knowledge base by which to develop programs and determine policy. In this paper, we focus on a systematic, multi-faceted and iterative approach to accumulating evidence for an intervention designed to improve student learning and achievement of Native Hawaiian students. This example reflects an R&D approach to developing, testing and refining education interventions consistent with Lipsey's (1990) notion of building small theories and accumulating a useful knowledge base upon which to develop effective interventions.