Are There Important Differences Between Curriculum Evaluation and Program Evaluation?

Presenter(s):

Karen Zannini Bull, Syracuse University, klzannin@syr.edu

Abstract:
It has been claimed that program evaluation evolved out of curriculum evaluation during the 1970s (Pinar et al., 1995). If so, we might wonder, three decades later, whether there are currently important differences between curriculum and program evaluation that could be exploited to improve evaluation practice in each area. This paper examines that question through a comparative analysis, employing Smith's (1999) analytical framework for characterizing program and curriculum evaluation.

Do RCT-Mixed Method Designs Offer an Improved "Gold Standard" for Determining "What Works?" in Educational Programming?

Presenter(s):

John Hitchcock, Ohio University, hitchcoc@ohio.edu

Burke Johnson, University of South Alabama, bjohnson@usouthal.edu

Abstract:
Randomized controlled trials (RCTs) are currently advocated by federal agencies and prominent methodologists as the "gold standard," or best way, to answer the question of "What Works?" In this methodological article, we attempt to broaden the meaning of the term "What Works?" to include evidence of explanatory causation, program process, program exportability, and the information required when programs need intelligent tailoring and adaptation to local contexts. We present an argument for an improved "gold standard" based on the language and logic of mixed methods research. Drawing on a cross-disciplinary literature review, we document how qualitative data can improve, and have improved, traditional quantitative/RCT approaches to documenting "What Works?" Our intention is to provide an overview of how qualitative work can address cutting-edge concerns in RCT implementation, analysis and interpretation, and to stimulate discussion about optimal ways to evaluate educational programs, not to present a final answer.

Inquiry Into Context: Lessons for Evaluation Theory and Practice From Applying the Principles of Evaluability Assessment

Presenter(s):

Kate McKegg, The Knowledge Institute Ltd, kate.mckegg@xtra.co.nz

Meenakshi Sankar, Martin Jenkins, meenakshi@mja.co.nz

Abstract:
There is reasonable agreement within the evaluation profession that there is no 'best' evaluation plan or design. The criteria that have emerged for judging the quality or value of an evaluation include utility, feasibility, propriety, accuracy, credibility and relevance. Judgments made against these criteria depend on the situation; they are context-bound. Similarly, programs and policies are implemented in, and shaped by, the contexts in which they operate. Thus, for evaluators, understanding context is critical if evaluations are to be relevant, credible and useful. In this paper, we discuss our experiences in applying many of the principles of evaluability assessment to undertake structured inquiry into context and its potential impact on the types of evaluation questions that can be addressed, the types of methods and approaches that can feasibly be used, and the types of use that can most likely be planned for. Using case study examples, we discuss the strengths and limitations of evaluability assessment as a form of structured inquiry into the influence of context.

Taking Stock: Reflections on the Centers for Disease Control and Prevention's (CDC's) Framework for Program Evaluation at Ten Years

Presenter(s):

Michele Mercier, Centers for Disease Control and Prevention, zaf5@cdc.gov

Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu

Abstract:
2009 marks the 10th anniversary of the publication of the Centers for Disease Control and Prevention's (CDC's) Framework for Program Evaluation in Public Health. The framework was developed to incorporate, integrate, and make accessible to public health practitioners useful concepts and evaluation procedures from a range of evaluation approaches. While the framework has been widely adopted for evaluating federally funded programs throughout the United States, the evaluation contexts in which it is used have not been systematically identified or characterized. Using data derived from peer-reviewed journal publications from 1999 to 2009, we examine the framework's impact, influence and reach in public health and beyond, including evaluation more generally, the social sciences and education.