Session Title: Practical Arguments, Checklists, and Meta-Evaluation: Tools for Improving Evaluation Theory and Practice
Multipaper Session 740 to be held in Liberty Ballroom Section B on Saturday, November 10, 10:30 AM to 12:00 PM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Bernadette Campbell,  Carleton University,  bernadette_campbell@carleton.ca
The Relation Between the Application of the Process Specific to Program Evaluation and the Quality of Judgments and Recommendations
Presenter(s):
Marthe Hurteau,  Université du Québec à Montréal,  hurteau.marthe@uqam.ca
Stéphanie Mongiat,  Université du Québec à Montréal,  smongiat@hotmail.com
Sylvain Houle,  Université du Québec à Montréal,  houle.sylvain@uqam.ca
Abstract: Building on Scriven's (1980) “Logic of Evaluation” and the work of Fournier (1995) and Stake (2004), the research of Hurteau & Houle (2005) and Hurteau, Lachapelle & Houle (2006) has led to the development and validation of a model of the process specific to program evaluation. The results also emphasize that evaluators often fail to refer to standards, or rely on implicit ones, in elaborating their claims, which affects the quality of judgments and recommendations. In another context, and using a different methodology, Arens (2006) reaches the same conclusion concerning the use of standards. Further research, based on the analysis of 40 evaluation reports, has allowed Hurteau, Mongiat & Houle to conclude that evaluators do not systematically follow the model and that this appears to affect the quality of judgments and recommendations. The researchers will present the model in question and the results of their work.
The Logic of Practical Arguments in Evaluation
Presenter(s):
Nick L Smith,  Syracuse University,  nlsmith@syr.edu
Abstract: Although evaluation logic usually refers to the logic of research designs, the real logic of evaluation concerns the construction of convincing evaluation arguments that assist clients in making decisions and taking action. This paper presents a strategy for constructing local, context-sensitive, and case-specific evaluation arguments. Five aspects are considered: (1) the essential characteristics of client-centered evaluation practice, (2) the types of claims comprising evaluation arguments, (3) the levels of evidence associated with various claims, (4) how claims form lines of argument, and (5) the criteria for comparative evaluation of multiple lines of argument. Collectively, these aspects provide a logic enabling evaluators to produce persuasive, case-specific evaluation arguments. Through a closer analysis of the nature of evaluation practice, and of the influence of evaluation context on evaluation arguments, this paper contributes a fundamental form of logic for constructing practical arguments across any type of client-centered evaluation.
An Evaluation Checklist: Educative and Meta-evaluation Opportunities
Presenter(s):
Jennifer Greene,  University of Illinois at Urbana-Champaign,  jcgreene@uiuc.edu
Lois-ellin Datta,  Datta Analysis,  datta@ilhawaii.net
Jori Hall,  University of Illinois at Urbana-Champaign,  jorihall@uiuc.edu
Jeremiah Johnson,  University of Illinois at Urbana-Champaign,  jeremiahmatthewjohnson@yahoo.com
Rita Davis,  University of Illinois at Urbana-Champaign,  -
Lizanne DeStefano,  University of Illinois at Urbana-Champaign,  destefano@uiuc.edu
Abstract: Evaluation checklists offer succinct statements of the ambitions, intentions, and commitments of a given approach to evaluation, as translated into guidelines for evaluation practice. Checklists can be used to plan and guide evaluation implementation and to assess the quality of a particular evaluation (meta-evaluation). Evaluation checklists also have educative and capacity-building potential, both as a general resource and as implemented in tandem with an evaluation study, e.g., through self-conscious attention to checklist items throughout an evaluation study. This paper offers a window into both the educative and meta-evaluative functions of evaluation checklists. As part of a field test of an educative, values-engaged approach to STEM education evaluation, a checklist representing the key features of this approach is being used to guide both a meta-evaluation and self-reflective practice by the evaluation team. The paper will highlight the contributions of the checklist to the quality and meaningfulness of evaluation practice.
Using the Metaevaluation Synthesis to Improve the Quality of State-level Evaluations
Presenter(s):
Paul Gale,  San Bernardino County Superintendent of Schools,  ps_gale@yahoo.com
Abstract: The presenter will briefly illustrate the rationale and methods for conducting a Metaevaluation Synthesis. The intent is for professionals to learn a systematic, replicable method of integrating several metaevaluations of a single program's series of evaluations to create five indicators of quality. These indicators are standards-driven, based on the Program Evaluation Standards (1994) attributes of utility, feasibility, propriety, and accuracy. As such, they may be used to guide changes in evaluative practice and requirements, since they provide a basis for identifying systemic strengths and weaknesses in a program's evaluations. Although the method was developed in the context of the Comprehensive School Reform Program (CSRP), it may be applied to multiple summative evaluations of any program.