
Session Title: Enhancing Evaluation Practice: Studies Examining the Value of Different Evaluation Practices
Multipaper Session 229 to be held in Laguna A on Thursday, Nov 3, 8:00 AM to 9:30 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
A Call To Action: Evaluating Evaluators' Recommendations
Presenter(s):
Jennifer Iriti, University of Pittsburgh, jeniriti@yahoo.com
Kari Nelsestuen, Education Northwest, kari.nelsestuen@educationnorthwest.org
Abstract: While there are many components to any evaluation, perhaps none is more visible to clients than the practice of making recommendations. Yet, as a field, we have virtually no empirical understanding of how recommendations are used and what impact they have on programs and outcomes. As a result, we have no evidence with which to demonstrate the merit or worth of our most visible practice. In a profession whose very purpose is to determine whether programs have been well implemented and have had positive outcomes, we must ensure that our own practices are likewise supported by evidence of proper implementation and positive outcomes. In this paper, the authors make the case for increased empirical study of recommendations by first summarizing the extant literature and identifying what is and is not known, then demonstrating how these gaps threaten the integrity of our profession. Finally, the authors propose a research agenda to strategically advance the field's understanding of recommendations.
Factors Impeding and Enhancing Exemplary Evaluation Practice
Presenter(s):
Nick Smith, Syracuse University, nlsmith@syr.edu
Abstract: Researchers have conducted few systematic investigations of the nature of exemplary evaluation practice. Awards for outstanding practice typically reflect post hoc recognition of a study's quality as evidenced in its design or impact. These awards provide little insight into how to consistently conduct exemplary practice. This paper examines a dozen selected evaluation cases to identify factors that may either impede or enhance exemplary practice. A variety of factors are considered, including those related to evaluation purpose, study design, theoretical and methodological approach, resource management, client relationships, contextual and cultural influences, the nature of the evaluand, and sector of work. Of particular interest are whether there are necessary and sufficient situational conditions for exemplary practice and the role of evaluator expert judgment in managing the ongoing implementation of evaluation activities. By better understanding such factors, evaluators can improve practice, upgrade evaluator training, and strengthen the theory of practice.
Bias in the Success Case Method: A Monte Carlo Simulation Approach
Presenter(s):
Julio Cesar Hernandez-Correa, Western Michigan University, julio.c.hernandez-correa@wmich.edu
Abstract: The Success Case Method (SCM) is an evaluation method intended to collect the minimum amount of information needed at the lowest possible level of intrusion, time, and cost. One of the main objectives of the SCM is to provide estimates of return on investment that can help an institution assess the cost-effectiveness of an implementation. However, Brinkerhoff (2002) acknowledged that the SCM produces biased information with respect to the central value. This paper uses Monte Carlo simulations to assess several issues related to the bias of the SCM. The non-random selection of successful and unsuccessful individuals in the interview step, along with the presence of outliers, chance, and incorrect counterfactual design, may all be sources of bias in the SCM. This paper also proposes intermediate steps to correct bias in the SCM's results and to provide more accurate estimates of return on investment.
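As a rough illustration of the selection bias at issue (a sketch for this program listing, not code from the paper), the following Python simulation compares a full-sample effect estimate with one computed only from purposively selected "success cases." All parameter names, sample sizes, and distributional choices here are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_scm_bias(n_trainees=200, true_effect=1.0, noise_sd=2.0,
                          n_success_cases=10, n_sims=5000):
        """Compare the full-sample mean effect with the mean effect
        estimated only from the top 'success cases' (illustrative of,
        not identical to, the SCM interview step)."""
        full_sample_means = np.empty(n_sims)
        success_case_means = np.empty(n_sims)
        for i in range(n_sims):
            # Observed outcome = true program effect + individual noise
            # (assumed normal here purely for demonstration).
            outcomes = true_effect + rng.normal(0.0, noise_sd, size=n_trainees)
            full_sample_means[i] = outcomes.mean()
            # Purposive selection: keep only the top performers.
            top = np.sort(outcomes)[-n_success_cases:]
            success_case_means[i] = top.mean()
        return full_sample_means.mean(), success_case_means.mean()

    full_mean, scm_mean = simulate_scm_bias()
    print(f"true effect: 1.00  full-sample estimate: {full_mean:.2f}  "
          f"success-case estimate: {scm_mean:.2f}")

Under these assumptions the success-case estimate substantially overstates the true effect, because conditioning on the upper tail of a noisy outcome confounds program effect with noise; this is the kind of central-value bias the abstract describes.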
How Program Evaluation Standards Are Put Into Professional Practice: Development of an Action Theory for Evaluation Policy and Research on Evaluation
Presenter(s):
Jan Hense, University of Munich, jan.hense@psy.lmu.de
Abstract: The Program Evaluation Standards, issued by the Joint Committee on Standards for Educational Evaluation, aim to enhance the quality of the professional practice of evaluation. However, three decades and two major revisions after the publication of their first edition, little is known beyond anecdotal evidence about the actual use and impact of the standards on the profession. The standards themselves do not explicitly articulate the mechanisms that are expected to make them instrumental in improving the practice of evaluation. Based on theoretical considerations and an analysis of the standards' underlying assumptions, a conceptual framework is proposed that outlines such mechanisms at the individual and evaluation-policy levels. This action theory aims to guide evaluation policy in further promoting the standards' application and utility. At the same time, it can be used as a research framework for analyzing the standards' actual impact. An exploratory study is presented as an example of such research.
