Evaluation 2009



Session Title: Metaevaluation and the Program Evaluation Standards
Panel Session 758 to be held in Sebastian Section I4 on Saturday, Nov 14, 10:55 AM to 11:40 AM
Sponsored by the Research on Evaluation TIG
Chair(s):
Chris L S Coryn, Western Michigan University, chris.coryn@wmich.edu
Discussant(s):
Leslie J Cooksy, University of Delaware, ljcooksy@udel.edu
Abstract: This panel session presents the results of two projects focused on the use of The Program Evaluation Standards (Joint Committee, 1994) for metaevaluation. A systematic content analysis of the standards' text revealed numerous overlap and dependency relationships between standards, which have implications for how the standards could be differentially weighted in a condensed, efficient instrument to facilitate metaevaluation. Results from an interrater reliability study, in which thirty individuals with a wide range of evaluation expertise used the standards to assess ten evaluations, shed light on the consistency with which the standards are used to reach judgments about evaluation quality. Both studies provide insights for the use and further development of the standards and suggest ways in which their use can be most effectively supported and advocated.
The Program Evaluation Standards Applied for Metaevaluation Purposes: Investigating Interrater Consistency and Implications for Practice
Lori Wingate, Western Michigan University, lori.wingate@wmich.edu
Professional evaluation rests on the premise that its procedures and results are systematic and objective. The Program Evaluation Standards (Joint Committee, 1994) have been a major contribution toward making evaluation practice more systematic. However, the standards embody two important, untested assumptions: (1) adherence to the standards will produce higher-quality evaluations, which reflects the standards' "guiding" function; and (2) different individuals using the standards as criteria of merit will reach comparable judgments about the quality of a given evaluation, which reflects the standards' "assessing" function. This paper presents the results of research undertaken to investigate the legitimacy of the latter assumption. The purpose of the study was to assess interrater reliability, as measured by coefficients of agreement, among a group of thirty evaluators who were charged with assessing the quality of ten evaluations using the Program Evaluation Standards as the criteria.
Documenting Dependency Relationships Between the Standards to Facilitate Metaevaluation
Carl Westine, Western Michigan University, carl.d.westine@wmich.edu
The thirty standards set forth by the Joint Committee on Standards for Educational Evaluation in The Program Evaluation Standards (PES) (Joint Committee, 1994) form the basis for a checklist used for metaevaluation (Stufflebeam, 1999). Identifying overlap between the standards, however, could simplify the metaevaluation process. Through a systematic content analysis, we examine what the PES itself reveals about the overlapping nature of the standards. Most standards explicitly reference as many as ten other standards in the textual overview, guidelines, and common errors sections of the PES. Further cross-references between standards that are not explicitly stated are also documented. Incongruence in the references between standards implies that a dependency relationship exists; moreover, the PES functional table of contents outlines still further dependency relationships between the standards. Documenting well-defined dependency relationships has implications for how the standards could be differentially weighted in a condensed, efficient instrument to facilitate metaevaluation.

