Contact emails are provided for one-to-one contact only and may not be used for mass emailing or group solicitations.

Session Title: Research on Evaluator Competencies
Multipaper Session 852 to be held in Texas F on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Research on Evaluation TIG
Chair(s):
John LaVelle, Claremont Graduate University, john.lavelle@cgu.edu
Discussant(s):
John LaVelle, Claremont Graduate University, john.lavelle@cgu.edu
The Search for Evaluator Competency Inventories
Presenter(s):
Jeanette Gurrola, Claremont Graduate University, jeanette.gurrola@cgu.edu
Abstract: Using Stevahn, King, Ghere, and Minnema's (2005) Essential Competencies for Program Evaluators, a content analysis of articles and chapters representing three theoretical evaluation approaches was conducted. The goal of the study is to examine the key competencies required to design and implement different evaluation approaches. The data collected will be analyzed by evaluation approach to determine how the approaches converge and differ in the evaluator competencies they emphasize. The results of this study have numerous implications for strengthening the quality of future evaluations through avenues such as evaluator training and additional research on evaluation specific to particular evaluation approaches.
Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43-59.
Constructing a Measure for Evaluator Competencies: Exploratory and Confirmatory Factor Analyses Approach
Presenter(s):
Jie Zhang, Syracuse University, jzhang08@syr.edu
Abstract: In a practice-based field like evaluation, evaluators should be equipped with the essential knowledge, skills, and dispositions defined as evaluator competencies (King, 2005) in order to perform their required tasks. Evaluator competencies are what distinguish evaluation from other professions. Despite their importance, there is no specific, comprehensive set of competencies that evaluators and evaluation training programs can follow. The purpose of this study is to fill that void by creating a scale of evaluator competencies and testing its reliability and validity within a factor-analytic framework. The research process is guided by Wilson's (2005) four steps of constructing measures: creating construct maps, designing items, identifying the outcome space, and establishing the measurement model. An exploratory factor analysis is first conducted to extract factors from the set of created items; a subsequent confirmatory factor analysis examines the convergent and divergent validity of the scale and forms a foundation for future research.
