Evaluation 2008



Session Title: Perspectives on Using Stakeholder Judgment in Evaluation
Multipaper Session 251 to be held in Capitol Ballroom Section 4 on Thursday, Nov 6, 10:55 AM to 12:25 PM
Sponsored by the AEA Conference Committee
Chair(s):
Nicole Vicinanza,  JBS International,  nvicinanza@jbsinternational.com
Evaluating Training Efficacy
Presenter(s):
Yvonne Kellar-Guenther,  University of Colorado Denver,  yvonne.kellar-guenther@uchsc.edu
Abstract: Education through training is only useful if students are able to transfer what they learn to their jobs. Kontoghiorghes (2004) stated that training investments often fail because it is difficult to transfer what is learned in the training environment to the workplace environment (Subedi, 2006). This transfer of learning between the two environments is called training transfer. In my quest to evaluate an ongoing training program, I collected data after the trainees had been back at their jobs for a while, an approach supported by Santos and Stuart (2006). While some studies have used a similar approach, their follow-up data rely on students' self-assessments of their own learning. To test the validity of this approach, I collected objective data that was independently scored to assess change and compared it to the subjective measures to see whether the two scores are correlated.
Estimating Program Impact Through Judgment: A Simple But Bad Idea?
Presenter(s):
Tony Lam,  University of Toronto,  tlam@oise.utoronto.ca
Abstract: Estimating program impact is complicated, multifaceted, time-consuming, labor-intensive, and costly. To overcome these technical and logistical difficulties, evaluators, especially training evaluators, have resorted to relying on judgments to derive program impact estimates. After collecting the outcome data, evaluators ask various stakeholders to determine the degree to which the observed outcomes are attributable to the program, and sometimes to also report their confidence in those estimates. On its face, using self-reports to estimate program impact is more efficient than deriving such estimates empirically through experimental and quasi-experimental procedures. Unfortunately, and predictably, judgment-based impact estimates are susceptible to both intentional biases (e.g., self-serving bias) and unintentional biases (e.g., recall errors). In this paper, I will review the literature, describe and critique the process of using judgments to determine program impact, present the various sources of bias, and propose strategies to overcome these biases and to incorporate self-report data in program impact assessments.
Using Self Assessment Data for Program Development Dialogues: Lessons Learned from Assets Coming Together (ACT) for Youth
Presenter(s):
Amanda Purington,  Cornell University,  ald17@cornell.edu
Jennifer Tiffany,  Cornell University,  jst5@cornell.edu
Jane Powers,  Cornell University,  jlp5@cornell.edu
Abstract: Self-assessments aid in gathering data, engaging participants, and fostering discussion about improvements in practices and policies. We report on a New York State project that conducts self-assessments to promote program, organizational, and coalition development, particularly around integrating Positive Youth Development practices and principles. Our presentation will focus on the process of developing and administering self-assessments, as well as on data analysis and the dissemination of findings. We will also discuss challenges encountered, including: reporting findings and data in ways that are easily grasped by the diverse groups conducting self-assessments, strategies for using self-assessment findings more effectively to inform practice and policy, and using self-assessments from multiple groups to inform “big picture” analysis of systems and initiatives.
