|
How Do People Learn to Evaluate? Lessons Drawn From Work Analysis
|
| Presenter(s):
|
| Claire Tourmen,
Ecole Nationale d'Enseignement Supérieur Agronomique de Dijon,
klertourmen@yahoo.fr
|
| Abstract:
How do people learn to evaluate? We seek to bring new answers to this old question by looking at real evaluation activities. We have studied how evaluation is practiced and learned by beginners, and how it is practiced and was learned by experts. Models and methods drawn from work psychology highlight specific features of evaluation practice and of the learning process, and show the importance of a particular know-how that goes beyond the technical application of procedures. The best way to help beginners is therefore not to put too much emphasis on a method's step-by-step progression, but to have them practice methods in varied situations and toward varied goals.
|
|
Experiential Lessons From a Five-Year Program Evaluation Partnership
|
| Presenter(s):
|
| M Brooke Robertshaw,
Utah State University,
robertshaw@cc.usu.edu
|
| Joanne Bentley,
Utah State University,
kiwi@cc.usu.edu
|
| Heather Leary,
Utah State University,
heatherleary@gmail.com
|
| Joel Gardner,
Utah State University,
jgardner@cc.usu.edu
|
| Abstract:
Evaluation is a necessary core competency for instructional designers and instructional technologists to develop. A partnership was formed between the Utah Library Division and the Utah State University Instructional Technology department to evaluate the impact of federal technology funding throughout the state of Utah. These evaluations took place in 2001 and 2006. Many lessons, both expected and unexpected, were learned from the experience, including why service learning is an important tool for training instructional designers and instructional technologists.
|
|
Developing Understanding: A Novice Evaluator and an Internal, Participatory and Collaborative Evaluation
|
| Presenter(s):
|
| Michelle Searle,
Queen's University,
michellesearle@yahoo.com
|
| Abstract:
Cousins and Earl (1992) indicated that research about the “unintended effects” of participatory evaluations is needed (p. 408). Although they appear to refer primarily to clients, it is worth considering the effects of an internal, participatory evaluation on novice evaluators and their professional growth. This presentation examines a collaborative and participatory evaluation project that provided fertile ground for a novice evaluator to understand evaluative inquiry by engaging in all aspects of the evaluation process. Empirical data about the evaluation process can help in developing, understanding, and planning for the growth of novice researchers. Clandinin and Connelly (2000) state that narrative research is an immersion into the stories of others. Exploring an evaluative context using narrative strategies in data collection, analysis, and reporting allows this research to reveal the dynamics of a collaborative and participatory evaluation as a learning tool for the researcher, the evaluators involved, and aspiring evaluation researchers.
|
|
What Do Stakeholders Learn About Program Evaluation When Their Programs Are Being Evaluated?
|
| Presenter(s):
|
| Jill Lohmeier,
University of Massachusetts, Lowell,
jill_lohmeier@uml.edu
|
| Steven Lee,
University of Kansas,
swlee@ku.edu
|
| Abstract:
A qualitative analysis of knowledge and beliefs about program evaluation was conducted through pre- and post-interviews with key stakeholders as part of a five-year, objectives-based Safe Schools Healthy Students evaluation. The purpose of the interviews was twofold: 1) to assess how stakeholders' views and knowledge about program evaluation changed over the course of the evaluation; and 2) to identify important aspects of knowledge and beliefs about program evaluation that can be used to develop an instrument to measure these constructs. The findings will show how knowledge and beliefs changed over the course of the evaluation. Additionally, we will consider how the results can be used to develop a tool for assessing how, and what, stakeholders learn about evaluation when their programs are being evaluated. Limitations of the study and plans for future research will be discussed.
|