Session Title: Expanding Our Knowledgebase: Current Research on Teaching Evaluation

Panel Session 894 to be held in CROCKETT B on Saturday, Nov 13, 2:50 PM to 4:20 PM

Sponsored by the Teaching of Evaluation TIG

Chair(s):
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Abstract:
Recent calls for research on evaluation highlight the importance of exploring professional issues, including evaluation training (Mark, 2008). Although information has been published about teaching evaluation, existing studies tell us little about how individuals who currently practice evaluation were trained to do their jobs, the type of evaluation-related training individuals within specific substantive disciplines (e.g., public health and education) receive, or the promise of unique instructional approaches for acquiring competence in evaluation. Such information is valuable for those who design academic coursework and professional development training. Current research on each of these topics will be presented in an effort to begin filling gaps in the existing knowledgebase and to stimulate ideas for future research.
A Descriptive Study of Evaluator Course Taking Patterns and Practice

Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Leslie Fierro, SciMetrika, let6@cdc.gov
Tarek Azzam, Claremont Graduate University, tarek.azzam@cgu.edu
Discussions about evaluation practice have mainly focused on descriptions of the strategies and methods used. Less is known about peripheral determinants of practice, such as who conducts evaluations and how they were trained to do so. Data regarding courses completed by practitioners who self-reported evaluation as their primary or secondary professional identity were analyzed to answer four questions: What courses do practitioners report completing? Where do they complete this coursework? What constellations of coursework are practitioners most likely to complete? What factors are associated with course-taking patterns? Study findings are useful for practitioners and those en route to careers involving evaluation: persons seeking guidance about which courses might enhance their evaluation practice can identify the coursework and professional development that experienced practitioners completed. Findings are also useful for academic and professional development planning; coordinators can use them to shape curricula and to offer courses in areas that practitioners do not frequently pursue.
Evaluation Coursework in Schools and Programs of Public Health

Leslie Fierro, SciMetrika, let6@cdc.gov
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Evaluation is one of ten essential services of public health, yet little is known about how public health academe prepares students to design and conduct evaluations. This presentation shares findings from a study that systematically examined the prevalence and content of required coursework in program evaluation for students pursuing an MPH in epidemiology or in social and behavioral sciences. Schools and programs of public health accredited by the Council on Education for Public Health (CEPH) as of August 2008 that offered both an MPH in epidemiology and an MPH in an area of social and behavioral sciences (N = 51) were included in the study. Findings from a review of over 1,000 course descriptions associated with completing these degrees will be discussed, and details regarding the specific content of evaluation-related coursework will be shared.
Contexts and Teaching Evaluation: An Example From Educational Administration Programs

Tara Shepperson, Eastern Kentucky University, tara.shepperson@eku.edu
Findings from a recent study of evaluation courses in doctoral programs for educational administrators confirm the prevalence of evaluation courses in schools of education. Yet the findings also suggest that evaluation practice is understood within specific educational constructs that give coursework its contextual focus. Studying evaluation within disciplines may therefore require careful consideration of multiple influences; in educational administration, these include academic reform initiatives, national policy requirements, and professional standards. These and other issues shape how evaluation is taught. Methods for collecting data and relevant background issues will be discussed as possible ways to frame future studies and to develop better intra-disciplinary and cross-disciplinary approaches to investigating the teaching of evaluation.
Informal Discussion as Socialization and Teaching Tools in Program Evaluation

Anne Vo, University of California, Los Angeles, annevo@ucla.edu
What is involved in the process of mastering the art of program evaluation? How do program evaluators learn their craft? This study addresses these questions by describing how an evaluation instructor uses an informal learning space to apprentice student evaluators into their work. Data were collected through participant observation, interviews, and the gathering of site documents; analysis was grounded in conversation-analytic and ethnographic methods. Results point to practical strategies that evaluation instructors can use to facilitate learning through discussion in informal settings and to ways of creating a safe and productive space in which such learning can occur. Overall, the findings provide a useful model for infusing a traditional evaluation training program, which usually consists of coursework and practica, with alternative learning opportunities.