Evaluation 2011



Session Title: A Method to Our Madness: Program Evaluation Teaching Techniques
Panel Session 116 to be held in Coronado on Wednesday, Nov 2, 4:30 PM to 6:00 PM
Sponsored by the Teaching of Evaluation TIG
Chair(s):
Bonnie Stabile, George Mason University, bstabile@gmu.edu
Discussant(s):
Linda Schrader, Florida State University, lschrader@fsu.edu
Abstract: Those who teach evaluation are always on the lookout for teaching techniques that communicate the core concepts of evaluation engagingly and effectively. This panel is intended to serve as an idea exchange in which those who teach evaluation can learn from their AEA colleagues about techniques that have worked particularly well in the classroom. Topics covered will range from novel ways to introduce the study of evaluation to tips for teaching particular tools, such as logic models. Teaching techniques that help students engage effectively with evaluation clients will also be considered.
Teaching Evaluation Using a Photography Analogy
Nick Fuhrman, University of Georgia, fuhrman@uga.edu
Evaluation and photography have a lot in common. If the purpose of evaluation is to collect data (formative and summative) that inform decisions, more than one "camera," or data collection technique, is often best. We have qualitative cameras (a long lens to focus on a few people in depth) and quantitative cameras (a short lens to focus on many people, but with less detail). Some pictures will be close-ups and some will be wide-angle shots; both are needed to make a decision. In evaluation, we call the different aspects of what we're measuring "dimensions." I think about three major things we can measure: knowledge change, attitude change, and behavior/behavioral intention change following a program or activity. Each of these has dimensions (or different levels of intensity) associated with it. It takes more than one picture to determine whether our educational efforts influenced knowledge, attitudes, or behavior, and to make decisions about program value.
A Worksheet for Conducting Evaluability Assessment
Helen Holmquist-Johnson, Colorado State University, helen.holmquist-johnson@colostate.edu
I have designed a worksheet to help students navigate and negotiate a program evaluation to be carried out in their internship agencies. The worksheet guides students through conducting an evaluability assessment. Though it was designed specifically for graduate students in the Master of Social Work evaluation class, it will likely work well in other program settings.
Online Discussions in Teaching Program Evaluation
Jill Hendrickson Lohmeier, University of Massachusetts, Lowell, jill_lohmeier@uml.edu
Particularly for students from diverse geographic and professional areas, targeted discussion questions both allow and require students to explain evaluation situations and challenges that may be unique to their own settings. These discussions last at least a week and include multiple comments from all students. Students have commented on how eye-opening this has been for them.
Roleplay for Teaching Investigation and Negotiation of Client Evaluation Needs
Mara Schoeny, George Mason University, mschoeny@gmu.edu
This presentation will highlight a roleplay option for teaching the investigation and negotiation of client evaluation needs. The roleplay provides a brief program background and a variety of roles, including funders, project directors, and service providers, each of whom has different interests in the evaluation's scope and direction. The evaluators gain practice in investigation, facilitating participation, and recognizing the different perspectives on and interests in evaluation results.
Hot or Cold? Introducing Evaluation
Claire Tourmen, AgroSup, claire.tourmen@educagri.fr
When introducing the concept of evaluation, I begin by asking a simple question: "If I say that it's too cold in this room, what did I do?" Students must identify the main operations involved in an evaluation. The final operation is to assert a judgment, such as "It is too cold." To be able to do that, I had to gather some data (by any means: I checked a thermometer, I shivered, I saw people shivering, etc.). The point is that I had to interpret these data to make my judgment. Then I ask a second question: "For example, if I saw that the temperature was 59°F, what does that mean? Is it cold or not?" The answer they always give is: "It depends!" People come to understand that to be able to judge any object, you need to compare the data you have gathered to other elements that give it value.
Cookies
Susan Allen Nan, George Mason University, snan@gmu.edu
The classic evaluation exercise of evaluating cookies started us on a cookie-themed course. Beginning with evaluating cookies as a way to learn about evaluation and indicators, we found in each subsequent class a reason to engage with cookies. Weeks later, after that data was lost, we tried to "reconstruct the baseline" and recall what the cookies had been like in the first week. We considered the ethics of cookie provision just before the course evaluations. An element of developmental evaluation led the class to consume carrots in lieu of cookies. We could also have introduced adjusting a cookie recipe together after a formative evaluation.
Using Note Cards in Logic Model Development
Shandiin Wood, University of Arizona, shandiin.wood@gmail.com
The process involves an interview between the evaluator and the stakeholder. While the stakeholder recalls causal conditions for a stated problem, the evaluator writes them down on note cards and lays them on the table or surface in front of them. During the root cause analysis, the evaluator guides the stakeholder through an "if-then" logical flow: "if" one causal condition were to occur, "then" another causal condition would exist. This process closely resembles the ATM logic modeling process described in Renger and Titcomb (2002). A key difference is the use of note cards, which offer the benefit that stakeholders can move the cards themselves, a small but important means of creating stakeholder buy-in.
Evaluation Start to Finish: A Final Project
Liudmila Mikhailova, CRDF Global, mikhailova@msn.com
This project spans the whole semester so that students learn the entire process of evaluation, from designing an evaluation plan, evaluation questions, and evaluation instruments, through evaluation methodologies, data collection, and data analysis, to report writing. Particular emphasis is placed on the cultural contexts in which programs operate. Students work in teams to design an evaluation study as their final project. Each group of students conducts research on current or past international programs administered by Washington, DC-based U.S. non-profit organizations, such as World Learning, the Eurasia Foundation, NED (National Endowment for Democracy), CIEE (Council on International Educational Exchange), and other development and exchange organizations. Students must make appointments with the respective organizations to collect information about their selected projects and to interview project/program managers, which enhances their interviewing skills as well.

