Evaluation 2008

Contact emails are provided for one-to-one contact only and may not be used for mass emailing or group solicitations.

Session Title: New Evaluator "How-to" Session: Methods in Evaluation
Multipaper Session 124 to be held in Room 109 in the Convention Center on Wednesday, Nov 5, 4:30 PM to 6:00 PM
Sponsored by the Graduate Student and New Evaluator TIG
Chair(s):
Annette Griffith,  University of Nebraska Lincoln,  annettekgriffith@hotmail.com
Needs Assessments and Evaluations: Examples of Combining the Two
Presenter(s):
James Altschuld,  The Ohio State University,  altschuld.1@osu.edu
Abstract: It is often easy to embed a needs assessment (NA) technique, or the process of an assessment, into an evaluation and thereby achieve more than a focus on one or the other alone. This paper will begin with a brief overview of what NA is and examples of different types of needs. It will then move to concrete illustrations of NAs successfully woven into the fabric of evaluations, not only enhancing the quality of those efforts but also leading to opportunities for publication. Illustrative cases include situations where discrepancies were perceived quite differently by the stakeholders involved, and where needs were not fully taken into account when designing the program being evaluated.
The Importance of Logic Models in Program Planning, Implementation, and Evaluation
Presenter(s):
Ralph Renger,  University of Arizona,  renger@u.arizona.edu
Abstract: Logic models continue to be an important tool for developing coherent program evaluation plans. Over time, however, the emphasis on completing the logic model table has eroded evaluators' understanding that logic modeling is a process necessary to ensure that key elements, such as programmatic assumptions, strategies, and outcomes, are logically connected. This presentation will highlight how the evaluator can use the logic modeling process to help program developers avoid pitfalls during planning, implementation, and evaluation that could compromise a program's ability to demonstrate effectiveness.
Using Weighted Data in Evaluation
Presenter(s):
Steven Middleton,  Southern Illinois University at Carbondale,  scmidd@siu.edu
Nicholas G Hoffman,  Southern Illinois University at Carbondale,  nghoff@siu.edu
Abstract: Not all data collected in an evaluation represent the target population. There may be a problem with non-response, or the investigator may have purposefully altered the sample design to guarantee the inclusion of certain subgroups. In either case, adjustments are needed to correct the imbalance. The evaluator can apply one of many methods of weighting the data to bring the sample closer to representing the population. In practice, however, some evaluation projects weight their variables and many others do not. This paper will discuss reasons for and against applying weighting methods, address how and why to employ different data weighting methods, and demonstrate examples of data analyzed with and without various weighting schemes.
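The kind of adjustment this abstract describes can be sketched in a few lines. The example below is a minimal, hypothetical illustration of post-stratification weighting (one common weighting method, not necessarily the ones the presenters discuss); the subgroup names, sample values, and population shares are invented for demonstration.

```python
# Hypothetical sample in which urban respondents were oversampled:
# 6 of 10 respondents are urban, but urban is 40% of the population.
sample = {
    "urban": [4, 5, 3, 4, 5, 4],
    "rural": [2, 3, 2, 3],
}

# Known (assumed) population shares for each subgroup.
population_share = {"urban": 0.4, "rural": 0.6}

n_total = sum(len(vals) for vals in sample.values())

# Post-stratification weight = population share / sample share per group.
weights = {
    g: population_share[g] / (len(vals) / n_total)
    for g, vals in sample.items()
}

# Unweighted mean treats every respondent equally.
unweighted = sum(x for vals in sample.values() for x in vals) / n_total

# Weighted mean rebalances each group toward its population share.
weighted = sum(
    weights[g] * x for g, vals in sample.items() for x in vals
) / sum(weights[g] * len(vals) for g, vals in sample.items())

print(round(unweighted, 3), round(weighted, 3))  # 3.5 vs 3.167
```

Because the oversampled urban group scores higher, the unweighted mean (3.5) overstates the population value; weighting pulls the estimate down toward the rural group's larger population share.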
