Evaluation 2008



Session Title: Basic Considerations in Theory-Based Evaluations
Multipaper Session 683 to be held in Capitol Ballroom Section 4 on Friday, Nov 7, 4:30 PM to 6:00 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
John Gargani, Gargani and Company Inc, john@gcoinc.com
Assessing Program Strategies and Priority Outcomes: Evaluating Both Sides of Program Models
Presenter(s):
Kathryn Race, Race and Associates Ltd, race_associates@msn.com
Abstract: Through exemplar case studies drawn from evaluation practice, this paper discusses ways in which program strategies can be assessed descriptively, such as through rubric development and other assessment tools, when sample sizes prohibit the use of quantitative methods such as structural equation modeling. Examples are taken from education evaluations in formal and informal settings, including a 3-year evaluation of a multi-phased, hybrid after-school and family outreach program; a math and science partnership for certification of middle-school teachers in physical science; and science literacy programs for public school teachers. Assessing program strategies provides a check and balance that helps guide the use of strategies aligned with empirical evidence. Through this process, program fidelity becomes an integral part of both formative and outcome evaluation, informing assessment of intervention strength and program “dosage.” Implications for applying this approach in other evaluation settings are also discussed.
Practical Issues in Program Evaluation
Presenter(s):
Doris Rubio, University of Pittsburgh, rubiodm@upmc.edu
Sunday Clark, University of Pittsburgh, clarks2@upmc.edu
Abstract: As grant and contract applications increasingly require well-developed evaluation plans, program evaluation is becoming a desired skill set. Yet the literature offers few specific examples of how to develop and implement a program evaluation plan. We present different models for evaluating a large, multi-component, institutional program. Across all models, we found the use of a comprehensive model to be critical: it yields a plan that is both formative and summative, so the program can use the information for internal improvement as well as external reporting. Another important component of a successful evaluation plan is the regular exchange of information with stakeholders to obtain ‘buy-in,’ which facilitates the evaluation. For our program evaluation, we found the logic model framework particularly helpful in developing a successful evaluation plan. If designed properly, evaluation can significantly enhance the effectiveness of a program.
The Utility of Logic Models in the Evaluation of Complex System Reforms: A Continuation of the Debate
Presenter(s):
Mary Armstrong, University of South Florida, armstron@fmhi.usf.edu
Amy Vargo, University of South Florida, avargo@fmhi.usf.edu
Abstract: In his presentation at the 2007 American Evaluation Association conference, Michael Quinn Patton challenged the effectiveness of logic models as an evaluation tool for systems in which emerging conditions call for rapid response and innovation. Other researchers, such as Leonard Bickman, continue to emphasize the need for approaches, including logic models, that articulate the theory underlying a program or intervention. This paper will contribute to this ongoing dialogue by illustrating the use of logic models in two related evaluations of a privatized child welfare system, tracking how logic modeling techniques can be effective at different points in time and for different audiences during system development.

