Evaluation 2008


Session Title: The Impact of Quality Assurance and Evaluation: Stakeholder Perceptions
Multipaper Session 427 to be held in Room 107 in the Convention Center on Thursday, Nov 6, 4:30 PM to 6:00 PM
Sponsored by the Evaluation Use TIG
Chair(s):
Edward McLain,  University of Alaska Anchorage,  ed@uaa.alaska.edu
Can Participatory Impact Pathway Analysis be Used by Evaluators to Engage Stakeholders While Increasing Cultural Competencies of Evaluators?
Presenter(s):
Alice Young-Singleton,  University of Southern California,  youngsin@usc.edu
Abstract: One prevalent foundational principle in the development of logic models (whether to facilitate program design, program implementation, or to report program outcomes) is the collaborative and participatory process required when seeking to visually depict a program’s theory (Kellogg Foundation, 1998; Gasper, 2000; Mayeske, 2002). A large body of literature on logic models purports that evaluators’ involvement of key stakeholders in articulating programmatic assumptions and priorities affects logic model utility, ownership, and ultimately programmatic success (Mayeske, 2001; Cooksy, 2001; Renger, 2006; Kirkpatrick, 2001; Greene, 2001; Hopson, 2003). However, logic models are often designed by the evaluator with little input from or interrogation by principal stakeholders, prompting the tool to be associated only with (1) the evaluation phase of a program and (2) the evaluator (Coffman, 1999). Yet evaluations conducted using participatory impact pathway analysis, a derivation of the logic model, suggest that evaluators may employ the logic model not only to engage stakeholders in a participatory process that ascertains a program’s theory but also to increase their own cultural competency for evaluation practice. Hence, this paper draws on literature and evaluative studies utilizing participatory impact pathway analysis to examine how it has been used to engage stakeholders while increasing evaluators’ cultural competencies, in order to inform evaluation approach and practice. Through this process, the paper also seeks to focus or refocus the attention and purpose of evaluation systems on utility, employing methods that are responsive to the diversity of organizations effectively addressing “the needs of the full range of targeted populations.”
High Quality Program Evaluation with Unrealistic Outcome Expectations
Presenter(s):
Courtney Brown,  Indiana University,  coubrown@indiana.edu
Mindy Hightower King,  Indiana University,  minking@indiana.edu
Marcey Moss,  Indiana University,  marmoss@indiana.edu
Abstract: Evaluators are increasingly faced with evaluating programs with unrealistic but expected outcomes. How and what evaluators evaluate is directly related to the funding agency’s expectations, whether these are realistic in the time allotted or not. This is true of federally and privately funded evaluations, especially those focused on student achievement. This emphasis on accountability is generally tied to large-scale education reform efforts as well as state and federal legislation. However, it is often unrealistic to evaluate real changes in achievement in a three-year grant. This paper provides practical, realistic solutions to the following challenges: (1) short-term evaluations with expected outcomes more appropriate for long-term projects and (2) projects with program outcomes already determined and expected (i.e., GPRA measures). Solutions to these challenges include: building a logic model prior to program implementation; synthesizing prior research and evaluations; and looking for realistic short- or mid-term outcomes related to the expected long-term outcome.
How Danish Teachers Experience the Impact of Quality Assurance and Evaluation Initiatives on their Teaching Practices
Presenter(s):
Carsten Stroembaek Pedersen,  University of Southern Denmark,  csp@sam.sdu.dk
Abstract: This paper presents results from a nationally representative survey of teachers in public compulsory schools. The work explores how Danish teachers experience the impact of different quality assurance and evaluation (QAE) initiatives on the quality of their teaching and on their autonomy. In Denmark, the public debate has mostly concentrated on the negative impact of QAE on teacher practices, such as loss of autonomy and effects like teaching-to-the-test and tunnel vision. Despite this, we still know very little about how teachers in general view the impact of QAE initiatives on their teaching practices. This survey provides an extensive data set that allows for a general description of the impact of QAE as perceived by teachers. The survey is part of an international research study of the impact of QAE processes on improving the quality of education in Denmark, England, Finland, Scotland and Sweden.
