Session Title: Putting Context in Context With Examples in Strategic Planning and Measuring Fidelity
Multipaper Session 118 to be held in Schaefer Room on Wednesday, November 7, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Cheri Levenson,  Cherna Consulting,  c.levenson@cox.net
Putting Context in Context
Presenter(s):
A Rae Clementz,  University of Illinois at Urbana-Champaign,  clementz@uiuc.edu
Abstract: In its broadest sense, context can include any detail that helps to make comprehensible the meanings of words, actions, events, and even objects we experience. In evaluation, these often include demographic or economic details, social and political dynamics, cultural norms, and even structural and geographical features. Sensitivity to nuances in context often differentiates truly insightful evaluations from those that are not. Despite its importance in our work, context itself is a fluid, relativistic, and confounded construct. Context is influenced by what is being looked at, who is looking, and the purpose for looking. In an attempt to put the notion of context, at least as it is used in evaluation, into context, this paper takes up questions such as:
• What matters about context?
• In what ways does context matter?
• How much context is too much or not enough?
• What are valid ways of representing context?
Measuring Fidelity of Implementation of a Coach-based Professional Development Model
Presenter(s):
Tara Pearsall,  University of South Carolina,  tmcpearsall@yahoo.com
Ching Ching Yap,  University of South Carolina,  ccyap@gwm.sc.edu
Ashlee Lewis,  University of South Carolina,  ashwee301@hotmail.com
Abstract: Assessing fidelity of implementation through systematic monitoring helps program developers and evaluators understand how the quality of program implementation can affect the outcomes of professional development (PD) programs. Examining implementation quality allows program developers to determine how changes in implementation may affect a program's success or failure. To assess fidelity of implementation of a coach-based, 12-week PD program, evaluators systematically documented training sessions and developed measures to quantify the fidelity of those sessions. These measures gave evaluators detailed information on weekly fidelity, through examination of an activity ratio, as well as on overall fidelity, through examination of a fidelity index. With the information gained from the fidelity measures, program developers, in the field of education and beyond, can evaluate the success of their programs by determining whether outcomes reflect the PD model, the fidelity of implementation, or both.