Measuring the Fidelity of Implementation of Response to Intervention for Behavior (Positive Behavior Support) Across All Three Tiers of Support

Presenter(s):

Karen Childs, Florida Positive Behavior Support, childs@fmhi.usf.edu

Abstract:
This workshop will provide information on the development, validation, and use of implementation fidelity instruments available to schools for evaluating school-wide positive behavior support (otherwise known as response to intervention for behavior). The Benchmarks of Quality (BoQ) is a research-validated self-evaluation tool for assessing the fidelity of school-level implementation of Tier 1/Universal behavior support. Participants will receive information on the theoretical background, development, and validation of the BoQ for Tier 1. Participants will learn how to complete the instrument, how school, district, and state teams use it to monitor implementation fidelity, and how to use results to improve implementation. Participants will also receive information about a newly developed instrument, the Benchmarks for Advanced Tiers, which addresses the Tier 2/Supplemental and Tier 3/Intensive levels of support. This discussion will include an explanation of instrument development, results of the validation study, administration of the instrument, and use of results.

A Practical Approach to Managing Issues in Implementation Fidelity in K–12 Programs

Presenter(s):

Hendrick Ruitman, Cobblestone Applied Research & Evaluation Inc, todd.ruitman@cobblestoneeval.com

Rebecca Eddy, Cobblestone Applied Research & Evaluation Inc, rebecca.eddy@cobblestoneeval.com

Namrata Mahajan, Cobblestone Applied Research & Evaluation Inc, namrata.mahajan@cobblestoneeval.com

Abstract:
Measuring the fidelity of program implementation and its relationship to outcomes has not yet achieved prominence in the field of K–12 programs and curriculum studies. A recent literature review found that only 5 of 23 studies even considered the relationship between implementation and outcomes (O’Donnell, 2008). While multiple publications give evaluators the skills to measure implementation (see Berry & Eddy, 2008; Chen, 2005; Dane & Schneider, 1998), little practical advice exists to inform evaluators on how to manage issues related to implementation fidelity over the course of an evaluation. Specifically, we intend to discuss situations in which participants may wish to suspend or reduce the fidelity of implementation of a K–12 educational program, tips for evaluators on managing implementation issues, and the overall implications that may result from these situations.

A Study of the Relationship Between Fidelity of Program Implementation and Achievement Outcomes

Presenter(s):

Sarah Gareau, University of South Carolina, gareau@mailbox.sc.edu

Heather Bennett, University of South Carolina, bennethl@mailbox.sc.edu

Diane Monrad, University of South Carolina, dmonrad@mailbox.sc.edu

Tammiee Dickenson, University of South Carolina, tsdicken@mailbox.sc.edu

Ishikawa Tomonori, University of South Carolina, ishikawa@mailbox.sc.edu

Abstract:
As we consider the theme of the 2010 AEA conference, “Evaluation Quality,” the increased national focus on fidelity of program implementation comes to mind. A limitation of traditional evaluation has been its focus largely on program outcomes, with very little emphasis placed on the manner in which a program is implemented. The proposed research uses implementation rubrics developed for a state literacy program in South Carolina schools to investigate relationships between program components and student achievement. The implementation rubric was completed by five personnel for each school, two at the school level and three at the state level, for a total of 95 rubrics across the 19 schools. The analyses will include descriptive statistics, correlations, regression analyses, and possibly hierarchical linear modeling. Results revealing the relationship between the level of implementation (fidelity) for each of the components/items and achievement outcomes will be shared.

Using an Innovation Configuration Map to Empirically Establish Implementation Fidelity of an Intervention to Improve Achievement of Struggling Adolescent Readers

Presenter(s):

Jill Feldman, Research for Better Schools, feldman@rbs.org

Ning Rui, Research for Better Schools, rui@rbs.org

Abstract:
The complexity of effecting systemic change is well documented in the literature (Baskin, 2003; Connor, 1992; Hall et al., 2006; Rogers, 1983; Senge et al., 1994). Determining whether an approach can produce desired effects depends on a clear understanding of an innovation’s key components and the extent to which it was implemented as intended. Although teachers often claim they are using the same innovation, observations of classroom practice may suggest otherwise (George, Hall, & Uchiyama, 2000). In addition to understanding whether or not an approach works, practitioners need to know why, for whom, and under what conditions. This requires systematic measurement of fidelity of classroom implementation.
This paper highlights the use of an innovation configuration (IC) map to define the key constructs, describe the various fidelity levels, and present fidelity data related to implementation of an intensive professional development model for urban middle school teachers to support the achievement of struggling adolescent readers.