Session Title: Evaluating Technical Assistance to Build Organizational Capacity: The Case of the Comprehensive Assistance Centers

Panel Session 392 to be held in Sebastian Section L3 on Thursday, Nov 12, 4:30 PM to 6:00 PM

Sponsored by the Organizational Learning and Evaluation Capacity Building TIG

Chair(s):
Sharon Horn, United States Department of Education, sharon.horn@ed.gov

Discussant(s):
Sharon Horn, United States Department of Education, sharon.horn@ed.gov
Patricia Bourexis, The Study Group Inc., studygroup@aol.com

Abstract:
The United States Department of Education established Comprehensive Assistance Centers to provide technical assistance to "build the capacity of State Education Agencies to implement No Child Left Behind." Sixteen regional comprehensive centers and five content centers work together to accomplish that goal. Center evaluators face the twin challenges of measuring the effectiveness of the technical assistance and of measuring increased organizational capacity. WestEd is evaluating two regional centers and one content center and has developed an evaluation approach that addresses these challenges, although differently for the regional and content centers.

From Parts to Whole: Putting Humpty Dumpty Together

Naida Tushnet, WestEd, ntushne@wested.org

The evaluations of two comprehensive assistance centers and one content center have evolved. As the centers began work, they sought credibility as technical assistance providers. Consequently, the first year of the evaluation focused on the quality, relevance, and usefulness of products and services. Because such indicators did not fully capture center work, the evaluators and center staff developed logic models to clarify short-, intermediate-, and long-term outcomes for activities. As the projects moved into their final years, both the technical assistance providers and the evaluators focused on the "footprints" that would be left after the projects ended. Because the footprints stemmed from the objectives, we also asked whether the objectives added up to increased organizational capacity. We moved from asking, "Were the centers doing the work right?" to "Were the centers doing the right work?" This approach evolved from breaking the work of the centers into pieces to putting the egg back together.

How Will We Know if Capacity Was Built?

Marycruz Diaz, WestEd, mdiaz@wested.org
Isabel Montemayer, WestEd, imontem@wested.org

The comprehensive centers work within particular state contexts, and their technical assistance must be relevant to state needs. As a result, center work is organized around goals and objectives. The evaluation of a center serving a single state, California, therefore focused on the extent to which the objectives were achieved and used the footprints as indicators of increased state capacity to help districts and schools in specific ways. In addition, we aligned the footprints with the functions of a state education agency (Redding, S., & Walberg, H. J., Eds., Strengthening the Statewide System of Support, Center on Innovation and Improvement) to determine whether the centers increased organizational capacity to carry out those functions. This paper describes the process of generating the footprints and how we aligned them with state functions. It also describes how we measure both objective-specific and organizational capacity.

Evaluating Capacity Building Across Multiple States

Juan Carlos Bojorquez, WestEd, jbojorq@wested.org

The Southwest Comprehensive Center (SWCC) serves five states, each of which operates in a different context. All the states in the southwest region must address the demands of No Child Left Behind during a time of economic instability. However, the specific challenges within each state differ. For example, some of the states have a long tradition of local control, while others are more centrally governed. The evaluation problem is how to sum up the contribution of the SWCC to the region as a whole, in addition to each state. The paper will discuss how mapping footprints and functions, while helping to address this problem, does not fully solve it. Consequently, the evaluation included a survey (originally developed by Redding and Walberg) that focused on the functions, asking high-level and front-line state education staff members to assess the capacity of their state in a retrospective pre- and post-test approach.

Capacity Isn't Built in a Day

Treseen McCormick, WestEd, tmccorm@wested.org
Sharon Herpin, WestEd, sherpin@wested.org

The US Department of Education designed the content centers to provide information and assistance to the regional comprehensive assistance centers (RCCs) in order to build their capacity to help states. Although some of the content centers engage only with RCCs, the Assessment and Accountability Content Center (AACC) collaborates with state education agencies as well as the 16 RCCs. In addition, the AACC has four strands of work: 1) special populations; 2) data use; 3) in-depth support to states; and 4) in-depth support to RCCs. For the evaluation, a key issue is capturing capacity-building data across multiple audiences and strands of work. The paper will discuss our approach to solving this problem.