
Session Title: The Process of Evaluating Supporting Partnerships to Assure Ready Kids (SPARK) and Ready Kids Follow-Up (RKF): Embracing and Informing Truth, Beauty, and Justice
Panel Session 707 to be held in Lone Star F on Saturday, Nov 13, 8:00 AM to 9:30 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Susan York, University of Hawaii, Manoa, yorks@hawaii.edu
Discussant(s):
Huilan Krenn, W.K. Kellogg Foundation, huilan.krenn@wkkf.org
Abstract: The purpose of this panel is to discuss the evaluation process from the Supporting Partnerships to Assure Ready Kids (SPARK) and Ready Kids Follow-Up (RKF) studies. SPARK was a nationwide school readiness initiative funded by the W.K. Kellogg Foundation, while RKF examines school success for students who benefited from SPARK programs. Panel members include the Initiative Level Evaluator (ILE) and local evaluators from Hawaii, New Mexico, and Ohio. The panel will discuss how the evaluation process evolved from individual designs using different measures for SPARK to common measures adopted across sites for RKF. Our focus is on the opportunities gained and lost in both designs, and on generating further discussion about evaluating programs with divergent designs in culturally complex communities.
Methodological Challenges to Initiative-level Evaluation
Patrick Curtis, Walter R McDonald and Associates Inc, pcurtis@wrma.com
The presenter led the Initiative Level Evaluation (ILE) Team for SPARK and is now Principal Investigator for the Ready Kids Follow-Up. The presentation provides an overview of the methodological challenges encountered in a multi-site evaluation that began in 2003 and how those challenges were addressed. The role of the ILE Team progressed from a laissez-faire relationship with the eight SPARK grantees to major responsibility for shepherding the evaluation effort in the last two years of SPARK. Originally, the ILE Team was not clear about its role in the project, but it was later challenged to provide intellectual leadership. The final evaluation report remains the only written documentation of SPARK at the initiative level.
A So-called “Improved” Evaluation Method Viewed Through an Indigenous Lens
Morris Lai, University of Hawaii, Manoa, lai@hawaii.edu
In an effort to improve on the methods used in the earlier evaluations of SPARK, the sites agreed upon a common set of data-collection instruments and methods of administration for the RKF study. While such an increase in consistency can be viewed as a methodological improvement, from an indigenous viewpoint such “improvements” could result in a lessening of evaluation quality. At the Hawai‘i site, the original SPARK evaluation honored oral interview input from participants as primary evaluation data, rather than treating it mainly as data useful for corroborating primary, often quantitative or written, data. In the “improved” evaluation approach, short responses on written instruments are now the primary sources of data. I will discuss how some aspects of the “improved” approach, when viewed through an indigenous lens, are indeed improvements, whereas other aspects could indicate a lowering of methodological quality.
The Tension Between Difference and Commonality in a Multi-site Initiative: What Are the Challenges to Quality?
Marah Moore, i2i Institute Inc, marah@i2i-institute.com
In one of several states participating in the five-year WKKF SPARK project, the New Mexico SPARK evaluation straddled the divide between the commonalities of an initiative shared across multiple sites and the unique demands of an individual site with substantial differences in context and project implementation. This tension was mirrored at our individual project level, as we funded multiple communities to participate in our statewide SPARK project. Finding the balance between common measures and learning through differences is not easy. We will speak to the following questions: What challenges to evaluation quality arose in the various approaches taken to find that balance? What did we learn about fostering unique responses while measuring collective change? Site versus initiative: whose success matters? How has this experience shaped our current thinking about evaluating multi-site efforts?
Including Evaluation From the Start: How Ohio SPARK Used Evaluation for Program Planning and Program Expansion
Peter Leahy, University of Akron, leahy@uakron.edu
This paper discusses how evaluation became integrated into the SPARK Ohio program from the initial planning grant stage. The role of the evaluation team in program planning and program development during the formative years of SPARK Ohio is considered, with illustrations of how process evaluation led to continuous program improvement. The outcome evaluation design used in Ohio SPARK is also discussed, along with kindergarten entry and longitudinal program results since 2005. Replications of the SPARK Ohio program are spreading throughout Ohio; the role evaluation plays in the replication process, and the challenges it has faced, are also discussed.
