|
Engaging Youth in Program Evaluation: An Exploration of Current Practices
|
| Presenter(s):
|
| Kristi Lekies, The Ohio State University, lekies.1@osu.edu
|
| Abstract:
This study examined the extent to which leaders of a youth development organization engaged youth in evaluation, and whether gender, leadership experience, evaluation experience, skills, and attitudes predicted youth involvement. 4-H Educators in a Midwestern state (n=55) completed an online survey about their experiences. Over 70% had some experience involving youth in evaluation planning, decision-making, data collection, and discussing implications. Over 40% had involved youth in pilot testing, data entry, or interpreting results, or had youth conduct their own evaluations. However, few had done these activities five times or more. A regression analysis, F(5, 49) = 4.29, p<.01, adjusted R² = .25, indicated that attitudes toward evaluation were significant in explaining youth involvement (B=.30, p<.05).
Findings suggest the importance of raising awareness about the benefits of this evaluation approach and of providing training and support, which can encourage more favorable attitudes and help program leaders engage youth more fully.
|
|
Forecast: Applications, Innovations, and Contributions to Formative Evaluation Theory and Practice
|
| Presenter(s):
|
| Abraham Wandersman, University of South Carolina, wandersman@sc.edu
|
| Jason Katz, University of South Carolina, katzj@email.sc.edu
|
| Sarah Griffin, University of South Carolina, sgriffi@exchange.clemson.edu
|
| Robert Goodman, Indiana University, rmg@indiana.edu
|
| Dawn Wilson, University of South Carolina, wilsondk@mailbox.sc.edu
|
| Abstract:
There has been only minimal research on the effectiveness of formative evaluation methods in relation to program improvement (Brown & Kiernan, 2001). We suggest that a major contributor to this gap is a lack of standards (e.g., standardization and uniformity) in formative evaluation, which is a platform and sine qua non for testing (Kazdin, 2003). FORECAST (Formative Evaluation, Consultation, and Systems Technique; Goodman & Wandersman, 1994) is an example of a uniform model, with accompanying tools for formative evaluation, that can be subjected to piloting, refinement, rigorous testing, and ultimately dissemination as a science-based approach to program improvement. After providing examples of past projects that have applied and integrated FORECAST models and tools, we will discuss three recent projects that have made significant innovations to the original FORECAST approach: 1) an NIH-funded trial to increase physical activity in underserved communities, 2) a National Science Foundation-funded university-based science, technology, engineering, and mathematics talent expansion program, and 3) a comprehensive teen violence prevention program. We will close by suggesting next steps for the advancement of FORECAST as a science-based formative evaluation approach.
|
|
The Need for Social Theories of Power in Empowerment Evaluation
|
| Presenter(s):
|
| Thomas Archibald, Cornell University, tga4@cornell.edu
|
| Abstract:
There is no question that empowerment and other similarly aimed modes of evaluation have become major epistemological and methodological forces in the field of evaluation. Numerous papers in major evaluation journals represent debates about these terms’ definitions and explicate the practical threats and promises of applying these methods in a variety of contexts. Across this rich literature, however, in-depth considerations of social theories of power are relatively rare. What’s more, almost as soon as empowerment evaluation and related approaches began to develop and be disseminated, critiques emerged claiming that ‘empowerment’ is a fundamentally contradictory and easily co-opted construct. Hence, in an effort to contribute to the continuing evolution of empowerment evaluation’s theoretical grounding, this paper presents a critical review of the literature to ascertain what role social theories of power play (or could play) in this domain.
|
|
Where is the Power in Empowerment Evaluation (EE): Locating Power and Understanding its Role Within an EE Process
|
| Presenter(s):
|
| Divya Bheda, University of Oregon, dbheda@uoregon.edu
|
| Abstract:
Empowerment evaluation (EE) is an internal evaluation approach through which various stakeholders are empowered by participating in the evaluation process. EE advocates democratic participation, with a focus on inclusion and community ownership, geared toward organizational learning and improvement. Often, however, the EE approach does not explicitly acknowledge the differential power of the diverse stakeholder groups at the table, power that strongly affects the effectiveness and legitimacy of actual democratic participation and true inclusion. This paper highlights the limitations of an EE conducted to evaluate existing graduate advising practices and to set new advising policy and guidelines at a U.S. northwestern college. It demonstrates how the quality of the EE process is diminished, and its social justice principle devalued, when the differential power and privileges of multiple stakeholder groups are not overtly and methodologically factored into the democratic participation process of an empowerment evaluation.
|