|
Evaluation as a Key Capacity-Building Tool: Sharing Outcomes from a 10-Year Review of New Jersey’s Faith-Based Initiatives
|
| Presenter(s):
|
| Anne Hewitt,
Seton Hall University,
hewittan@shu.edu
|
| Abstract:
Officially launched in 1998, New Jersey’s Office of Faith-Based Initiatives (NJ-OFBI) was one of the first in the nation and has since awarded more than $10 million to over 300 faith-based agencies. The NJ-OFBI systematically offers grantees technical assistance (TA) opportunities to facilitate successful agency and community outcomes.
The primary purpose of this mixed-method evaluation was to establish which types of TA activities were capable of producing desired capacity-building outcomes in faith-based organizations over time. Grant recipients and technical assistance providers responded to a survey or participated in a focus group session. State agency personnel completed individual interviews.
Based on survey responses, a continuum of TA opportunities was developed and then aligned with capacity-building activities and agency outcomes. Preliminary results indicate a correlation between the amount and scope of TA activity and positive impact on program outcomes and sustainability.
|
|
Understanding the Dimensions of Evaluation Capacity
|
| Presenter(s):
|
| Isabelle Bourgeois,
University of Ottawa,
isabelle.bourgeois@nrc-cnrc.gc.ca
|
| Abstract:
The question of evaluation capacity in service organizations has been studied empirically in only a limited fashion in recent years. The actual dimensions of evaluation capacity, or what evaluation capacity might look like in an organization, have not previously been identified systematically in the literature. This study, conducted for the author’s doctoral dissertation, sought to identify the key dimensions of evaluation capacity in Canadian federal government organizations, as well as the manageable steps required to move from low to high capacity on each dimension.
The study concluded that evaluation capacity in Canadian federal government departments and agencies can be described through six main dimensions, each broken down into sub-dimensions and four capacity levels: exemplary, intermediate, developing, and low. These dimensions are fully described in a framework of evaluation capacity based on three series of interviews with evaluation experts and other government evaluation stakeholders.
|
|
“If You Build It They Will Come”: Context, Design, Outcomes, and Lessons Learned from an Internal Evaluation Capacity Building Initiative
|
| Presenter(s):
|
| Stacey Farber,
Cincinnati Children's Hospital Medical Center,
stacey.farber@cchmc.org
|
| Britteny Howell,
Cincinnati Children's Hospital Medical Center,
britteny.howell@cchmc.org
|
| Daniel McLinden,
Cincinnati Children's Hospital Medical Center,
daniel.mclinden@cchmc.org
|
| Stacy Scherr,
Cincinnati Children's Hospital Medical Center,
stacy.scherr@cchmc.org
|
| Abstract:
The purpose of this presentation is to share the context, design, activities, outcomes, and lessons learned from an evaluation capacity building initiative within the Education and Training department at Cincinnati Children’s Hospital Medical Center. As those who highly value the Evaluation Impact standard of the Program Evaluation Standards, we seek to enhance the influence of evaluation, expand client understanding and use of evaluative thinking and techniques, and improve client perceptions of the process and utility of evaluation. This presentation will offer thoughtful strategies (including the constraints and affordances that affect the success of these strategies) for building client capacity to better perceive, understand, and use evaluation for programming and overall business results.
|
|
From Supervisors to Staff: Building Capacity for Evaluation and Data-Driven Program Planning in Afterschool Programs
|
| Presenter(s):
|
| Laurie Van Egeren,
Michigan State University,
vanegere@msu.edu
|
| Heng-Chieh Wu,
Michigan State University,
wuhengch@msu.edu
|
| Megan Platte,
Michigan State University,
plattmeg@msu.edu
|
| Nai-Kuan Yang,
Michigan State University,
yangnaik@msu.edu
|
| Chun-Lung Lee,
Michigan State University,
leechunl@msu.edu
|
| Beth Prince,
Michigan State University,
princeern@msu.edu
|
| Celeste Sturdevant Reed,
Michigan State University,
csreed@msu.edu
|
| Abstract:
In the Michigan 21st Century Community Learning Centers (21st CCLC) afterschool programs, a concerted effort has been made to increase program quality by building programs’ capacity to make informed, data-driven decisions. Organizational values communicated from the top down create shared goals and visions for a program (Gowdy & Freeman, 1993). Within the context of evaluation, this suggests that staff attitudes about evaluation use will be affected by their supervisors’ support for evaluation efforts. In this study, 21st CCLC staff (N = 302) and supervisors (N = 56) completed surveys assessing perceptions of the importance of evaluation and data-driven program planning. Multilevel analyses confirmed that supervisors who were stronger proponents of evaluation and data use had staff who were more likely to embrace data-driven program planning. Moreover, staff who reported greater exposure to evaluation data were more likely to use data-driven program planning and higher-quality planning strategies.
|