|
The Forest and the Trees: Evaluating Complex and Heterogeneous Initiatives
|
| Presenter(s):
|
| Shimon Spiro, Tel Aviv University, shispi@post.tau.ac.il
|
| Anastasia Gorodzeisky, Juan March Institute, agorodzeisky@ceacs.march.es
|
| Abstract:
This paper discusses the difficulties encountered in evaluating large-scale, complex initiatives that are directed at one or more geographical areas and comprise many distinct and unique programs. Following a concise review of the literature, we present an approach that attempts to evaluate the forest through the trees. This approach was applied to the evaluation of Tel Aviv University's "Price-Brody Initiative in Jaffa". As part of this initiative, various departments of the University offered dozens of different programs to the population of Tel Aviv's poorest area. We developed a set of instruments that were applied to the evaluation of each program separately. These instruments were sufficiently standardized to allow for comparisons between programs, and for an evaluation of the initiative as a whole. We monitored the implementation of the evaluation and elicited feedback from stakeholders. The paper concludes with a discussion of the potential and limitations of this methodology.
|
|
Consistent Protocol, Unique Sites: Seeking Cultural Competence in a Multisite Evaluation
|
| Presenter(s):
|
| Carolyn Sullins, Western Michigan University, carolyn.sullins@wmich.edu
|
| Ladel Lewis, Western Michigan University, ladel_lewis@yahoo.com
|
| Abstract:
Evaluating one site of a federally funded, multi-site initiative to improve services for children with mental health issues and their families presents numerous challenges. The perspectives of all consumers must be heard, understood, and acted upon, but many people are understandably reluctant to participate in an evaluation concerning such sensitive issues. Further, not all the sites fit neatly into the single 'one-size-fits-all' evaluation protocol that must be used at every site. Cultural competence is crucial for: (1) breaking down the barriers to participation; (2) balancing the traditional perspectives of 'informed consent' and 'confidentiality' with those of the participants; (3) balancing the need for consistent measures in our national study with the local realities of our participants; and (4) interpreting and reporting the results. Seeking input from stakeholders at each step of the evaluation helps evaluators recognize and overcome these barriers.
|
|
Findings From the National Evaluation of Systems Transformation (ST) Grantees: Contextual Factors That Influence Grantees' Efforts to Change Community Long-term Support Service Delivery Systems
|
| Presenter(s):
|
| Yvonne Abel, Abt Associates Inc, yvonne_abel@abtassoc.com
|
| Deborah Walker, Abt Associates Inc, deborah_walker@abtassoc.com
|
| Meridith Eastman, Abt Associates Inc, meridith_eastman@abtassoc.com
|
| Abstract:
Between FY2005 and FY2006, the Centers for Medicare and Medicaid Services (CMS) awarded Systems Transformation (ST) Grants to 18 states to support states' efforts to transform their infrastructure to promote more coordinated and integrated long-term care and support systems that serve individuals with disabilities of all ages.
With more than half of the ST Grantees' five-year grant periods completed, findings from the national evaluation reveal that a common set of contextual factors (e.g., leadership support) can influence grant implementation both positively and negatively. We present how the role of these factors may be changing within the current economic environment, and how budget deficits and fiscal constraints are affecting specific strategies and outcomes of the grants. A factor of particular interest is how grantees are integrating with other grant initiatives active in their states (e.g., Money Follows the Person) to augment their ability to achieve and sustain grant goals.
|
|
The Impact of Context on Data Collection: A Comparison of Two Sites
|
| Presenter(s):
|
| Amy Orange, University of Virginia, ao2b@virginia.edu
|
| Walt Heinecke, University of Virginia, heinecke@virginia.edu
|
| Edward Berger, University of Virginia, berger@virginia.edu
|
| Chuck Krousgrill, Purdue University, krousgri@purdue.edu
|
| Borjana Mikic, Smith College, bmikic@smith.edu
|
| Dane Quinn, University of Akron, quinn@akron.edu
|
| Abstract:
This paper explores the difficulties in data collection that arise from site contexts. Two sites from a multisite study are contrasted: one site has a high level of student involvement in class, through the course blog, and in focus group participation, while the other has little student involvement in these areas. Focus groups conducted at each site had different outcomes. At the first site, multiple invitations were extended to students with a range of available times, yet no students responded. At the second site, despite short notice and only two invitations to participate, many students came to share their viewpoints with the evaluator. The varying levels of student participation have created difficulties for the evaluation, which seeks to understand how technologies used in the course affect students' learning and collaboration. We provide illustrative study results to highlight issues that may arise when context impacts data collection.
|