|
Who Should Speak for Individuals with Intellectual Disabilities? Evaluating Quality of Life at Community Providers
|
| Presenter(s):
|
| Gordon Bonham,
Bonham Research,
gbonham@bonhamresearch.com
|
| Abstract:
Quality of life of people with intellectual disabilities is difficult to measure, and questions arise about both self-response and proxy response. The Ask Me! Survey collects data annually for 1,200 individuals with developmental disabilities. Peer interviewers allow three-fourths of the selected people to respond for themselves; two proxies provide information for each person who cannot respond. Self-respondents answer more questions, produce more reliable scales, and have nearly the same internal consistency as proxies. Self-respondents report lower physical well-being and higher self-determination than do proxies. Two proxies agree most on emotional well-being and least on self-determination; two day-staff proxies agree the most, while family and staff proxies agree the least. Self and proxy responses can be combined for many analyses with appropriate statistical controls. Participatory evaluation policy can be put into practice, but it does not resolve all the problems in evaluating agencies serving people with differing intellectual abilities.
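
The internal-consistency comparison described above is conventionally made with Cronbach's alpha. A minimal sketch of that computation, assuming made-up 1-5 item ratings rather than the actual Ask Me! Survey data (cronbach_alpha and the arrays below are illustrative, not the study's instrument):

import numpy as np

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_respondents, k_items) array:
    # (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings on a four-item well-being scale
self_responses = np.array([[4, 5, 4, 5], [2, 2, 3, 2], [5, 4, 5, 5],
                           [3, 3, 2, 3], [1, 2, 1, 2]])
proxy_responses = np.array([[4, 4, 5, 4], [3, 2, 2, 3], [5, 5, 4, 4],
                            [2, 3, 3, 2], [2, 1, 2, 1]])
print(cronbach_alpha(self_responses), cronbach_alpha(proxy_responses))

Comparing the two alphas is one way to put the claim that self-respondents "have nearly the same internal consistency as proxies" on a numeric footing.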
|
|
Adapting Appreciative Inquiry for Use at the Community Level in South Africa: Experiences with AI among Orphan and Vulnerable Children (OVC) Programs
|
| Presenter(s):
|
| Beverly Sebastian,
Khulisa Management Services (Pty) Ltd,
bsebastian@khulisa.com
|
| Peter Njaramba,
Khulisa Management Services (Pty) Ltd,
pnjaramba@khulisa.com
|
| Mary Pat Selvaggio,
Khulisa Management Services (Pty) Ltd,
mpselvaggio@khulisa.com
|
| Abstract:
To document the strengths of 32 PEPFAR-funded OVC programs in South Africa, Khulisa designed an evaluation using an Appreciative Inquiry (AI) approach. Per AI methodology, we randomly paired respondents from each program (staff, volunteers, beneficiaries, and community members) to interview one another using AI-framed tools. Unfortunately, this proved challenging due to participants' low literacy levels and lack of experience in interviewing. Participants, especially beneficiaries and community members, found the questions complex, and consequently their stories lacked detail.
We subsequently modified our approach to have trained fieldworkers interview respondents in focus groups using AI-framed tools. This led to better understanding of (and responses to) the AI questions, as it allowed probing and the gathering of richer stories. Participants found the emotional telling of, and listening to, stories both informative and therapeutic; service providers felt more encouraged in their work; and beneficiaries and stakeholders gained a better understanding of their OVC program.
|
|
The Peer Employment Benefits Network: Evaluating the Effectiveness of Peer-to-Peer Communication among People with Disabilities
|
| Presenter(s):
|
| Jennifer Sullivan Sulewski,
University of Massachusetts Boston,
jennifer.sulewski@umb.edu
|
| Abstract:
The Peer Employment Benefits Network is a pilot project using peer-to-peer networking to change the “word on the street” about employment options for people with disabilities. Training was provided to a select group of “peer leaders” who then conducted outreach to others with disabilities. The peer-to-peer networking design presented an evaluation challenge in that peer interactions are delicate and intrusions such as observation or formalized data collection would alter the nature of the interaction itself. We used multiple methods and data sources to evaluate the project’s effectiveness while protecting the confidentiality of the peer-to-peer interaction. Data collected included tests of peer leaders’ knowledge (to assess training effectiveness), peer leaders’ self-reports on their outreach activities (to assess the amount of peer-to-peer networking), a survey of individuals peer leaders talked to (to assess the quality and usefulness of the peer-to-peer interaction), and interviews with peer leaders and other stakeholders.
|
|
Using Readability Tests to Improve the Accuracy of Evaluation Documents Intended for Low-Literate Participants
|
| Presenter(s):
|
| Julien Kouame,
Western Michigan University,
julienkb@hotmail.com
|
| Abstract:
This project developed and evaluated a simple, understandable survey for formative evaluation and assessed the effect of readability testing on low-literate participants.
A child abuse evaluation survey that had been designed and pretested elsewhere was borrowed for this assessment. The evaluation was conducted with 65 low-literate participants (10 years of formal schooling) for whom English was a second language. Participants were randomly assigned to two groups. One group used a version of the survey whose content had been tested for readability level using the Flesch-Kincaid formula. Participants were asked to evaluate the instructions and the understandability of each item. The level of understanding was calculated for each group, and differences in understanding were determined by chi-square tests. Both documents were generally well understood; however, the document refined through readability testing showed a better understandability score. This indicates that readability testing has a positive effect on comprehension of the survey.
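
Both statistics named above are straightforward to compute. A minimal sketch, assuming a rough vowel-run syllable heuristic and hypothetical understanding counts (the study's actual data are not reproduced here; count_syllables and flesch_kincaid_grade are illustrative names):

import re
from scipy.stats import chi2_contingency  # requires scipy

def count_syllables(word):
    # Rough heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(round(flesch_kincaid_grade("The cat sat on the mat. It was happy."), 1))

# Chi-square test on hypothetical counts:
# rows = survey version (readability-tested vs. original),
# columns = item understood vs. not understood.
observed = [[28, 4], [22, 11]]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")

Production readability tools use dictionary-based syllable counts, so the heuristic here will deviate from published Flesch-Kincaid scores on some words; it only illustrates the structure of the formula.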
|