|
Methodological Challenges of Collecting Evaluation Data From Sexual Assault Survivors: A Comparison of Three Methods
|
| Presenter(s):
|
| Rebecca Campbell,
Michigan State University,
rmc@msu.edu
|
| Adrienne Adams,
Michigan State University,
adamsadr@msu.edu
|
| Debra Patterson,
Michigan State University,
patte251@msu.edu
|
| Abstract:
This project integrated elements of responsive evaluation and participatory evaluation to compare three evaluation data collection methods for use with a hard-to-find (HTF), traumatized, vulnerable population: rape victims seeking post-assault medical forensic care. The first method involved on-site, in-person data collection immediately post-services; the second, telephone follow-up assessments one week post-services; and the third, private, self-administered surveys completed immediately post-services. There were significant differences in response rates across methods: 88% in-person, 17% telephone, and 41% self-administered. Across all phases, clients gave positive feedback about the services they received and about all three methods of data collection. Follow-up analyses suggested that non-responders did not differ from responders with respect to client characteristics, assault characteristics, or nursing care provided. These findings suggest that evaluations with HTF service clients may need to be integrated into on-site services because other methods may not yield sufficient response rates.
|
|
Audit Report Styles: Management versus Auditor Perspectives
|
| Presenter(s):
|
| Joyce Keller,
St Edward's University,
joycek@stedwards.edu
|
| Abstract:
This study tests the impact of an audit/evaluation report (summary only) written in the two styles reflected in the professional standards promulgated by the Institute of Internal Auditors (IIA), the American Evaluation Association (AEA), and the General Accounting Office (GAO). The report written in the AEA style will provide a balance of strengths and weaknesses, while the report written in the GAO/IIA style will place emphasis on findings. Both will include conclusions and recommendations. Approximately thirty managers and thirty auditors will read the report summaries and answer follow-up questions. Half of each group will receive the AEA-style report first and the GAO/IIA-style report second; the other half of each group will receive the reports in the reverse order. Follow-up questions will address the balance of the report, the clarity of the findings, the strength of the findings, the receptivity of the reader to the report, and other aspects.
|
|
Reporting Statistical Practices in Evaluation: Implications of Effect Sizes and Confidence Intervals in the Interpretation of Results
|
| Presenter(s):
|
| Melinda Hess,
University of South Florida,
mhess@tempest.coedu.usf.edu
|
| John Ferron,
University of South Florida,
ferron@tempest.coedu.usf.edu
|
| Jennie Farmer,
University of South Florida,
farmer@coedu.usf.edu
|
| Jeffrey Kromrey,
University of South Florida,
kromrey@tempest.coedu.usf.edu
|
| Aarti Bellara,
University of South Florida,
bellara@coedu.usf.edu
|
| Abstract:
As demands for accountability continue to grow in many fields (e.g., education), so does the need for high-quality evaluation. However, regardless of how well an evaluation has been conducted, failure to adequately convey all aspects of the evaluation, including methods and findings, may result in incomplete, possibly even incorrect, reporting of conclusions and implications. This research examines how studies published in evaluation journals communicate findings of traditional statistical analyses (e.g., ANOVA, chi-square) and the degree to which the reported statistics adequately and accurately support the results and associated conclusions. The study also examines how the inclusion of other statistics (e.g., effect sizes, confidence intervals) alongside typical p-values may affect results and conclusions. The findings drawn from this research are anticipated to help bridge the gap between theoretical concepts and applied practice in statistical methods and reporting, thus enhancing the utility and reliability of evaluation studies. A brief illustration of the kind of reporting at issue appears below.
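To make the distinction concrete, the following Python sketch (not drawn from the study; the data, group labels, and sample sizes are hypothetical) shows how a two-group comparison can be reported with an effect size and an approximate confidence interval alongside the customary p-value:

    # Hedged illustration only: hypothetical data, not the presenters' analysis.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    treatment = rng.normal(loc=52.0, scale=10.0, size=40)  # hypothetical scores
    control = rng.normal(loc=48.0, scale=10.0, size=40)

    # Traditional report: t statistic and p-value only.
    t_stat, p_value = stats.ttest_ind(treatment, control)

    # Cohen's d using a pooled standard deviation.
    n1, n2 = len(treatment), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                         (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    d = (treatment.mean() - control.mean()) / pooled_sd

    # Approximate 95% confidence interval for d (large-sample normal approximation).
    se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    ci_low, ci_high = d - 1.96 * se_d, d + 1.96 * se_d

    print(f"t({n1 + n2 - 2}) = {t_stat:.2f}, p = {p_value:.3f}")
    print(f"Cohen's d = {d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")

Reporting the effect size and interval in addition to the p-value conveys the magnitude and precision of the difference, not merely whether it is statistically significant.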
|
|
Pragmatic and Dialectic Mixed Method Strategies: An Empirical Comparison
|
| Presenter(s):
|
| Anne Betzner,
Professional Data Analysts Inc,
abetzner@pdastats.com
|
| Abstract:
This study empirically compares pragmatic and dialectic mixed-method strategies to assist practitioners in designing mixed-method studies and to contribute to theory. Two mixed-method evaluations were conducted to understand the impact of smoke-free regulations on participants in stop-smoking programs. The pragmatic study was conducted to obtain a broader understanding of regulation impact and included focus groups and a telephone survey. The dialectic study sought to evoke paradox in findings and generate new insights by mixing the telephone survey described above with phenomenological interviews. The methods were integrated at the sampling, analysis, and interpretation stages. Substantive findings from the single methods are compared for convergence, divergence, and uniqueness, and findings of the two mixed-method approaches are compared similarly. Findings are presented with reflections on the implementation process of the two strategies and the costs of the single methods in terms of billable researcher hours and participant time.
|