|
Does Revising the Language on a Survey Capture Non-native English Speakers' Opinions More Accurately?
|
| Presenter(s):
|
| Sally Francis,
Walden University,
sally.francis@waldenu.edu
|
| Eric Riedel,
Walden University,
eric.riedel@waldenu.edu
|
| Abstract:
The purpose of this paper is to explore the impact of revising the language on a course evaluation instrument so that the form is more easily understood by non-native English speakers. Data on parallel questions were compared between a new course evaluation survey designed for non-native English speakers and the original survey designed for native English speakers. The sample included 36 course sections using the original survey and 32 course sections using the new survey from an online bachelor's completion program offered jointly by American and Latin American universities. The data were compared using common courses and weighted so that the samples were statistically equivalent. Independent t-tests showed that students who received the new survey rated their online course instructor significantly lower than those who received the original survey. A factor analysis showed that students who took the new survey perceived their instructor along more factors than students who took the original survey.
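As an illustration only, the following Python sketch shows the kind of independent-samples comparison the abstract describes. The sample sizes mirror the 36 and 32 course sections mentioned, but the rating values are invented placeholders, not study data, and this is not the authors' analysis.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical instructor ratings on a 1-5 scale for course sections that
    # received the original survey (n=36) and the revised survey (n=32).
    # These values are placeholders, not the study's data.
    original_survey = rng.integers(3, 6, size=36).astype(float)
    revised_survey = rng.integers(2, 5, size=32).astype(float)

    # Welch's independent-samples t-test comparing mean instructor ratings
    # between the two survey versions.
    t_stat, p_value = stats.ttest_ind(original_survey, revised_survey, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

A weighted comparison and the factor analysis reported in the abstract would require the item-level data, which is not reproduced here.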
|
|
Evaluating the Effectiveness of a 'Small Learning Community' Project on Inner-City Students
|
| Presenter(s):
|
| Deirdre Sharkey,
Texas Southern University,
owensew@tsu.edu
|
| Emiel Owens,
Texas Southern University,
owensew@tsu.edu
|
| Abstract:
The purpose of the present study is to evaluate the effectiveness of a "Small Learning Community" project on a low-achieving inner-city school. The CIPP (Context, Input, Process, Product) Evaluation Model was used as an assessment tool during this study. The CIPP model is a comprehensive framework for guiding evaluations of programs, projects, personnel, products, institutions, and systems. It focuses on program evaluations, particularly those aimed at effecting long-term, sustainable improvements.
|
|
Diversity in the Evaluation Field: Expanding the Pipeline for Racial/Ethnic Minorities
|
| Presenter(s):
|
| Dustin Duncan,
Harvard University,
dduncan@hsph.harvard.edu
|
| Abstract:
Racial/ethnic diversity in the evaluation field is important. Among other benefits, increasing the racial/ethnic diversity of people entering the field of evaluation is a strategy for increasing cultural competency among evaluators in general. At present, however, too few racial/ethnic minorities work in the evaluation field. This paper will discuss strategies for expanding the pipeline of racial/ethnic minorities into the evaluation field, including creating evaluation-training programs specifically for racial/ethnic minority students and working with Historically Black Colleges & Universities. The paper is written from the perspective of a graduate student currently participating in the American Evaluation Association/Duquesne University Graduate Education Diversity Internship Program; he draws on his experiences through this internship as well as other evaluation experiences.
|
|
The Case Against Cultural Competence
|
| Presenter(s):
|
| Gregory Diggs,
University of Colorado, Denver,
shupediggs@netzero.com
|
| Abstract:
Cultural Competence: “A systematic, responsive inquiry that is actively cognizant, understanding, and appreciative of the cultural context in which the evaluation takes place; that frames and articulates the epistemology of the evaluative endeavor; that employs culturally and contextually appropriate methodology; and that uses stakeholder generated, interpretive means to arrive at the results and further use of the findings.”
“Competence” has been operationalized as a goal or developmental process, instead of as a set of skills, knowledge, and abilities.
Dr. Diggs argues that the term “cultural competence” as used by AEA is misleading and misguided, poorly representing the basic concepts of culture and competence. Advocates of cultural competence often use the term as if it were interchangeable with important concepts like cultural awareness and cultural responsiveness.
How will the merit or worth of an evaluator's alleged cultural competence be certified? How will it be shown that the methods used are “culturally appropriate”? Who among us can do so with validity?
|