How Do Evaluators Communicate Cultural Competence? Indications of Cultural Competence Through an Examination of the American Evaluation Association's Career Center

Presenter(s):

Stephanie Evergreen, Western Michigan University, stephanie.evergreen@wmich.edu

Kelly Robertson, Western Michigan University, kelly.n.robertson@wmich.edu

Abstract:
While it is widely accepted that evaluators need to be culturally competent, how the concept is operationalized and communicated in practice is not yet well understood. The goal of this study is therefore to develop an organic definition of cultural competence within evaluation by examining how evaluators and employers express it in practice in the American Evaluation Association's (AEA) Career Center. To formulate a definition, we reviewed all resumes and job postings available on AEA's Career Center from January to October 2009. The evaluation-specific expressions of cultural competence were grouped into themes, producing a list of skills, responsibilities, and experiences that are currently used to indicate an otherwise intangible construct. An even larger lesson may lie in the professional development process through which the raters of the Career Center documents were calibrated.

A Participatory Approach to Ensure Cultural and Linguistic Competence in Survey Research: The Healthy Start Participant Survey

Presenter(s):

Sasigant So O'Neil, Mathematica Policy Research Inc, so'neil@mathematica-mpr.com

Julia Silhan Ingels, Mathematica Policy Research Inc, jingels@mathematica-mpr.com

Lisa M Trebino, Mathematica Policy Research Inc, ltrebino@mathematica-mpr.com

Margo L Rosenbach, Mathematica Policy Research Inc, mrosenbach@mathematica-mpr.com

Abstract:
As the population of the United States becomes increasingly diverse and concerns about health disparities grow, research to understand the health needs and outcomes of disadvantaged populations is necessary. Consequently, cultural and linguistic competence is a consideration not only for health service delivery but also for health services research. A participatory research approach can help address cultural and linguistic competence needs in evaluation. In this article, we use the Healthy Start participant survey as a case study and describe participation in survey design and administration at the individual, organizational, and community levels for eight program sites; we discuss these interactions in the context of key components of cultural and linguistic competence, including awareness of diversity, knowledge of cultures, understanding of the dynamics of differences, development of cross-cultural skills, and adaptation to diversity. We conclude with a discussion of the benefits and challenges of using this participatory research approach.

The Development and Validation of the Cultural Competence of Program Evaluators (CCPE) Scale

Presenter(s):

Krystall Dunaway, Old Dominion University, kdunaway@odu.edu

Bryan Porter, Old Dominion University, bporter@odu.edu

Jennifer Morrow, University of Tennessee at Knoxville, jamorrow@utk.edu

Abstract:
Although most program evaluators embrace the idea that evaluation should be grounded in cultural competence, no measure of cultural competence currently exists for the field. Therefore, the goal of this study was to develop and validate a measure of cultural competence for use with program evaluators. Items from three established measures in counseling, therapy, and healthcare were selected and adapted to better suit the field of program evaluation; these adapted items were then combined with qualitative and demographic questions to create the Cultural Competence of Program Evaluators (CCPE) instrument. The researchers pilot tested the CCPE via purposive heterogeneity sampling, surveying members of AEA and SEA online. The reliability and validity of the CCPE were assessed, as well as differences in level of cultural competence among program evaluators based on several demographic variables. Results and implications will be discussed.