|
What Do Programs Want? Evaluation TA Needs in California Tobacco Control Programs
|
| Presenter(s):
|
| Jeanette Treiber, University of California Davis, jtreiber@ucdavis.edu
|
| Abstract:
When designing program evaluation capacity-building curricula, we often assume that all areas of evaluation are equally important to address. However, program staff may have targeted needs. This paper relates the analysis of data collected through the technical assistance (TA) log kept by UC Davis' Tobacco Control Evaluation Center between November 2004 and March 2009. A large proportion of the 100 tobacco control programs (run by county health departments and competitive grantees) have contacted the evaluation center for help, many repeatedly. A quantitative analysis of TA log entries was performed using multiple variables, such as evaluation stage (evaluation planning, instrument development, data collection, analysis, reporting), type of document (methods, sample surveys, observation forms, etc.), type of organization (government or independent agency), and position of the person requesting the information (project director, evaluator, health officer, etc.). The data shed light on the evaluation capacity-building needs of local health promotion programs.
|
|
Evaluating Research Capacity Building in Health Service Organizations
|
| Presenter(s):
|
| Erika Goldt, Michael Smith Foundation for Health Research, egoldt@msfhr.org
|
| Marla Steinberg, Michael Smith Foundation for Health Research, msteinberg@msfhr.org
|
| Abstract:
In British Columbia, Canada, six Health Authorities are responsible for delivering provincial health services: five for their respective geographic areas, and the sixth for province-wide programs and specialized services. In 2005, the Michael Smith Foundation for Health Research provided each Health Authority with a multi-year grant to develop a basic platform enabling it to increase its capacity for engaging in and using research, leading to a system that is more strategic in addressing health service and policy research issues. The Health Authorities differ greatly in terms of funding, population, and pre-existing research and evaluation capacity. To evaluate this program, the evaluation framework must address the unique capacity context of each Health Authority while still enabling program-wide findings. This paper presents findings from the evaluation of the Health Authority Capacity Building Program and addresses how it is possible to build context into evaluation at the local level while still maintaining a program-wide scope.
|
|
Program Evaluation: Are You Willing, Are You Ready?
|
| Presenter(s):
|
| Janet Clinton, University of Auckland, j.clinton@auckland.ac.nz
|
| Sarah Appleton-Dyer, University of Auckland, sk.appleton@auckland.ac.nz
|
| Katheryn Cairns, University of Auckland, k.cairns@auckland.ac.nz
|
| Rebecca Broadbent, University of Auckland, r.broadbent@auckland.ac.nz
|
| Abstract:
Stakeholders' evaluation readiness, that is, their capacity and willingness to engage in program evaluation, has often been shown to correlate positively with successful program outcomes and to contribute to program adaptation. While engagement in program evaluation activities is worthwhile, the specific mechanisms of this relationship are not clear. The aim of this paper is to explore this relationship, its measurement, and its validity. The current study uses data from a long-term, community-wide health promotion program in which program adaptation and evaluation readiness were monitored at multiple program sites over three consecutive years. A measure of evaluation readiness and program adaptation was developed using survey, documentary-evidence, and interview data. Descriptive and inferential statistics were used to explore the relationships between these variables over time. The paper demonstrates a positive relationship between program adaptation and evaluation readiness over time and provides a useful technique for measuring these variables.
|
|
Training the Next Generation of Health Care Providers to Decrease Health Disparities
|
| Presenter(s):
|
| Andrea Fuhrel-Forbis, University of Michigan, andreafuhrel@yahoo.com
|
| Ann A O'Connell, The Ohio State University, aoconnell@ehe.osu.edu
|
| Petra Clark-Dufner, University of Connecticut Health Center, clarkdufner@uchc.edu
|
| K. Devra Dang, University of Connecticut, devra.dang@uconn.edu
|
| Philip Hritcko, University of Connecticut, philip.hritcko@uconn.edu
|
| E Carol Polifroni, University of Connecticut, carol.polifroni@uconn.edu
|
| Terry O'Donnell, Quinnipiac University, terry.odonnell@quinnipiac.edu
|
| Catherine Russell, Eastern Connecticut Area Health Education Center, russell@easternctahec.org
|
| Bruce Gould, University of Connecticut, gould@adp.uchc.edu
|
| Abstract:
Through interprofessional training and service-learning experiences, students in medicine, dental medicine, nursing, pharmacy, and physician assistant programs were given the option of volunteering in community clinics. Students' experiences in these clinics may help prepare them for working with underserved populations, and could increase their intentions to do so. Students completed three surveys over the course of an academic year, reporting their service-learning experiences as well as their attitudes, knowledge, self-efficacy, and intentions toward working with underserved populations and toward working in interprofessional teams. Propensity score adjustment was used to help control for self-selection bias in comparisons between students who chose to participate in service-learning experiences and those who did not. Analyses include hierarchical linear modeling (HLM) to explore individual change over time and differences between professional groups. Results of HLM analyses with and without propensity score adjustment are presented.
|