Evaluation of Research Experiences for Undergraduates: Lessons Learned

Presenter(s):

Teresa Chavez, University of South Florida, chavez@coedu.usf.edu
Thomas Lang, University of South Florida, tlang@tempest.coedu.usf.edu
Melinda Hess, University of South Florida, mhess@tempest.coedu.usf.edu

Abstract:
An external evaluation was conducted on multiple Research Experiences for Undergraduates (REU) programs sponsored by the National Science Foundation at a large research university. Though similar in scope, the evaluations of the various REU initiatives each included unique elements tailored to the goals of the individual programs. Evaluation methods were both qualitative and quantitative, providing formative and summative feedback to guide and refine each program across its three-year implementation and to inform leadership about the degree to which the programs were meeting stated goals. Triangulating information from students and faculty over time provided a basis for guiding ongoing programs and for informing the development and implementation of new ones. This study presents the process, lessons learned, and a framework for conducting evaluations of REU initiatives and similar programs, where success is determined by the program process, participant satisfaction, and the impact on participants' research and career goals.

Developing Future Research Leaders: Challenges for Evaluation of a Collaborative University Program

Presenter(s):

Zita Unger, Evaluation Solutions, zitau@evaluationsolutions.com

Abstract:
The Group of Eight (Go8) is a coalition of Australia's leading research-intensive universities. In this highly competitive environment, the Go8 universities have collaborated to develop and implement the Future Research Leaders Program, which provides best-practice professional development in financial and resource management to current and emerging researchers.
Evaluation of the program's overall impact on researcher capabilities and institutional performance used a mixed-method approach involving (i) evaluation of nine training modules piloted at each contributing university, trialled at three universities, and implemented across all eight universities; (ii) establishment of key performance indicator measures collected pre- and post-training delivery for 1,000 researchers; and (iii) eight institutional case studies on researcher productivity and institutional performance.
A project of this size, complexity, sensitivity, and importance poses many evaluation challenges. The paper discusses these challenges and how the evaluation was shaped to meet them.