Investigating the Validity and Reliability of the School Technology Needs Assessment (STNA)

Presenter(s):

Jeni Corn,
University of North Carolina, Greensboro,
jocorn@serve.org
Abstract:
The Technology in Learning (TiL) program at the SERVE Center at UNCG developed the School Technology Needs Assessment (STNA) to assess school-level needs related to the use of technology for teaching and learning. This research study investigated the validity and reliability of the STNA using existing data from more than 2,000 respondents at schools across the United States. The study includes a thorough literature review, estimates of internal consistency reliability, and results from a principal factor analysis. Each of the STNA constructs and subconstructs showed high internal consistency reliability (alpha ranged from .807 to .967). The exploratory factor analysis of STNA response data helped verify the facets included in the STNA and assess whether different survey items belong together in one scale. The results of this study will be used to make specific recommendations informing future revisions of the instrument.


Building the Theoretical Contribution of the Worldly Science: The Case for Longitudinal Engagement in the Evaluation of Programmes

Presenter(s):

Charles Potter,
University of the Witwatersrand,
pottercs@gmail.com

Gordon Naidoo,
Open Learning Systems Education Trust,
van@mail.ngo.za
Abstract:
This paper argues for the value of longitudinal engagement in the evaluation of programmes. It provides a case study of the evaluation of the Open Learning Systems Education Trust's “English in Action” programme in South Africa from 1993 to the present. It describes the initial role of formative evaluation in highlighting the need to change the programme's implementation theory from a model focused on enhancing learner involvement and learner gains to a model of distance education and open learning focused on promoting teacher and learner gains through school, classroom, and teacher support. It then documents the role of ongoing participatory multimethod evaluation in the programme's growth to scale over a twelve-year period, both in assisting the development and sustainability of the programme and in providing evidence supporting the renewed interest in radio learning seen in recent years in developing countries and, more broadly, internationally.


Improved Evaluation Designs for Educational Technology Projects

Presenter(s):

Michael Coe,
Northwest Regional Educational Laboratory,
coem@nwrel.org
Abstract:
This presentation is a work-in-progress report on an NSF-funded project intended to develop improved models of evaluation and research for educational technologies. The premise is that the evaluation of educational technology applications is hampered by oversimplified, underspecified models of both the project designs and the project evaluations. Important aspects of the project designs are often left out of the evaluation designs, and the relationships between project components are often misrepresented in them. These issues can lead to unproductive evaluation questions and research methods. Many of these problems could be solved, at least in part, by applying program theory and basic causal modeling concepts. The paper will include the rationale for the project, brief examples of work we have done over the past few years, and preliminary findings from the current study.