The Generalization of Computer-Based Assessment Training

Wednesday, March 17, 2010
Exhibit Hall RC Poster Area (Convention Center)
Luke E. Kelly, Andrea Taliaferro and Jennifer Krause, University of Virginia, Charlottesville, VA
Background/Purpose

Previous research has shown that computer-managed instruction (CMI) is a practical and time-efficient way to provide training across a wide range of skills. Research on CMI in physical education has shown it to be as effective as, and in some cases more effective than, traditional teacher-directed instruction for training teachers to assess motor skills. The purpose of this study was to evaluate whether learning to assess a motor skill from video clips presented via a web-based CMI program generalized to pre-service teachers' ability to accurately assess students' live, real-time performances of the same skill.

Methods

The participants were 36 volunteer pre-service undergraduate kinesiology majors. After completing the IRB procedures, all participants observed and assessed eight children performing the underhand roll in a gymnasium setting. The participants were then given access to the Motor Skill Assessment Program (MSAP) for one week and encouraged to earn the highest score possible on the competency assessment option. At the start of the program, MSAP administered a pre-assessment composed of 10 video clips. Participants were allowed to watch each clip up to three times at real speed and then had to enter their assessment. The program then provided training via tutorial and guided practice options. Once participants could consistently demonstrate 85% competency using the guided practice option, they could use the competency assessment option of MSAP, which followed the same format as the pre-assessment and could be taken at most three times. After one week of training, the participants repeated the live assessment of the eight children in the gymnasium setting.
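As an illustration only, the following minimal Python sketch expresses the competency gating described above. The 85% criterion, the three real-speed viewings per clip, and the three competency-assessment attempts come from the description above, but all names are hypothetical, the interpretation of "consistently" as the three most recent guided-practice trials is an assumption, and none of this reflects the actual MSAP implementation.

```python
# Illustrative sketch of the MSAP-style gating rules; not the actual program.
COMPETENCY_THRESHOLD = 0.85   # required guided-practice accuracy (assumed proportion form)
MAX_VIEWS_PER_CLIP = 3        # real-speed viewings allowed per clip
MAX_COMPETENCY_ATTEMPTS = 3   # competency assessment retake limit

def may_attempt_competency(guided_practice_scores, attempts_used):
    """Unlock the competency assessment once recent guided-practice scores
    consistently meet the 85% criterion and retakes remain.

    'Consistently' is assumed here to mean the last three trials."""
    recent = guided_practice_scores[-3:]
    return (
        len(recent) == 3
        and all(score >= COMPETENCY_THRESHOLD for score in recent)
        and attempts_used < MAX_COMPETENCY_ATTEMPTS
    )

# Example: three consecutive guided-practice scores at or above 85%,
# with no competency attempts used yet, unlocks the assessment.
print(may_attempt_competency([0.70, 0.90, 0.85, 0.88], attempts_used=0))  # True
```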

Analysis/Results

The participants' data were analyzed with a repeated-measures ANOVA. The findings revealed that participants made significant (F(3,32) = 220.59, p < .001) improvements on both their live and computer pre/post assessments. The mean score for the live pre-assessment was 48.47 (SD = 6.24), which improved to 79.50 (SD = 6.88) on the live post-assessment. Mean performance on the MSAP pre-assessment (55.54, SD = 9.09) and post-assessment (88.58, SD = 7.29) was higher than on the corresponding live assessments, which likely reflects the participants' control over starting the video performances on the computer.
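For readers who want to reproduce this kind of analysis, the sketch below shows one way to run a one-way repeated-measures ANOVA across the four assessment conditions using pandas and statsmodels. It is not the authors' analysis code; the file name, column names, and condition labels are hypothetical, and the data are assumed to be in long format with one row per participant per condition.

```python
# Minimal repeated-measures ANOVA sketch (not the study's actual analysis).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: columns 'participant', 'condition', 'score',
# with condition in {"live_pre", "live_post", "msap_pre", "msap_post"}.
df = pd.read_csv("msap_scores.csv")

# One within-subject factor (assessment condition) with four levels.
model = AnovaRM(df, depvar="score", subject="participant", within=["condition"])
result = model.fit()
print(result)  # reports the F statistic and p-value for the condition effect
```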

Conclusions

The present results are encouraging and support the use of computer-based training for developing the assessment skills of pre-service teachers. However, additional research using other motor skills is warranted to replicate the current findings.